Chapter 5. OS/JVM certifications

This release is supported for use with the following operating system and Java Development Kit (JDK) versions:

| Operating System | Chipset Architecture | Java Virtual Machine |
|---|---|---|
| Red Hat Enterprise Linux 9 | x86_64 | Red Hat OpenJDK 11, Red Hat OpenJDK 17, Oracle JDK 11, Oracle JDK 17 |
| Red Hat Enterprise Linux 8 | x86_64 | Red Hat OpenJDK 11, Red Hat OpenJDK 17, Oracle JDK 11, Oracle JDK 17 |
| Microsoft Windows 2019 Server | x86_64 | Red Hat OpenJDK 11, Red Hat OpenJDK 17, Oracle JDK 11, Oracle JDK 17 |

Note: Red Hat Enterprise Linux 7 and Microsoft Windows 2016 Server are not supported.
Chapter 2. Using OpenID Connect to secure applications and services

This section describes how you can secure applications and services with OpenID Connect using Red Hat build of Keycloak.

2.1. Available Endpoints

As a fully-compliant OpenID Connect Provider implementation, Red Hat build of Keycloak exposes a set of endpoints that applications and services can use to authenticate and authorize their users. This section describes some of the key endpoints that your application and service should use when interacting with Red Hat build of Keycloak.

2.1.1. Endpoints

The most important endpoint to understand is the well-known configuration endpoint. It lists endpoints and other configuration options relevant to the OpenID Connect implementation in Red Hat build of Keycloak. The endpoint is:

/realms/{realm-name}/.well-known/openid-configuration

To obtain the full URL, add the base URL for Red Hat build of Keycloak and replace {realm-name} with the name of your realm. For example:

http://localhost:8080/realms/master/.well-known/openid-configuration

Some RP libraries retrieve all required endpoints from this endpoint, but for others you might need to list the endpoints individually.

2.1.1.1. Authorization endpoint

/realms/{realm-name}/protocol/openid-connect/auth

The authorization endpoint performs authentication of the end-user. This authentication is done by redirecting the user agent to this endpoint. For more details, see the Authorization Endpoint section in the OpenID Connect specification.

2.1.1.2. Token endpoint

/realms/{realm-name}/protocol/openid-connect/token

The token endpoint is used to obtain tokens. Tokens can be obtained either by exchanging an authorization code or by supplying credentials directly, depending on which flow is used. The token endpoint is also used to obtain new access tokens when they expire. For more details, see the Token Endpoint section in the OpenID Connect specification.

2.1.1.3. Userinfo endpoint

/realms/{realm-name}/protocol/openid-connect/userinfo

The userinfo endpoint returns standard claims about the authenticated user; this endpoint is protected by a bearer token. For more details, see the Userinfo Endpoint section in the OpenID Connect specification.

2.1.1.4. Logout endpoint

/realms/{realm-name}/protocol/openid-connect/logout

The logout endpoint logs out the authenticated user. The user agent can be redirected to the endpoint, which causes the active user session to be logged out. The user agent is then redirected back to the application. The endpoint can also be invoked directly by the application. To invoke this endpoint directly, the refresh token needs to be included, as well as the credentials required to authenticate the client.

2.1.1.5. Certificate endpoint

/realms/{realm-name}/protocol/openid-connect/certs

The certificate endpoint returns the public keys enabled by the realm, encoded as a JSON Web Key (JWK). Depending on the realm settings, one or more keys can be enabled for verifying tokens. For more information, see the Server Administration Guide and the JSON Web Key specification.

2.1.1.6. Introspection endpoint

/realms/{realm-name}/protocol/openid-connect/token/introspect

The introspection endpoint is used to retrieve the active state of a token. In other words, you can use it to validate an access or refresh token. This endpoint can only be invoked by confidential clients. For more details on how to invoke this endpoint, see the OAuth 2.0 Token Introspection specification.

2.1.1.7. Dynamic Client Registration endpoint

/realms/{realm-name}/clients-registrations/openid-connect

The dynamic client registration endpoint is used to dynamically register clients. For more details, see the Client Registration chapter and the OpenID Connect Dynamic Client Registration specification.

2.1.1.8. Token Revocation endpoint

/realms/{realm-name}/protocol/openid-connect/revoke

The token revocation endpoint is used to revoke tokens. Both refresh tokens and access tokens are supported by this endpoint. When revoking a refresh token, the user consent for the corresponding client is also revoked. For more details on how to invoke this endpoint, see the OAuth 2.0 Token Revocation specification.
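For example, a confidential client could revoke a refresh token with a POST request per the OAuth 2.0 Token Revocation specification (RFC 7009). A minimal curl sketch; the client ID, secret, and token values are placeholders:

curl \
  -d "client_id=myclient" \
  -d "client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578" \
  -d "token=<refresh-token>" \
  "http://localhost:8080/realms/master/protocol/openid-connect/revoke"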
2.1.1.9. Device Authorization endpoint

/realms/{realm-name}/protocol/openid-connect/auth/device

The device authorization endpoint is used to obtain a device code and a user code. It can be invoked by confidential or public clients. For more details on how to invoke this endpoint, see the OAuth 2.0 Device Authorization Grant specification.

2.1.1.10. Backchannel Authentication endpoint

/realms/{realm-name}/protocol/openid-connect/ext/ciba/auth

The backchannel authentication endpoint is used to obtain an auth_req_id that identifies the authentication request made by the client. It can only be invoked by confidential clients. For more details on how to invoke this endpoint, see the OpenID Connect Client Initiated Backchannel Authentication Flow specification. Also refer to other parts of the Red Hat build of Keycloak documentation, such as the Client Initiated Backchannel Authentication Grant section of this guide and the Client Initiated Backchannel Authentication Grant section of the Server Administration Guide.

2.2. Supported Grant Types

This section describes the different grant types available to relying parties.

2.2.1. Authorization code

The Authorization Code flow redirects the user agent to Red Hat build of Keycloak. Once the user has successfully authenticated with Red Hat build of Keycloak, an Authorization Code is created and the user agent is redirected back to the application. The application then uses the authorization code, along with its credentials, to obtain an Access Token, Refresh Token, and ID Token from Red Hat build of Keycloak. The flow is targeted towards web applications, but is also recommended for native applications, including mobile applications, where it is possible to embed a user agent. For more details, refer to the Authorization Code Flow in the OpenID Connect specification.
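A rough sketch of the two steps involved; the client, secret, redirect URI, and code values are placeholders:

# Step 1: redirect the user agent to the authorization endpoint
http://localhost:8080/realms/master/protocol/openid-connect/auth?client_id=myclient&redirect_uri=https%3A%2F%2Fexample.com%2Fcallback&response_type=code&scope=openid

# Step 2: exchange the returned authorization code for tokens
curl \
  -d "client_id=myclient" \
  -d "client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578" \
  -d "code=<authorization-code>" \
  -d "redirect_uri=https://example.com/callback" \
  -d "grant_type=authorization_code" \
  "http://localhost:8080/realms/master/protocol/openid-connect/token"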
2.2.2. Implicit

The Implicit flow works similarly to the Authorization Code flow, but instead of returning an Authorization Code, the Access Token and ID Token are returned. This approach reduces the need for the extra invocation to exchange the Authorization Code for an Access Token. However, it does not include a Refresh Token. This results in the need to permit Access Tokens with a long expiration; however, that approach is not practical because it is very hard to invalidate these tokens. Alternatively, you can require a new redirect to obtain a new Access Token once the initial Access Token has expired. The Implicit flow is useful if the application only wants to authenticate the user and deals with logout itself. You can instead use a Hybrid flow, where both the Access Token and an Authorization Code are returned. One thing to note is that both the Implicit flow and Hybrid flow have potential security risks, as the Access Token may be leaked through web server logs and browser history. You can somewhat mitigate this problem by using short expiration for Access Tokens. For more details, see the Implicit Flow in the OpenID Connect specification. Per the current OAuth 2.0 Security Best Current Practice, this flow should not be used. This flow is removed from the future OAuth 2.1 specification.

2.2.3. Resource Owner Password Credentials

Resource Owner Password Credentials, referred to as Direct Grant in Red Hat build of Keycloak, allows exchanging user credentials for tokens. Per the current OAuth 2.0 Security Best Current Practice, this flow should not be used; prefer alternative methods such as Section 2.2.5, "Device Authorization Grant" or Section 2.2.1, "Authorization code".

The limitations of using this flow include:

- User credentials are exposed to the application
- Applications need login pages
- The application needs to be aware of the authentication scheme
- Changes to the authentication flow require changes to the application
- No support for identity brokering or social login
- Flows are not supported (user self-registration, required actions, and so on)

Security concerns with this flow include:

- Involving more than Red Hat build of Keycloak in the handling of credentials
- Increased vulnerable surface area where credential leaks can happen
- Creating an ecosystem where users trust another application for entering their credentials, and not Red Hat build of Keycloak

For a client to be permitted to use the Resource Owner Password Credentials grant, the client has to have the Direct Access Grants Enabled option enabled. This flow is not included in OpenID Connect, but is a part of the OAuth 2.0 specification. It is removed from the future OAuth 2.1 specification. For more details, see the Resource Owner Password Credentials Grant chapter in the OAuth 2.0 specification.

2.2.3.1. Example using CURL

The following example shows how to obtain an access token for a user in the realm master with username user and password password. The example uses the confidential client myclient:

curl \
  -d "client_id=myclient" \
  -d "client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578" \
  -d "username=user" \
  -d "password=password" \
  -d "grant_type=password" \
  "http://localhost:8080/realms/master/protocol/openid-connect/token"

2.2.4. Client credentials

Client Credentials are used when clients (applications and services) want to obtain access on behalf of themselves rather than on behalf of a user. For example, these credentials can be useful for background services that apply changes to the system in general rather than for a specific user. Red Hat build of Keycloak provides support for clients to authenticate either with a secret or with public/private keys. This flow is not included in OpenID Connect, but is a part of the OAuth 2.0 specification. For more details, see the Client Credentials Grant chapter in the OAuth 2.0 specification.

2.2.5. Device Authorization Grant

Device Authorization Grant is used by clients running on internet-connected devices that have limited input capabilities or lack a suitable browser. The flow proceeds as follows (see the curl sketch after this list):

1. The application requests that Red Hat build of Keycloak provide a device code and a user code.
2. Red Hat build of Keycloak creates a device code and a user code.
3. Red Hat build of Keycloak returns a response including the device code and the user code to the application.
4. The application provides the user with the user code and the verification URI.
5. The user accesses the verification URI to be authenticated by using another browser.
6. The application repeatedly polls Red Hat build of Keycloak until Red Hat build of Keycloak completes the user authorization.
7. If user authentication is complete, the application obtains the device code.
8. The application uses the device code, along with its credentials, to obtain an Access Token, Refresh Token, and ID Token from Red Hat build of Keycloak.

For more details, see the OAuth 2.0 Device Authorization Grant specification.
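A rough curl sketch of the two requests the application makes (steps 1 and 6); the client ID is a placeholder, a confidential client would also send its secret, and the grant_type URN is the one defined by the Device Authorization Grant specification:

# Step 1: request a device code and a user code
curl \
  -d "client_id=myclient" \
  "http://localhost:8080/realms/master/protocol/openid-connect/auth/device"

# Step 6: poll the token endpoint with the device code
curl \
  -d "client_id=myclient" \
  -d "device_code=<device-code-from-step-1>" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:device_code" \
  "http://localhost:8080/realms/master/protocol/openid-connect/token"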
2.2.6. Client Initiated Backchannel Authentication Grant

Client Initiated Backchannel Authentication Grant is used by clients that want to initiate the authentication flow by communicating with the OpenID Provider directly, without redirecting through the user's browser as in OAuth 2.0's Authorization Code grant. The client requests from Red Hat build of Keycloak an auth_req_id that identifies the authentication request made by the client. Red Hat build of Keycloak creates the auth_req_id. After receiving this auth_req_id, the client repeatedly polls Red Hat build of Keycloak to obtain an Access Token, Refresh Token, and ID Token in return for the auth_req_id, until the user is authenticated. If the client uses ping mode, it does not need to repeatedly poll the token endpoint; instead, it can wait for the notification sent by Red Hat build of Keycloak to the specified Client Notification Endpoint. The Client Notification Endpoint can be configured in the Red Hat build of Keycloak Admin Console. The details of the contract for the Client Notification Endpoint are described in the CIBA specification. For more details, see the OpenID Connect Client Initiated Backchannel Authentication Flow specification. Also refer to other parts of the Red Hat build of Keycloak documentation, such as the Backchannel Authentication Endpoint section of this guide and the Client Initiated Backchannel Authentication Grant section of the Server Administration Guide. For the details about FAPI CIBA compliance, see the FAPI section of this guide.

2.3. Red Hat build of Keycloak Java adapters

2.3.1. Red Hat JBoss Enterprise Application Platform

Red Hat build of Keycloak does not include any adapters for Red Hat JBoss Enterprise Application Platform. However, there are alternatives for existing applications deployed to Red Hat JBoss Enterprise Application Platform.

2.3.1.1. 8.0 Beta

Red Hat JBoss Enterprise Application Platform 8.0 Beta provides a native OpenID Connect client through the Elytron OIDC client subsystem. For more information, see the Red Hat JBoss Enterprise Application Platform documentation.

2.3.1.2. 6.4 and 7.x

Existing applications deployed to Red Hat JBoss Enterprise Application Platform 6.4 and 7.x can leverage adapters from Red Hat Single Sign-On 7.6 in combination with the Red Hat build of Keycloak server. For more information, see the Red Hat Single Sign-On documentation.

2.3.2. Spring Boot adapter

Red Hat build of Keycloak does not include any adapters for Spring Boot. However, there are alternatives for existing applications built with Spring Boot. Spring Security provides comprehensive support for OAuth 2 and OpenID Connect. For more information, see the Spring Security documentation. Alternatively, for Spring Boot 2.x the Spring Boot adapter from Red Hat Single Sign-On 7.6 can be used in combination with the Red Hat build of Keycloak server. For more information, see the Red Hat Single Sign-On documentation.

2.4. Red Hat build of Keycloak JavaScript adapter

Red Hat build of Keycloak comes with a client-side JavaScript library called keycloak-js that can be used to secure web applications. The adapter also comes with built-in support for Cordova applications.

2.4.1. Installation

The adapter is distributed in several ways, but we recommend that you install the keycloak-js package from NPM:

npm install keycloak-js

Alternatively, the library can be retrieved directly from the Red Hat build of Keycloak server at /js/keycloak.js and is also distributed as a ZIP archive.
However, we consider including the adapter directly from the Keycloak server to be deprecated, and this functionality might be removed in the future.

2.4.2. Red Hat build of Keycloak server configuration

One important thing to consider about using client-side applications is that the client has to be a public client, as there is no secure way to store client credentials in a client-side application. This consideration makes it very important to make sure the redirect URIs you have configured for the client are correct and as specific as possible. To use the adapter, create a client for your application in the Red Hat build of Keycloak Admin Console. Make the client public by toggling Client authentication to Off on the Capability config page. You also need to configure Valid Redirect URIs and Web Origins. Be as specific as possible, as failing to do so may result in a security vulnerability.

2.4.3. Using the adapter

The following example shows how to initialize the adapter. Make sure that you replace the options passed to the Keycloak constructor with those of the client you have configured.

import Keycloak from 'keycloak-js';

const keycloak = new Keycloak({
    url: 'http://keycloak-server${kc_base_path}',
    realm: 'myrealm',
    clientId: 'myapp'
});

try {
    const authenticated = await keycloak.init();
    console.log(`User is ${authenticated ? 'authenticated' : 'not authenticated'}`);
} catch (error) {
    console.error('Failed to initialize adapter:', error);
}

To authenticate, you call the login function. Two options exist to make the adapter automatically authenticate. You can pass login-required or check-sso to the init() function. login-required authenticates the client if the user is logged in to Red Hat build of Keycloak or displays the login page if the user is not logged in. check-sso only authenticates the client if the user is already logged in. If the user is not logged in, the browser is redirected back to the application and remains unauthenticated.

You can configure a silent check-sso option. With this feature enabled, your browser will not perform a full redirect to the Red Hat build of Keycloak server and back to your application; instead, this action is performed in a hidden iframe. Therefore, your application resources are only loaded and parsed once by the browser, namely when the application is initialized, and not again after the redirect back from Red Hat build of Keycloak to your application. This approach is particularly useful in the case of SPAs (Single Page Applications). To enable silent check-sso, you provide a silentCheckSsoRedirectUri attribute in the init method. Make sure this URI is a valid endpoint in the application; it must be configured as a valid redirect for the client in the Red Hat build of Keycloak Admin Console:

keycloak.init({
    onLoad: 'check-sso',
    silentCheckSsoRedirectUri: `${location.origin}/silent-check-sso.html`
});

The page at the silent check-sso redirect URI is loaded in the iframe after successfully checking your authentication state and retrieving the tokens from the Red Hat build of Keycloak server. It has no other task than sending the received tokens to the main application and should only look like this:

<!doctype html>
<html>
<body>
    <script>
        parent.postMessage(location.href, location.origin);
    </script>
</body>
</html>

Remember that this page must be served by your application at the specified location in silentCheckSsoRedirectUri and is not part of the adapter.
Warning: Silent check-sso functionality is limited in some modern browsers. Please see the Modern Browsers with Tracking Protection section.

To enable login-required, set onLoad to login-required and pass it to the init method:

keycloak.init({ onLoad: 'login-required' });

After the user is authenticated, the application can make requests to RESTful services secured by Red Hat build of Keycloak by including the bearer token in the Authorization header. For example:

async function fetchUsers() {
    const response = await fetch('/api/users', {
        headers: {
            accept: 'application/json',
            authorization: `Bearer ${keycloak.token}`
        }
    });
    return response.json();
}

One thing to keep in mind is that the access token has a short expiration by default, so you may need to refresh the access token prior to sending the request. You refresh this token by calling the updateToken() method. This method returns a Promise, which makes it easy to invoke the service only if the token was successfully refreshed and to display an error to the user if it was not. For example:

try {
    await keycloak.updateToken(30);
} catch (error) {
    console.error('Failed to refresh token:', error);
}
const users = await fetchUsers();

Note: Both the access and refresh token are stored in memory and are not persisted in any kind of storage. Therefore, these tokens should never be persisted, to prevent hijacking attacks.

2.4.4. Session Status iframe

By default, the adapter creates a hidden iframe that is used to detect if a Single Sign-Out has occurred. This iframe does not require any network traffic. Instead, the status is retrieved by looking at a special status cookie. This feature can be disabled by setting checkLoginIframe: false in the options passed to the init() method. You should not rely on looking at this cookie directly. Its format can change, and it is also associated with the URL of the Red Hat build of Keycloak server, not your application.

Warning: Session Status iframe functionality is limited in some modern browsers. Please see the Modern Browsers with Tracking Protection section.

2.4.5. Implicit and hybrid flow

By default, the adapter uses the Authorization Code flow. With this flow, the Red Hat build of Keycloak server returns an authorization code, not an authentication token, to the application. The JavaScript adapter exchanges the code for an access token and a refresh token after the browser is redirected back to the application. Red Hat build of Keycloak also supports the Implicit flow, where an access token is sent immediately after successful authentication with Red Hat build of Keycloak. This flow may have better performance than the standard flow because no additional request exists to exchange the code for tokens, but it has implications when the access token expires. However, sending the access token in the URL fragment can be a security vulnerability. For example, the token could be leaked through web server logs or browser history. To enable the implicit flow, you enable the Implicit Flow Enabled flag for the client in the Red Hat build of Keycloak Admin Console. You also pass the parameter flow with the value implicit to the init method:

keycloak.init({ flow: 'implicit' })

Note that only an access token is provided and no refresh token exists. This situation means that once the access token has expired, the application has to redirect to Red Hat build of Keycloak again to obtain a new access token.
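Because no refresh token exists in the implicit flow, one approach is to redirect to the login page from the onTokenExpired callback (described in the API reference below). A minimal sketch:

keycloak.onTokenExpired = () => keycloak.login();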
Red Hat build of Keycloak also supports the Hybrid flow. This flow requires the client to have both the Standard Flow and Implicit Flow enabled in the Admin Console. The Red Hat build of Keycloak server then sends both the code and tokens to your application. The access token can be used immediately, while the code can be exchanged for access and refresh tokens. Similar to the implicit flow, the hybrid flow is good for performance because the access token is available immediately. But the token is still sent in the URL, and the security vulnerability mentioned earlier may still apply. One advantage of the Hybrid flow is that the refresh token is made available to the application. For the Hybrid flow, you need to pass the parameter flow with value hybrid to the init method:

keycloak.init({ flow: 'hybrid' });

2.4.6. Hybrid Apps with Cordova

Red Hat build of Keycloak supports hybrid mobile apps developed with Apache Cordova. The adapter has two modes for this: cordova and cordova-native.

The default is cordova, which the adapter automatically selects if no adapter type has been explicitly configured and window.cordova is present. When logging in, it opens an InApp Browser that lets the user interact with Red Hat build of Keycloak and afterwards returns to the app by redirecting to http://localhost. Because of this behavior, you whitelist this URL as a valid redirect-uri in the client configuration section of the Admin Console. While this mode is easy to set up, it also has some disadvantages:

- The InApp Browser is a browser embedded in the app and is not the phone's default browser. Therefore it will have different settings, and stored credentials will not be available.
- The InApp Browser might also be slower, especially when rendering more complex themes.
- There are security concerns to consider before using this mode. Because the app has full control of the browser rendering the login page, it is possible for the app to gain access to the credentials of the user, so do not allow its use in apps you do not trust.

The alternative mode is cordova-native, which takes a different approach. It opens the login page using the system's browser. After the user has authenticated, the browser redirects back into the application using a special URL. From there, the Red Hat build of Keycloak adapter can finish the login by reading the code or token from the URL. You can activate the native mode by passing the adapter type cordova-native to the init() method:

keycloak.init({ adapter: 'cordova-native' });

This adapter requires two additional plugins:

- cordova-plugin-browsertab: allows the app to open webpages in the system's browser
- cordova-plugin-deeplinks: allows the browser to redirect back to your app via special URLs

The technical details for linking to an app differ on each platform, and special setup is needed. Please refer to the Android and iOS sections of the deeplinks plugin documentation for further instructions. Different kinds of links exist for opening apps:

- custom schemes, such as myapp://login or android-app://com.example.myapp/https/example.com/login
- Universal Links (iOS) / Deep Links (Android)

While the former are easier to set up and tend to work more reliably, the latter offer extra security because they are unique and only the owner of a domain can register them. Custom URLs are deprecated on iOS. For best reliability, we recommend that you use universal links combined with a fallback site that uses a custom-URL link.
Furthermore, we recommend the following steps to improve compatibility with the adapter:

- Universal Links on iOS seem to work more reliably with response-mode set to query.
- To prevent Android from opening a new instance of your app on redirect, add the following snippet to config.xml:

<preference name="AndroidLaunchMode" value="singleTask" />

2.4.7. Custom Adapters

In some situations, you may need to run the adapter in environments that are not supported by default, such as Capacitor. To use the JavaScript client in these environments, you can pass a custom adapter. For example, a third-party library could provide such an adapter to make it possible to reliably run the adapter:

import Keycloak from 'keycloak-js';
import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter';

const keycloak = new Keycloak();
keycloak.init({
    adapter: KeycloakCapacitorAdapter,
});

This specific package does not exist, but it gives a pretty good example of how such an adapter could be passed into the client. It is also possible to make your own adapter; to do so, you will have to implement the methods described in the KeycloakAdapter interface. For example, the following TypeScript code ensures that all the methods are properly implemented:

import Keycloak, { KeycloakAdapter } from 'keycloak-js';

// Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present.
const MyCustomAdapter: KeycloakAdapter = {
    login(options) {
        // Write your own implementation here.
    }

    // The other methods go here...
};

const keycloak = new Keycloak();
keycloak.init({
    adapter: MyCustomAdapter,
});

Naturally you can also do this without TypeScript by omitting the type information, but ensuring the interface is implemented properly will then be left entirely up to you.

2.4.8. Modern Browsers with Tracking Protection

In the latest versions of some browsers, various cookie policies are applied to prevent tracking of users by third parties, such as SameSite in Chrome or completely blocked third-party cookies. Those policies are likely to become more restrictive and adopted by other browsers over time. Eventually, cookies in third-party contexts may become completely unsupported and blocked by browsers. As a result, the affected adapter features might ultimately be deprecated. The adapter relies on third-party cookies for the Session Status iframe, silent check-sso, and partially also for regular (non-silent) check-sso. Those features have limited functionality or are completely disabled based on how restrictive the browser is regarding cookies. The adapter tries to detect this setting and reacts accordingly.

2.4.8.1. Browsers with "SameSite=Lax by Default" Policy

All features are supported if an SSL/TLS connection is configured on the Red Hat build of Keycloak side as well as on the application side. For example, Chrome is affected starting with version 84.

2.4.8.2. Browsers with Blocked Third-Party Cookies

The Session Status iframe is not supported and is automatically disabled if such browser behavior is detected by the adapter. This means the adapter cannot use a session cookie for Single Sign-Out detection and must rely purely on tokens. As a result, when a user logs out in another window, the application using the adapter will not be logged out until the application tries to refresh the Access Token. Therefore, consider setting the Access Token Lifespan to a relatively short time, so that the logout is detected as soon as possible. For more details, see Session and Token Timeouts.
Silent check-sso is not supported and falls back to regular (non-silent) check-sso by default. This behavior can be changed by setting silentCheckSsoFallback: false in the options passed to the init method. In this case, check-sso will be completely disabled if restrictive browser behavior is detected. Regular check-sso is affected as well. Since the Session Status iframe is unsupported, an additional redirect to Red Hat build of Keycloak has to be made when the adapter is initialized to check the user's login status. This check is different from the standard behavior, where the iframe is used to tell whether the user is logged in and the redirect is performed only when the user is logged out. Safari, for example, is affected starting with version 13.1.

2.4.9. API Reference

2.4.9.1. Constructor

new Keycloak();
new Keycloak('http://localhost/keycloak.json');
new Keycloak({ url: 'http://localhost', realm: 'myrealm', clientId: 'myApp' });

2.4.9.2. Properties

- authenticated - Is true if the user is authenticated, false otherwise.
- token - The base64 encoded token that can be sent in the Authorization header in requests to services.
- tokenParsed - The parsed token as a JavaScript object.
- subject - The user id.
- idToken - The base64 encoded ID token.
- idTokenParsed - The parsed ID token as a JavaScript object.
- realmAccess - The realm roles associated with the token.
- resourceAccess - The resource roles associated with the token.
- refreshToken - The base64 encoded refresh token that can be used to retrieve a new token.
- refreshTokenParsed - The parsed refresh token as a JavaScript object.
- timeSkew - The estimated time difference between the browser time and the Red Hat build of Keycloak server, in seconds. This value is just an estimation, but it is accurate enough to determine whether a token is expired.
- responseMode - The response mode passed in init (default value is fragment).
- flow - The flow passed in init.
- adapter - Allows you to override the way that redirects and other browser-related functions will be handled by the library. Available options:
  - "default" - the library uses the browser API for redirects (this is the default)
  - "cordova" - the library will try to use the InAppBrowser Cordova plugin to load the Keycloak login/registration pages (this is used automatically when the library is working in a Cordova ecosystem)
  - "cordova-native" - the library tries to open the login and registration page using the phone's system browser via the BrowserTabs Cordova plugin. This requires extra setup for redirecting back to the app (see Section 2.4.6, "Hybrid Apps with Cordova").
  - "custom" - allows you to implement a custom adapter (only for advanced use cases)
- responseType - The response type sent to Red Hat build of Keycloak with login requests. This is determined based on the flow value used during initialization, but can be overridden by setting this value.

2.4.9.3. Methods

init(options)

Called to initialize the adapter. Options is an Object, where:

- useNonce - Adds a cryptographic nonce to verify that the authentication response matches the request (default is true).
- onLoad - Specifies an action to do on load. Supported values are login-required or check-sso.
- silentCheckSsoRedirectUri - Set the redirect URI for the silent authentication check if onLoad is set to 'check-sso'.
- silentCheckSsoFallback - Enables fallback to regular check-sso when silent check-sso is not supported by the browser (default is true).
- token - Set an initial value for the token.
- refreshToken - Set an initial value for the refresh token.
- idToken - Set an initial value for the ID token (only together with token or refreshToken).
- scope - Set the default scope parameter to the Red Hat build of Keycloak login endpoint. Use a space-delimited list of scopes. Those typically reference Client scopes defined on a particular client. Note that the scope openid will always be added to the list of scopes by the adapter. For example, if you enter the scope options address phone, then the request to Red Hat build of Keycloak will contain the scope parameter scope=openid address phone. Note that the default scope specified here is overwritten if the login() options specify scope explicitly.
- timeSkew - Set an initial value for the skew between local time and the Red Hat build of Keycloak server, in seconds (only together with token or refreshToken).
- checkLoginIframe - Set to enable/disable monitoring login state (default is true).
- checkLoginIframeInterval - Set the interval to check login state (default is 5 seconds).
- responseMode - Set the OpenID Connect response mode sent to the Red Hat build of Keycloak server at login request. Valid values are query or fragment. The default value is fragment, which means that after successful authentication, Red Hat build of Keycloak will redirect to the JavaScript application with the OpenID Connect parameters added in the URL fragment. This is generally safer and recommended over query.
- flow - Set the OpenID Connect flow. Valid values are standard, implicit or hybrid.
- enableLogging - Enables logging messages from Keycloak to the console (default is false).
- pkceMethod - The method for Proof Key for Code Exchange (PKCE) to use. Configuring this value enables the PKCE mechanism. Available options: "S256" - the SHA256-based PKCE method (default); false - PKCE is disabled.
- acrValues - Generates the acr_values parameter, which refers to the authentication context class reference and allows clients to declare the required assurance level requirements, e.g. authentication mechanisms. See Section 4. acr_values request values and level of assurance in OpenID Connect MODRNA Authentication Profile 1.0.
- messageReceiveTimeout - Set a timeout in milliseconds for waiting for message responses from the Keycloak server. This is used, for example, when waiting for a message during the 3rd-party cookies check. The default value is 10000.
- locale - When onLoad is 'login-required', sets the 'ui_locales' query param in compliance with section 3.1.2.1 of the OIDC 1.0 specification.

Returns a promise that resolves when initialization completes.
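For illustration, several of these options can be combined in a single init call. A sketch using only the options documented above; the values are illustrative:

await keycloak.init({
    onLoad: 'check-sso',
    silentCheckSsoRedirectUri: `${location.origin}/silent-check-sso.html`,
    pkceMethod: 'S256',
    enableLogging: true
});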
login(options)

Redirects to the login form. Options is an optional Object, where:

- redirectUri - Specifies the URI to redirect to after login.
- prompt - This parameter allows you to slightly customize the login flow on the Red Hat build of Keycloak server side. For example, enforce displaying the login screen in case of the value login.
- maxAge - Used only if the user is already authenticated. Specifies the maximum time since the authentication of the user happened. If the user has already been authenticated for longer than maxAge, the SSO is ignored and the user will need to re-authenticate.
- loginHint - Used to pre-fill the username/email field on the login form.
- scope - Override the scope configured in init with a different value for this specific login.
- idpHint - Used to tell Red Hat build of Keycloak to skip showing the login page and automatically redirect to the specified identity provider instead. More info in the Identity Provider documentation.
- acr - Contains the information about the acr claim, which will be sent inside the claims parameter to the Red Hat build of Keycloak server. Typical usage is for step-up authentication. Example of use: { values: ["silver", "gold"], essential: true }. See the OpenID Connect specification and the Step-up authentication documentation for more details.
- acrValues - Generates the acr_values parameter, which refers to the authentication context class reference and allows clients to declare the required assurance level requirements, e.g. authentication mechanisms. See Section 4. acr_values request values and level of assurance in OpenID Connect MODRNA Authentication Profile 1.0.
- action - If the value is register, the user is redirected to the registration page. See the Registration requested by client section for more details. If the value is UPDATE_PASSWORD or another supported required action, the user will be redirected to the reset password page or the other required action page. However, if the user is not authenticated, the user will be sent to the login page and redirected after authentication. See the Application Initiated Action section for more details.
- locale - Sets the 'ui_locales' query param in compliance with section 3.1.2.1 of the OIDC 1.0 specification.
- cordovaOptions - Specifies the arguments that are passed to the Cordova in-app browser (if applicable). The options hidden and location are not affected by these arguments. All available options are defined at https://cordova.apache.org/docs/en/latest/reference/cordova-plugin-inappbrowser/. Example of use: { zoom: "no", hardwareback: "yes" }

createLoginUrl(options)

Returns the URL to the login form. Options is an optional Object, which supports the same options as the login function.

logout(options)

Redirects to logout. Options is an Object, where:

- redirectUri - Specifies the URI to redirect to after logout.

createLogoutUrl(options)

Returns the URL to log out the user. Options is an Object, where:

- redirectUri - Specifies the URI to redirect to after logout.

register(options)

Redirects to the registration form. This is a shortcut for login with the option action = 'register'. Options are the same as for the login method, but 'action' is set to 'register'.

createRegisterUrl(options)

Returns the URL to the registration page. This is a shortcut for createLoginUrl with the option action = 'register'. Options are the same as for the createLoginUrl method, but 'action' is set to 'register'.

accountManagement()

Redirects to the Account Console.

createAccountUrl(options)

Returns the URL to the Account Console. Options is an Object, where:

- redirectUri - Specifies the URI to redirect to when redirecting back to the application.

hasRealmRole(role)

Returns true if the token has the given realm role.

hasResourceRole(role, resource)

Returns true if the token has the given role for the resource (resource is optional; if not specified, clientId is used).

loadUserProfile()

Loads the user's profile. Returns a promise that resolves with the profile. For example:

try {
    const profile = await keycloak.loadUserProfile();
    console.log('Retrieved user profile:', profile);
} catch (error) {
    console.error('Failed to load user profile:', error);
}

isTokenExpired(minValidity)

Returns true if the token has less than minValidity seconds left before it expires (minValidity is optional; if not specified, 0 is used).

updateToken(minValidity)

If the token expires within minValidity seconds (minValidity is optional; if not specified, 5 is used), the token is refreshed. If -1 is passed as the minValidity, the token will be forcibly refreshed.
If the session status iframe is enabled, the session status is also checked. Returns a promise that resolves with a boolean indicating whether or not the token has been refreshed. For example:

try {
    const refreshed = await keycloak.updateToken(5);
    console.log(refreshed ? 'Token was refreshed' : 'Token is still valid');
} catch (error) {
    console.error('Failed to refresh the token:', error);
}

clearToken()

Clears the authentication state, including tokens. This can be useful if the application has detected that the session has expired, for example when updating the token fails. Invoking this method results in the onAuthLogout callback listener being invoked.

2.4.9.4. Callback Events

The adapter supports setting callback listeners for certain events. Keep in mind that these have to be set before the call to the init() method. For example:

keycloak.onAuthSuccess = () => console.log('Authenticated!');

The available events are:

- onReady(authenticated) - Called when the adapter is initialized.
- onAuthSuccess - Called when a user is successfully authenticated.
- onAuthError - Called if there was an error during authentication.
- onAuthRefreshSuccess - Called when the token is refreshed.
- onAuthRefreshError - Called if there was an error while trying to refresh the token.
- onAuthLogout - Called if the user is logged out (will only be called if the session status iframe is enabled, or in Cordova mode).
- onTokenExpired - Called when the access token has expired. If a refresh token is available, the token can be refreshed with updateToken; in cases where it is not (that is, with the implicit flow), you can redirect to the login screen to obtain a new access token.

2.5. Red Hat build of Keycloak Node.js adapter

Red Hat build of Keycloak provides a Node.js adapter built on top of Connect to protect server-side JavaScript apps. The goal was to be flexible enough to integrate with frameworks like Express.js. To use the Node.js adapter, first you must create a client for your application in the Red Hat build of Keycloak Admin Console. The adapter supports the public, confidential, and bearer-only access types. Which one to choose depends on the use-case scenario. Once the client is created, click Action at the top right and choose Download adapter config. For Format, choose Keycloak OIDC JSON and click Download. The downloaded keycloak.json file should be placed at the root folder of your project.

2.5.1. Installation

Assuming you have already installed Node.js, create a folder for your application and use the npm init command to create a package.json for your application. Now add the Red Hat build of Keycloak connect adapter in the dependencies list:

"dependencies": {
    "keycloak-connect": "file:keycloak-connect-24.0.10.tgz"
}

2.5.2. Usage

Instantiate a Keycloak class

The Keycloak class provides a central point for configuration and integration with your application. The simplest creation involves no arguments.
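For example, the simplest form passes an empty configuration object, so that the keycloak.json defaults described below are used (a minimal sketch):

const Keycloak = require('keycloak-connect');
const keycloak = new Keycloak({});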
In the root directory of your project, create a file called server.js and add the following code:

const session = require('express-session');
const Keycloak = require('keycloak-connect');

const memoryStore = new session.MemoryStore();
const keycloak = new Keycloak({ store: memoryStore });

Install the express-session dependency:

npm install express-session

To start the server.js script, add the following command in the 'scripts' section of the package.json:

"scripts": {
    "start": "node server.js"
}

Now we have the ability to run our server with the following command:

npm run start

By default, this will locate a file named keycloak.json alongside the main executable of your application, in our case in the root folder, to initialize Red Hat build of Keycloak-specific settings such as the public key, realm name, and various URLs. In that case, a Red Hat build of Keycloak deployment is necessary to access the Red Hat build of Keycloak Admin Console. Please visit the links on how to deploy a Red Hat build of Keycloak Admin Console with Podman or Docker.

We are now ready to obtain the keycloak.json file: visit the Red Hat build of Keycloak Admin Console, choose Clients (left sidebar), choose your client, then Installation, select Format Option: Keycloak OIDC JSON, and click Download. Place the downloaded file in the root folder of your project.

Instantiation with this method results in all the reasonable defaults being used. As an alternative, it is also possible to provide a configuration object, rather than the keycloak.json file:

const kcConfig = {
    clientId: 'myclient',
    bearerOnly: true,
    serverUrl: 'http://localhost:8080',
    realm: 'myrealm',
    realmPublicKey: 'MIIBIjANB...'
};

const keycloak = new Keycloak({ store: memoryStore }, kcConfig);

Applications can also redirect users to their preferred identity provider by using:

const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig);

Configuring a web session store

If you want to use web sessions to manage server-side state for authentication, you need to initialize Keycloak(...) with at least a store parameter, passing in the actual session store that express-session is using.

const session = require('express-session');
const memoryStore = new session.MemoryStore();

// Configure session
app.use(
    session({
        secret: 'mySecret',
        resave: false,
        saveUninitialized: true,
        store: memoryStore,
    })
);

const keycloak = new Keycloak({ store: memoryStore });

Passing a custom scope value

By default, the scope value openid is passed as a query parameter to Red Hat build of Keycloak's login URL, but you can add an additional custom value:

const keycloak = new Keycloak({ scope: 'offline_access' });

2.5.3. Installing middleware

Once instantiated, install the middleware into your connect-capable app. In order to do so, first install Express:

npm install express

then require Express in our project as outlined below:

const express = require('express');
const app = express();

and configure the Keycloak middleware in Express by adding the code below:

app.use( keycloak.middleware() );

Last but not least, let's set up our server to listen for HTTP requests on port 3000 by adding the following code to server.js:

app.listen(3000, function () {
    console.log('App listening on port 3000');
});

2.5.4. Configuration for proxies

If the application is running behind a proxy that terminates an SSL connection, Express must be configured per the express behind proxies guide. Using an incorrect proxy configuration can result in invalid redirect URIs being generated. Example configuration:

const app = express();

app.set( 'trust proxy', true );

app.use( keycloak.middleware() );
2.5.5. Protecting resources

Simple authentication

To enforce that a user must be authenticated before accessing a resource, simply use a no-argument version of keycloak.protect():

app.get( '/complain', keycloak.protect(), complaintHandler );

Role-based authorization

To secure a resource with an application role for the current app:

app.get( '/special', keycloak.protect('special'), specialHandler );

To secure a resource with an application role for a different app:

app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler );

To secure a resource with a realm role:

app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler );

Resource-Based Authorization

Resource-Based Authorization allows you to protect resources, and their specific methods/actions, based on a set of policies defined in Keycloak, thus externalizing authorization from your application. This is achieved by exposing a keycloak.enforcer method which you can use to protect resources:

app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler);

The keycloak.enforcer method operates in two modes, depending on the value of the response_mode configuration option:

app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler);

If response_mode is set to token, permissions are obtained from the server on behalf of the subject represented by the bearer token that was sent to your application. In this case, a new access token is issued by Keycloak with the permissions granted by the server. If the server did not respond with a token with the expected permissions, the request is denied. When using this mode, you should be able to obtain the token from the request as follows:

app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) {
    const token = req.kauth.grant.access_token.content;
    const permissions = token.authorization ? token.authorization.permissions : undefined;

    // show user profile
});

Prefer this mode when your application is using sessions and you want to cache decisions from the server, as well as automatically handle refresh tokens. This mode is especially useful for applications acting as both a client and a resource server.

If response_mode is set to permissions (the default mode), the server only returns the list of granted permissions, without issuing a new access token. In addition to not issuing a new token, this mode exposes the permissions granted by the server through the request as follows:

app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) {
    const permissions = req.permissions;

    // show user profile
});

Regardless of the response_mode in use, the keycloak.enforcer method will first try to check the permissions within the bearer token that was sent to your application. If the bearer token already carries the expected permissions, there is no need to interact with the server to obtain a decision. This is especially useful when your clients are capable of obtaining access tokens from the server with the expected permissions before accessing a protected resource, so they can use some capabilities provided by Keycloak Authorization Services, such as incremental authorization, and avoid additional requests to the server when keycloak.enforcer is enforcing access to the resource.
By default, the policy enforcer will use the client_id defined for the application (for instance, via keycloak.json) to reference a client in Keycloak that supports Keycloak Authorization Services. In this case, the client cannot be public, given that it is actually a resource server. If your application is acting as both a public client (frontend) and a resource server (backend), you can use the following configuration to reference a different client in Keycloak with the policies that you want to enforce:

keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'})

It is recommended to use distinct clients in Keycloak to represent your frontend and backend.

If the application you are protecting is enabled with Keycloak authorization services and you have defined client credentials in keycloak.json, you can push additional claims to the server and make them available to your policies in order to make decisions. For that, you can define a claims configuration option which expects a function that returns a JSON object with the claims you want to push:

app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], {
    claims: function(request) {
        return {
            "http.uri": ["/protected/resource"],
            "user.agent": [request.headers['user-agent']] // get the user agent from the request
        };
    }
}), function (req, res) {
    // access granted
});

For more details about how to configure Keycloak to protect your application resources, please take a look at the Authorization Services Guide.

Advanced authorization

To secure resources based on parts of the URL itself, assuming a role exists for each section:

function protectBySection(token, request) {
    return token.hasRole( request.params.section );
}

app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler );

Advanced Login Configuration

By default, all unauthorized requests will be redirected to the Red Hat build of Keycloak login page unless your client is bearer-only. However, a confidential or public client may host both browsable and API endpoints. To prevent redirects on unauthenticated API requests and instead return an HTTP 401, you can override the redirectToLogin function. For example, this override checks whether the URL contains /api/ and disables login redirects for those requests:

Keycloak.prototype.redirectToLogin = function(req) {
    const apiReqMatcher = /\/api\//i;
    return !apiReqMatcher.test(req.originalUrl || req.url);
};

2.5.6. Additional URLs

Explicit user-triggered logout

By default, the middleware catches calls to /logout to send the user through a Red Hat build of Keycloak-centric logout workflow. This can be changed by specifying a logout configuration parameter to the middleware() call:

app.use( keycloak.middleware( { logout: '/logoff' } ));

When the user-triggered logout is invoked, a query parameter redirect_url can be passed, for example:

https://example.com/logoff?redirect_url=https%3A%2F%2Fexample.com%2Flogged%2Fout

This parameter is then used as the redirect URL of the OIDC logout endpoint, and the user will be redirected to https://example.com/logged/out.

Red Hat build of Keycloak Admin Callbacks

The middleware also supports callbacks from the Red Hat build of Keycloak console to log out a single session or all sessions. By default, these admin callbacks occur relative to the root URL of / but can be changed by providing an admin parameter to the middleware() call:

app.use( keycloak.middleware( { admin: '/callbacks' } ) );

2.5.7. Complete example

A complete example of the Node.js adapter usage can be found in the Keycloak quickstarts for Node.js.
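For orientation, the snippets from the previous sections can be combined into a single minimal server.js. This is only a sketch, not the quickstart itself; the routes, port, and session secret are illustrative:

const express = require('express');
const session = require('express-session');
const Keycloak = require('keycloak-connect');

const app = express();
const memoryStore = new session.MemoryStore();

// The session middleware must be registered before the Keycloak middleware.
app.use(
    session({
        secret: 'mySecret',
        resave: false,
        saveUninitialized: true,
        store: memoryStore,
    })
);

// Reads keycloak.json from the project root by default.
const keycloak = new Keycloak({ store: memoryStore });
app.use(keycloak.middleware());

// Any authenticated user can reach this route.
app.get('/profile', keycloak.protect(), function (req, res) {
    res.json({ message: 'Hello, authenticated user!' });
});

// Only users with the realm role 'admin' can reach this route.
app.get('/admin', keycloak.protect('realm:admin'), function (req, res) {
    res.json({ message: 'Hello, admin!' });
});

app.listen(3000, function () {
    console.log('App listening on port 3000');
});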
2.6. Financial-grade API (FAPI) Support

Red Hat build of Keycloak makes it easier for administrators to make sure that their clients are compliant with these specifications:

- Financial-grade API Security Profile 1.0 - Part 1: Baseline
- Financial-grade API Security Profile 1.0 - Part 2: Advanced
- Financial-grade API: Client Initiated Backchannel Authentication Profile (FAPI CIBA)
- FAPI 2.0 Security Profile (Draft)
- FAPI 2.0 Message Signing (Draft)

This compliance means that the Red Hat build of Keycloak server will verify the requirements for the authorization server that are mentioned in the specifications. Red Hat build of Keycloak adapters do not have any specific support for FAPI, so the required validations on the client (application) side may still need to be done manually or through some other third-party solutions.

2.6.1. FAPI client profiles

To make sure that your clients are FAPI compliant, you can configure Client Policies in your realm as described in the Server Administration Guide and link them to the global client profiles for FAPI support, which are automatically available in each realm. You can use either the fapi-1-baseline or the fapi-1-advanced profile, based on which FAPI profile you need your clients to conform with. You can also use the profiles fapi-2-security-profile or fapi-2-message-signing for compliance with the FAPI 2 Draft specifications.

If you want to use Pushed Authorization Requests (PAR), it is recommended that your client use both the fapi-1-baseline and fapi-1-advanced profiles for PAR requests. Specifically, the fapi-1-baseline profile contains the pkce-enforcer executor, which makes sure that clients use PKCE with the secured S256 algorithm. This is not required for FAPI Advanced clients unless they use PAR requests.

If you want to use CIBA in a FAPI-compliant way, make sure that your clients use both the fapi-1-advanced and fapi-ciba client profiles. The fapi-1-advanced profile, or another client profile containing the requested executors, is needed because the fapi-ciba profile contains just the CIBA-specific executors, while the FAPI CIBA specification enforces additional requirements, such as confidential clients or certificate-bound access tokens.

2.6.2. Open Finance Brasil Financial-grade API Security Profile

Red Hat build of Keycloak is compliant with the Open Finance Brasil Financial-grade API Security Profile 1.0 Implementers Draft 3. This profile is stricter in some requirements than the FAPI 1 Advanced specification, so it may be necessary to configure Client Policies more strictly to enforce some of the requirements. In particular:

- If your client does not use PAR, make sure that it uses encrypted OIDC request objects. This can be achieved by using a client profile with the secure-request-object executor configured with Encryption Required enabled.
- Make sure that for JWS, the client uses the PS256 algorithm. For JWE, the client should use RSA-OAEP with A256GCM. This may need to be set in all the Client Settings where these algorithms are applicable.

2.6.3. Australia Consumer Data Right (CDR) Security Profile

Red Hat build of Keycloak is compliant with the Australia Consumer Data Right Security Profile. If you want to apply the Australia CDR security profile, you need to use the fapi-1-advanced profile, because the Australia CDR security profile is based on the FAPI 1.0 Advanced security profile.
If your client also applies PAR, make sure that the client applies RFC 7636 Proof Key for Code Exchange (PKCE), because the Australia CDR security profile requires that you apply PKCE when applying PAR. This can be achieved by using a client profile with the pkce-enforcer executor.

2.6.4. TLS considerations

As confidential information is being exchanged, all interactions shall be encrypted with TLS (HTTPS). Moreover, there are some requirements in the FAPI specification for the cipher suites and TLS protocol versions used. To match these requirements, you can consider configuring the allowed ciphers. This configuration can be done by setting the https-protocols and https-cipher-suites options. Red Hat build of Keycloak uses TLSv1.3 by default, so you may not need to change the default settings. However, you may need to adjust the ciphers if you need to fall back to a lower TLS version for some reason. For more details, see the Configuring TLS chapter.

2.7. OAuth 2.1 Support

Red Hat build of Keycloak makes it easier for administrators to make sure that their clients are compliant with these specifications:

- The OAuth 2.1 Authorization Framework - draft specification

This compliance means that the Red Hat build of Keycloak server will verify the requirements for the authorization server that are mentioned in the specifications. Red Hat build of Keycloak adapters do not have any specific support for OAuth 2.1, so the required validations on the client (application) side may still need to be done manually or through some other third-party solutions.

2.7.1. OAuth 2.1 client profiles

To make sure that your clients are OAuth 2.1 compliant, you can configure Client Policies in your realm as described in the Server Administration Guide and link them to the global client profiles for OAuth 2.1 support, which are automatically available in each realm. You can use either the oauth-2-1-for-confidential-client profile for confidential clients or the oauth-2-1-for-public-client profile for public clients.

Note: The OAuth 2.1 specification is still a draft and it may change in the future. Hence the Red Hat build of Keycloak built-in OAuth 2.1 client profiles can change as well.

Note: When using the OAuth 2.1 profile for public clients, it is recommended to use the DPoP preview feature as described in the Server Administration Guide, because DPoP binds an access token and a refresh token together with the public part of a client's key pair. This binding prevents an attacker from using stolen tokens.

2.8. Recommendations

This section describes some recommendations for securing your applications with Red Hat build of Keycloak.

2.8.1. Validating access tokens

If you need to manually validate access tokens issued by Red Hat build of Keycloak, you can invoke the Introspection Endpoint. The downside to this approach is that you have to make a network invocation to the Red Hat build of Keycloak server. This can be slow and can possibly overload the server if you have too many validation requests going on at the same time. Red Hat build of Keycloak issued access tokens are JSON Web Tokens (JWT), digitally signed and encoded using JSON Web Signature (JWS). Because they are encoded in this way, you can locally validate access tokens using the public key of the issuing realm. You can either hard-code the realm's public key in your validation code, or look up and cache the public key using the certificate endpoint with the Key ID (KID) embedded within the JWS.
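For example, in JavaScript this local validation can be done with the third-party jose library (not part of Red Hat build of Keycloak). A minimal sketch, assuming the master realm on localhost:

import { createRemoteJWKSet, jwtVerify } from 'jose';

// Fetch and cache the realm's public keys from the certificate endpoint.
const jwks = createRemoteJWKSet(
    new URL('http://localhost:8080/realms/master/protocol/openid-connect/certs')
);

async function validateAccessToken(accessToken) {
    // Verifies the JWS signature and standard claims locally,
    // without a network call to the introspection endpoint.
    const { payload } = await jwtVerify(accessToken, jwks, {
        issuer: 'http://localhost:8080/realms/master'
    });
    return payload;
}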
Depending on what language you code in, many third-party libraries exist that can help you with JWS validation. 2.8.2. Redirect URIs When using the redirect-based flows, be sure to use valid redirect URIs for your clients. The redirect URIs should be as specific as possible. This especially applies to client-side (public client) applications. Failing to do so could result in: Open redirects - this can allow attackers to create spoof links that look like they are coming from your domain Unauthorized entry - when users are already authenticated with Red Hat build of Keycloak, an attacker can use a public client where redirect URIs have not been configured correctly to gain access by redirecting the user without the user's knowledge In production, always use HTTPS for all redirect URIs for web applications. Do not allow redirects to HTTP. A sketch of pinning a client to exact redirect URIs with the Admin CLI follows the command listing at the end of this chapter. A few special redirect URIs also exist: http://127.0.0.1 This redirect URI is useful for native applications and allows the native application to create a web server on a random port that can be used to obtain the authorization code. This redirect URI allows any port. Note that per OAuth 2.0 for Native Apps , the use of localhost is not recommended and the IP literal 127.0.0.1 should be used instead. urn:ietf:wg:oauth:2.0:oob If you cannot start a web server in the client (or a browser is not available), you can use the special urn:ietf:wg:oauth:2.0:oob redirect URI. When this redirect URI is used, Red Hat build of Keycloak displays a page with the code in the title and in a box on the page. The application can either detect that the browser title has changed, or the user can copy and paste the code manually to the application. With this redirect URI, a user can use a different device to obtain a code to paste back to the application. | [
"/realms/{realm-name}/.well-known/openid-configuration",
"/realms/{realm-name}/protocol/openid-connect/auth",
"/realms/{realm-name}/protocol/openid-connect/token",
"/realms/{realm-name}/protocol/openid-connect/userinfo",
"/realms/{realm-name}/protocol/openid-connect/logout",
"/realms/{realm-name}/protocol/openid-connect/certs",
"/realms/{realm-name}/protocol/openid-connect/token/introspect",
"/realms/{realm-name}/clients-registrations/openid-connect",
"/realms/{realm-name}/protocol/openid-connect/revoke",
"/realms/{realm-name}/protocol/openid-connect/auth/device",
"/realms/{realm-name}/protocol/openid-connect/ext/ciba/auth",
"curl -d \"client_id=myclient\" -d \"client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578\" -d \"username=user\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"",
"npm install keycloak-js",
"import Keycloak from 'keycloak-js'; const keycloak = new Keycloak({ url: 'http://keycloak-serverUSD{kc_base_path}', realm: 'myrealm', clientId: 'myapp' }); try { const authenticated = await keycloak.init(); console.log(`User is USD{authenticated ? 'authenticated' : 'not authenticated'}`); } catch (error) { console.error('Failed to initialize adapter:', error); }",
"keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: `USD{location.origin}/silent-check-sso.html` });",
"<!doctype html> <html> <body> <script> parent.postMessage(location.href, location.origin); </script> </body> </html>",
"keycloak.init({ onLoad: 'login-required' });",
"async function fetchUsers() { const response = await fetch('/api/users', { headers: { accept: 'application/json', authorization: `Bearer USD{keycloak.token}` } }); return response.json(); }",
"try { await keycloak.updateToken(30); } catch (error) { console.error('Failed to refresh token:', error); } const users = await fetchUsers();",
"keycloak.init({ flow: 'implicit' })",
"keycloak.init({ flow: 'hybrid' });",
"keycloak.init({ adapter: 'cordova-native' });",
"<preference name=\"AndroidLaunchMode\" value=\"singleTask\" />",
"import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak(); keycloak.init({ adapter: KeycloakCapacitorAdapter, });",
"import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present. const MyCustomAdapter: KeycloakAdapter = { login(options) { // Write your own implementation here. } // The other methods go here }; const keycloak = new Keycloak(); keycloak.init({ adapter: MyCustomAdapter, });",
"new Keycloak(); new Keycloak('http://localhost/keycloak.json'); new Keycloak({ url: 'http://localhost', realm: 'myrealm', clientId: 'myApp' });",
"try { const profile = await keycloak.loadUserProfile(); console.log('Retrieved user profile:', profile); } catch (error) { console.error('Failed to load user profile:', error); }",
"try { const refreshed = await keycloak.updateToken(5); console.log(refreshed ? 'Token was refreshed' : 'Token is still valid'); } catch (error) { console.error('Failed to refresh the token:', error); }",
"keycloak.onAuthSuccess = () => console.log('Authenticated!');",
"mkdir myapp && cd myapp",
"\"dependencies\": { \"keycloak-connect\": \"file:keycloak-connect-24.0.10.tgz\" }",
"const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore });",
"npm install express-session",
"\"scripts\": { \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\", \"start\": \"node server.js\" },",
"npm run start",
"const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig);",
"const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig);",
"const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore });",
"const keycloak = new Keycloak({ scope: 'offline_access' });",
"npm install express",
"const express = require('express'); const app = express();",
"app.use( keycloak.middleware() );",
"app.listen(3000, function () { console.log('App listening on port 3000'); });",
"const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() );",
"app.get( '/complain', keycloak.protect(), complaintHandler );",
"app.get( '/special', keycloak.protect('special'), specialHandler );",
"app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler );",
"app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler );",
"app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler);",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler);",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const token = req.kauth.grant.access_token.content; const permissions = token.authorization ? token.authorization.permissions : undefined; // show user profile });",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile });",
"keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'})",
"app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { \"http.uri\": [\"/protected/resource\"], \"user.agent\": // get user agent from request } } }), function (req, res) { // access granted",
"function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler );",
"Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\\/api\\//i; return !apiReqMatcher.test(req.originalUrl || req.url); };",
"app.use( keycloak.middleware( { logout: '/logoff' } ));",
"https://example.com/logoff?redirect_url=https%3A%2F%2Fexample.com%3A3000%2Flogged%2Fout",
"app.use( keycloak.middleware( { admin: '/callbacks' } );"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/securing_applications_and_services_guide/oidc |
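Following up on the redirect URI guidance in this chapter, the sketch below restricts a client to a single exact HTTPS redirect URI using the Admin CLI. The realm myrealm, the client ID myapp, and the callback URL are assumptions for this example, and jq is assumed to be installed:
# Look up the client's internal id, then pin its redirect URIs to one exact HTTPS URL
CLIENT_UUID=$(./kcadm.sh get clients -r myrealm -q clientId=myapp --fields id | jq -r '.[0].id')
./kcadm.sh update clients/$CLIENT_UUID -r myrealm -s 'redirectUris=["https://app.example.com/callback"]'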
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_application_and_vnf_workflow_guide/con-conscious-language-message |
3.3. Additional Resources | 3.3. Additional Resources For more information about security updates, ways of applying them, the Red Hat Customer Portal, and related topics, see the resources listed below. Installed Documentation yum (8) - The manual page for the Yum package manager provides information about the way Yum can be used to install, update, and remove packages on your systems. rpmkeys (8) - The manual page for the rpmkeys utility describes the way this program can be used to verify the authenticity of downloaded packages. Online Documentation Red Hat Enterprise Linux 7 System Administrator's Guide - The System Administrator's Guide for Red Hat Enterprise Linux 7 documents the use of the Yum and rpm commands that are used to install, update, and remove packages on Red Hat Enterprise Linux 7 systems. Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 documents the configuration of the SELinux mandatory access control mechanism. Red Hat Customer Portal Red Hat Customer Portal, Security - The Security section of the Customer Portal contains links to the most important resources, including the Red Hat CVE database, and contacts for Red Hat Product Security. Red Hat Security Blog - Articles about latest security-related issues from Red Hat security professionals. See Also Chapter 2, Security Tips for Installation describes how to configure your system securely from the beginning to make it easier to implement additional security settings later. Section 4.9.2, "Creating GPG Keys" describes how to create a set of personal GPG keys to authenticate your communications. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-keeping_your_system_up-to-date-additional_resources |
4.2. Prioritizing Network Traffic When running multiple network-related services on a single server system, it is important to define network priorities among these services. Defining the priorities ensures that packets originating from certain services have a higher priority than packets originating from other services. For example, such priorities are useful when a server system simultaneously functions as an NFS and Samba server. The NFS traffic has to be of high priority as users expect high throughput. The Samba traffic can be deprioritized to allow better performance of the NFS server. The net_prio controller can be used to set network priorities for processes in cgroups. These priorities are then translated into Type of Service (ToS) field bits and embedded into every packet. Follow the steps in Procedure 4.2, "Setting Network Priorities for File Sharing Services" to configure prioritization of two file sharing services (NFS and Samba). Procedure 4.2. Setting Network Priorities for File Sharing Services Attach the net_prio subsystem to the /sys/fs/cgroup/net_prio cgroup: Create two cgroups, one for each service: To automatically move the nfs services to the nfs_high cgroup, add the following line to the /etc/sysconfig/nfs file: This configuration ensures that nfs service processes are moved to the nfs_high cgroup when the nfs service is started or restarted. The smbd service does not have a configuration file in the /etc/sysconfig directory. To automatically move the smbd service to the samba_low cgroup, add the following line to the /etc/cgrules.conf file: Note that this rule moves every smbd service, not only /usr/sbin/smbd , into the samba_low cgroup. You can define rules for the nmbd and winbindd services to be moved to the samba_low cgroup in a similar way. Start the cgred service to load the configuration from the previous step: For the purposes of this example, let us assume both services use the eth1 network interface. Define network priorities for each cgroup, where 1 denotes low priority and 10 denotes high priority: Start the nfs and smb services and check whether their processes have been moved into the correct cgroups: Network traffic originating from NFS now has higher priority than traffic originating from Samba. You can verify the configured priorities by reading back the net_prio.ifpriomap files, as shown in the example after the command listing. Similar to Procedure 4.2, "Setting Network Priorities for File Sharing Services" , the net_prio subsystem can be used to set network priorities for client applications, for example, Firefox. | [
"~]# mkdir sys/fs/cgroup/net_prio ~]# mount -t cgroup -o net_prio none sys/fs/cgroup/net_prio",
"~]# mkdir sys/fs/cgroup/net_prio/nfs_high ~]# mkdir sys/fs/cgroup/net_prio/samba_low",
"CGROUP_DAEMON=\"net_prio:nfs_high\"",
"*:smbd net_prio samba_low",
"~]# systemctl start cgred Starting CGroup Rules Engine Daemon: [ OK ]",
"~]# echo \"eth1 1\" > /sys/fs/cgroup/net_prio/samba_low/net_prio.ifpriomap ~]# echo \"eth1 10\" > /sys/fs/cgroup/net_prio/nfs_high/net_prio.ifpriomap",
"~]# systemctl start smb Starting SMB services: [ OK ] ~]# cat /sys/fs/cgroup/net_prio/samba_low/tasks 16122 16124 ~]# systemctl start nfs Starting NFS services: [ OK ] Starting NFS quotas: [ OK ] Starting NFS mountd: [ OK ] Stopping RPC idmapd: [ OK ] Starting RPC idmapd: [ OK ] Starting NFS daemon: [ OK ] ~]# cat sys/fs/cgroup/net_prio/nfs_high/tasks 16321 16325 16376"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/sec-prioritizing_network_traffic |
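To verify the priorities configured in the procedure above, you can read back the net_prio.ifpriomap file of each cgroup. The output below is a sketch abridged to the relevant interface; a real system also lists other interfaces such as lo:
~]# cat /sys/fs/cgroup/net_prio/nfs_high/net_prio.ifpriomap
eth1 10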
Chapter 5. Debugging Serverless applications You can use a variety of methods to troubleshoot a Serverless application. 5.1. Checking terminal output You can check your deploy command output to see whether deployment succeeded or not. If your deployment process was terminated, you should see an error message in the output that describes the reason why the deployment failed. This kind of failure is most likely due to either a misconfigured manifest or an invalid command. Procedure Open the command output on the client where you deploy and manage your application. The following example is an error that you might see after a failed oc apply command: Error from server (InternalError): error when applying patch: {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"serving.knative.dev/v1\",\"kind\":\"Route\",\"metadata\":{\"annotations\":{},\"name\":\"route-example\",\"namespace\":\"default\"},\"spec\":{\"traffic\":[{\"configurationName\":\"configuration-example\",\"percent\":50}]}}\n"}},"spec":{"traffic":[{"configurationName":"configuration-example","percent":50}]}} to: &{0xc421d98240 0xc421e77490 default route-example STDIN 0xc421db0488 264682 false} for: "STDIN": Internal error occurred: admission webhook "webhook.knative.dev" denied the request: mutation failed: The route must have traffic percent sum equal to 100. ERROR: Non-zero return code '1' from command: Process exited with status 1 This output indicates that you must configure the route traffic percent to be equal to 100. 5.2. Checking pod status You might need to check the status of your Pod object to identify the issue with your Serverless application. Procedure List all pods for your deployment by running the following command: $ oc get pods Example output NAME READY STATUS RESTARTS AGE configuration-example-00001-deployment-659747ff99-9bvr4 2/2 Running 0 3h configuration-example-00002-deployment-5f475b7849-gxcht 1/2 CrashLoopBackOff 2 36s In the output, you can see all pods with selected data about their status. View the detailed information on the status of a pod by running the following command: $ oc get pod <pod_name> --output yaml In the output, the conditions and containerStatuses fields might be particularly useful for debugging. 5.3. Checking revision status You might need to check the status of your revision to identify the issue with your Serverless application. Procedure If you configure your route with a Configuration object, get the name of the Revision object created for your deployment by running the following command: $ oc get configuration <configuration_name> --output jsonpath="{.status.latestCreatedRevisionName}" You can find the configuration name in the Route.yaml file, which specifies routing settings by defining an OpenShift Route resource. If you configure your route with revision directly, look up the revision name in the Route.yaml file. Query for the status of the revision by running the following command: $ oc get revision <revision-name> --output yaml A ready revision should have the reason: ServiceReady , status: "True" , and type: Ready conditions in its status. If these conditions are present, you might want to check pod status or Istio routing. Otherwise, the resource status contains the error message. 5.3.1. Additional resources Route configuration
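For convenience, the two revision-status commands from the previous section can be combined into a single sketch, assuming a bash shell and a known configuration name:
$ rev=$(oc get configuration <configuration_name> --output jsonpath="{.status.latestCreatedRevisionName}")
$ oc get revision "$rev" --output yaml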
5.4. Checking Ingress status You might need to check the status of your Ingress to identify the issue with your Serverless application. Procedure Check the IP address of your Ingress by running the following command: $ oc get svc -n istio-system istio-ingressgateway The istio-ingressgateway service is the LoadBalancer service used by Knative. If there is no external IP address, run the following command: $ oc describe svc istio-ingressgateway -n istio-system This command prints the reason why IP addresses were not provisioned. Most likely, it is due to a quota issue. 5.5. Checking route status In some cases, the Route object has issues. You can check its status by using the OpenShift CLI ( oc ). Procedure View the status of the Route object with which you deployed your application by running the following command: $ oc get route <route_name> --output yaml Substitute <route_name> with the name of your Route object. The conditions object in the status object states the reason in case of a failure. 5.6. Checking Ingress and Istio routing Sometimes, when Istio is used as an Ingress layer, the Ingress and Istio routing have issues. You can see details about them by using the OpenShift CLI ( oc ). Procedure List all Ingress resources and their corresponding labels by running the following command: $ oc get ingresses.networking.internal.knative.dev -o=custom-columns='NAME:.metadata.name,LABELS:.metadata.labels' Example output NAME LABELS helloworld-go map[serving.knative.dev/route:helloworld-go serving.knative.dev/routeNamespace:default serving.knative.dev/service:helloworld-go] In this output, the labels serving.knative.dev/route and serving.knative.dev/routeNamespace indicate the Route where the Ingress resource resides. Your Route and Ingress should be listed. If your Ingress does not exist, the route controller assumes that the Revision objects targeted by your Route or Service object are not ready. Proceed with other debugging procedures to diagnose Revision readiness status. If your Ingress is listed, examine the ClusterIngress object created for your route by running the following command: $ oc get ingresses.networking.internal.knative.dev <ingress_name> --output yaml In the status section of the output, if the condition with type=Ready has the status of True , then Ingress is working correctly; a one-line jsonpath check is shown in the sketch after this chapter's command listing. Otherwise, the output contains error messages. If Ingress has the status of Ready , then there is a corresponding VirtualService object. Verify the configuration of the VirtualService object by running the following command: $ oc get virtualservice -l networking.internal.knative.dev/ingress=<ingress_name> -n <ingress_namespace> --output yaml The network configuration in the VirtualService object must match that of the Ingress and Route objects. Because the VirtualService object does not expose a Status field, you might need to wait for its settings to propagate. 5.6.1. Additional resources Maistra Service Mesh documentation | [
"Error from server (InternalError): error when applying patch: {\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"serving.knative.dev/v1\\\",\\\"kind\\\":\\\"Route\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"route-example\\\",\\\"namespace\\\":\\\"default\\\"},\\\"spec\\\":{\\\"traffic\\\":[{\\\"configurationName\\\":\\\"configuration-example\\\",\\\"percent\\\":50}]}}\\n\"}},\"spec\":{\"traffic\":[{\"configurationName\":\"configuration-example\",\"percent\":50}]}} to: &{0xc421d98240 0xc421e77490 default route-example STDIN 0xc421db0488 264682 false} for: \"STDIN\": Internal error occurred: admission webhook \"webhook.knative.dev\" denied the request: mutation failed: The route must have traffic percent sum equal to 100. ERROR: Non-zero return code '1' from command: Process exited with status 1",
"oc get pods",
"NAME READY STATUS RESTARTS AGE configuration-example-00001-deployment-659747ff99-9bvr4 2/2 Running 0 3h configuration-example-00002-deployment-5f475b7849-gxcht 1/2 CrashLoopBackOff 2 36s",
"oc get pod <pod_name> --output yaml",
"oc get configuration <configuration_name> --output jsonpath=\"{.status.latestCreatedRevisionName}\"",
"oc get revision <revision-name> --output yaml",
"oc get svc -n istio-system istio-ingressgateway",
"oc describe svc istio-ingressgateway -n istio-system",
"oc get route <route_name> --output yaml",
"oc get ingresses.networking.internal.knative.dev -o=custom-columns='NAME:.metadata.name,LABELS:.metadata.labels'",
"NAME LABELS helloworld-go map[serving.knative.dev/route:helloworld-go serving.knative.dev/routeNamespace:default serving.knative.dev/service:helloworld-go]",
"oc get ingresses.networking.internal.knative.dev <ingress_name> --output yaml",
"oc get virtualservice -l networking.internal.knative.dev/ingress=<ingress_name> -n <ingress_namespace> --output yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/serving/debugging-serverless-applications |
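As a follow-up to the Ingress checks in this chapter, the Ready condition can be extracted directly with a jsonpath query instead of reading the full YAML. This is a sketch using the same resource names as above:
$ oc get ingresses.networking.internal.knative.dev <ingress_name> --output jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
A result of True indicates that the Ingress is working correctly.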
Chapter 8. Image Storage (glance) Parameters
Parameter: Description
CephClusterName: The Ceph cluster name. The default value is ceph .
GlanceApiOptVolumes: List of optional volumes to be mounted.
GlanceBackend: The short name of the backend to use. Should be one of swift , rbd , or file . The default value is swift .
GlanceBackendID: The default backend's identifier. The default value is default_backend .
GlanceCacheEnabled: Enable OpenStack Image Storage (glance) Image Cache. The default value is False .
GlanceEnabledImportMethods: List of enabled Image Import Methods. Valid values in the list are glance-direct and web-download . The default value is web-download .
GlanceIgnoreUserRoles: List of user roles to be ignored for injecting image metadata properties. The default value is admin .
GlanceImageCacheDir: Base directory that the Image Cache uses. The default value is /var/lib/glance/image-cache .
GlanceImageCacheMaxSize: The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. The default value is 10737418240 .
GlanceImageCacheStallTime: The amount of time, in seconds, to let an image remain in the cache without being accessed. The default value is 86400 .
GlanceImageConversionOutputFormat: Desired output format for the image conversion plugin. The default value is raw .
GlanceImageImportPlugins: List of enabled Image Import Plugins. Valid values in the list are image_conversion , inject_metadata , no_op .
GlanceImageMemberQuota: Maximum number of image members per image. Negative values evaluate to unlimited. The default value is 128 .
GlanceInjectMetadataProperties: Metadata properties to be injected in the image.
GlanceLogFile: The filepath of the file to use for logging messages from OpenStack Image Storage (glance).
GlanceMultistoreConfig: Dictionary of settings when configuring additional glance backends. The hash key is the backend ID, and the value is a dictionary of parameter values unique to that backend. Multiple rbd backends are allowed, but cinder, file and swift backends are limited to one each. Example:
  # Default glance store is rbd.
  GlanceBackend: rbd
  GlanceStoreDescription: 'Default rbd store'
  # GlanceMultistoreConfig specifies a second rbd backend, plus a cinder backend.
  GlanceMultistoreConfig:
    rbd2_store:
      GlanceBackend: rbd
      GlanceStoreDescription: 'Second rbd store'
      CephClusterName: ceph2
      # Override CephClientUserName if this cluster uses a different client name.
      CephClientUserName: client2
    cinder_store:
      GlanceBackend: cinder
      GlanceStoreDescription: 'OpenStack Block Storage (cinder) store'
GlanceNetappNfsEnabled: When using GlanceBackend: file , Netapp mounts the NFS share for image storage. The default value is False .
GlanceNfsEnabled: When using GlanceBackend: file , mount the NFS share for image storage. The default value is False .
GlanceNfsOptions: NFS mount options for image storage when GlanceNfsEnabled is true. The default value is _netdev,bg,intr,context=system_u:object_r:svirt_sandbox_file_t:s0 .
GlanceNfsShare: NFS share to mount for image storage when GlanceNfsEnabled is true.
GlanceNodeStagingUri: URI that specifies the staging location to use when importing images. The default value is file:///var/lib/glance/staging .
GlanceNotifierStrategy: Strategy to use for the OpenStack Image Storage (glance) notification queue. The default value is noop .
GlancePassword: The password for the image storage service and database account.
GlanceShowMultipleLocations: Whether to show multiple image locations, e.g. for copy-on-write support on RBD or Netapp backends. Potential security risk, see glance.conf for more information. The default value is False .
GlanceStagingNfsOptions: NFS mount options for NFS image import staging. The default value is _netdev,bg,intr,context=system_u:object_r:svirt_sandbox_file_t:s0 .
GlanceStagingNfsShare: NFS share to mount for image import staging.
GlanceStoreDescription: User-facing description for the OpenStack Image Storage (glance) backend. The default value is Default glance store backend. .
GlanceWorkers: Set the number of workers for the image storage service. Note that more workers create a larger number of processes on systems, which results in excess memory consumption. It is recommended to choose a suitable non-default value on systems with high CPU core counts. 0 sets to the OpenStack internal default, which is equal to the number of CPU cores on the node.
MultipathdEnable: Whether to enable the multipath daemon. The default value is False .
NetappShareLocation: Netapp share to mount for image storage (when GlanceNetappNfsEnabled is true).
NotificationDriver: Driver or drivers to handle sending notifications. The default value is messagingv2 .
| null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/overcloud_parameters/image-storage-glance-parameters |
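To illustrate how these parameters are typically consumed, the following is a hypothetical heat environment file that selects the RBD backend and enables image conversion on import. The parameter names come from the table above; the parameter_defaults structure is the usual convention for overcloud environment files, and the values are examples only:
parameter_defaults:
  GlanceBackend: rbd
  GlanceStoreDescription: 'RBD-backed image store'
  GlanceImageImportPlugins: ['image_conversion']
  GlanceImageConversionOutputFormat: raw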
Chapter 150. KafkaMirrorMaker2Spec schema reference
Used in: KafkaMirrorMaker2
Property (type): Description
version (string): The Kafka Connect version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version.
replicas (integer): The number of pods in the Kafka Connect group. Defaults to 3 .
image (string): The container image used for Kafka Connect pods. If no image name is explicitly specified, it is determined based on the spec.version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration.
connectCluster (string): The cluster alias used for Kafka Connect. The value must match the alias of the target Kafka cluster as specified in the spec.clusters configuration. The target Kafka cluster is used by the underlying Kafka Connect framework for its internal topics.
clusters (KafkaMirrorMaker2ClusterSpec array): Kafka clusters for mirroring.
mirrors (KafkaMirrorMaker2MirrorSpec array): Configuration of the MirrorMaker 2 connectors.
resources (ResourceRequirements): The maximum limits for CPU and memory resources and the requested initial resources.
livenessProbe (Probe): Pod liveness checking.
readinessProbe (Probe): Pod readiness checking.
jvmOptions (JvmOptions): JVM Options for pods.
jmxOptions (KafkaJmxOptions): JMX Options.
logging (InlineLogging, ExternalLogging): Logging configuration for Kafka Connect.
clientRackInitImage (string): The image of the init container used for initializing the client.rack .
rack (Rack): Configuration of the node label which will be used as the client.rack consumer configuration.
metricsConfig (JmxPrometheusExporterMetrics): Metrics configuration.
tracing (JaegerTracing, OpenTelemetryTracing): The configuration of tracing in Kafka Connect.
template (KafkaConnectTemplate): Template for Kafka Connect and Kafka MirrorMaker 2 resources. The template allows users to specify how the Pods , Service , and other services are generated.
externalConfiguration (ExternalConfiguration): The externalConfiguration property has been deprecated. The external configuration is deprecated and will be removed in the future. Please use the template section instead to configure additional environment variables or volumes. Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors.
| null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkamirrormaker2spec-reference |
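As an illustration of this schema, a minimal KafkaMirrorMaker2 custom resource might look like the sketch below. The apiVersion, cluster aliases, and bootstrap addresses are assumptions for this example; consult the product documentation for the exact values supported by your release:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  replicas: 3
  connectCluster: "target"
  clusters:
    - alias: "source"
      bootstrapServers: source-kafka-bootstrap:9092
    - alias: "target"
      bootstrapServers: target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      sourceConnector: {}
Note how connectCluster matches the alias of the target cluster in spec.clusters, as required by the schema description above.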
Chapter 43. Installation and Booting Multi-threaded xz compression in rpm-build Compression can take a long time for highly parallel builds because it currently uses only one core. This is especially problematic for continuous integration of large projects that are built on hardware with many cores. This feature, which is provided as a Technology Preview, enables multi-threaded xz compression for source and binary packages when setting the %_source_payload or %_binary_payload macros to the wLTX.xzdio pattern. In it, L represents the compression level, which is 6 by default, and X is the number of threads to be used (may be multiple digits), for example w6T12.xzdio . This can be done by editing the /usr/lib/rpm/macros file or by declaring the macro within the spec file or at the command line. (BZ#1278924) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/technology_previews_installation_and_booting |
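For example, the macro can be declared at the command line for a single build; the spec file name here is hypothetical:
rpmbuild --define '_binary_payload w6T12.xzdio' -bb mypackage.spec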
Chapter 9. Ceph File System snapshot scheduling As a storage administrator, you can take a point-in-time snapshot of a Ceph File System (CephFS) directory. CephFS snapshots are asynchronous, and you can choose which directory snapshots are created in. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. 9.1. Ceph File System snapshot schedules A Ceph File System (CephFS) can schedule snapshots of a file system directory. The scheduling of snapshots is managed by the Ceph Manager and relies on Python Timers. The snapshot schedule data is stored as an object in the CephFS metadata pool, and at runtime, all the schedule data lives in a serialized SQLite database. Important The scheduler is precisely based on the specified time to keep snapshots apart when a storage cluster is under normal load. When the Ceph Manager is under a heavy load, it is possible that a snapshot does not get scheduled right away, resulting in a slightly delayed snapshot. If this happens, the delayed snapshot still acts as if it was taken on time, so delayed snapshots do not cause drift in the overall schedule. Usage Scheduling snapshots for a Ceph File System (CephFS) is managed by the snap_schedule Ceph Manager module. This module provides an interface to add, query, and delete snapshot schedules, and to manage the retention policies. This module also implements the ceph fs snap-schedule command, with several subcommands to manage schedules and retention policies. All of the subcommands take the CephFS volume path and subvolume path arguments to specify the file system path when using multiple Ceph File Systems. If the CephFS volume path is not specified, the argument defaults to the first file system listed in the fs_map , and if the subvolume path argument is not specified, it defaults to nothing. Snapshot schedules are identified by the file system path, the repeat interval, and the start time. The repeat interval defines the time between two subsequent snapshots. The interval format is a number plus a time designator: h (our), d (ay), or w (eek). For example, having an interval of 4h means one snapshot every four hours. The start time is a string value in the ISO format, %Y-%m-%dT%H:%M:%S , and if not specified, the start time uses the default value of last midnight. For example, if you schedule a snapshot at 14:45 , using the default start time value, with a repeat interval of 1h , the first snapshot will be taken at 15:00. Retention policies are identified by the file system path and the retention policy specifications. Defining a retention policy consists of either a number plus a time designator or concatenated pairs in the format COUNT TIME_PERIOD . The policy ensures that a number of snapshots are kept, and that the snapshots are at least the specified time period apart. The time period designators are: h (our), d (ay), w (eek), m (onth), y (ear), and n . The n time period designator is a special modifier, which means keep the last given number of snapshots regardless of timing. For example, 4d means keeping four snapshots that are at least one day apart from each other. Additional Resources See the Creating a snapshot for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. See the Creating a snapshot schedule for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details.
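As a small illustration of the defaults described above: adding the following schedule at 14:45 without a start time uses last midnight as the start, so the 1h schedule takes its first snapshot at 15:00 (a sketch; the path is an example):
ceph fs snap-schedule add /cephfs 1h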
9.2. Adding a snapshot schedule for a Ceph File System Add a snapshot schedule for a CephFS path that does not have a schedule yet. You can create one or more schedules for a single path. Schedules are considered different if their repeat intervals and start times are different. A CephFS path can only have one retention policy, but a retention policy can have multiple count-time period pairs. Note Once the scheduler module is enabled, running the ceph fs snap-schedule command displays the available subcommands and their usage format. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. Root-level access to the Ceph Manager and Metadata Server (MDS) nodes. CephFS snapshots enabled on the file system. Procedure Log into the Cephadm shell on a Ceph Manager node: Example Enable the snap_schedule module: Example Log into the client node: Example Add a new schedule for a Ceph File System: Syntax Example Note START_TIME is represented in ISO 8601 format. This example creates a snapshot schedule for the path /cephfs within the file system mycephfs , snapshotting every hour, starting on 27 June 2022 at 9:50 PM. Add a new retention policy for snapshots of a CephFS volume path: Syntax Example 1 This example keeps 14 snapshots at least one hour apart. 2 This example keeps 4 snapshots at least one day apart. 3 This example keeps 14 hourly and 4 weekly snapshots. List the snapshot schedules to verify that the new schedule is created: Syntax Example This example lists all schedules in the directory tree. Check the status of a snapshot schedule: Syntax Example This example displays the status of the snapshot schedule for the CephFS /cephfs path in JSON format. The default format is plain text, if not specified. Additional Resources See the Ceph File System snapshot schedules section in the Red Hat Ceph Storage File System Guide for more details. See the Ceph File System snapshots section in the Red Hat Ceph Storage File System Guide for more details. 9.3. Adding a snapshot schedule for a Ceph File System subvolume To manage the retention policies for Ceph File System (CephFS) subvolume snapshots, you can have different schedules for a single path. Schedules are considered different if their repeat intervals and start times are different. Add a snapshot schedule for a CephFS file path that does not have a schedule yet. A CephFS path can only have one retention policy, but a retention policy can have multiple count-time period pairs. Note Once the scheduler module is enabled, running the ceph fs snap-schedule command displays the available subcommands and their usage format. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume and subvolume group created. You can create a snapshot schedule for: A directory in a subvolume. A subvolume in the default group. A subvolume in a non-default group. However, the commands are different. Procedure To create a snapshot schedule for a directory in the subvolume: Get the absolute path of the subvolume where the directory exists: Syntax Example Add a snapshot schedule to a directory in the subvolume: Syntax Note The path in the snap-schedule command would be <absolute_path_of_subvolume>/<relative_path_of_test_dir> ; see step 1 for the absolute path of the subvolume. Example Note START_TIME is represented in ISO 8601 format.
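Taken together, the commands referenced in section 9.2 form a short worked sequence; this sketch reuses only the example values shown in this chapter's command listing:
# Enable the scheduler module, then create a schedule and a retention policy
ceph mgr module enable snap_schedule
ceph fs snap-schedule add /cephfs 1h 2022-06-27T21:50:00 --fs mycephfs
ceph fs snap-schedule retention add /cephfs 14h4w
# Verify the schedule
ceph fs snap-schedule status /cephfs --format=json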
This example creates a snapshot schedule for the subvolume path, snapshotting every hour, starting on 27 June 2022 at 9:50 PM. To create a snapshot schedule for a subvolume in the default group, run the following command: Syntax Example Note The path must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . To create a snapshot schedule for a subvolume in a non-default group, run the following command: Syntax Example Note The path must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . 9.3.1. Adding retention policy for snapshot schedules of a CephFS volume path To define how many snapshots to retain in the volume path at any time, you must add a retention policy after you have created a snapshot schedule. You can create retention policies for directories within a subvolume group, for a subvolume within the default group, and for a subvolume within a non-default group. Prerequisites A running and healthy Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed. A minimum of read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume and subvolume group created. A snapshot schedule. Procedure Add a new retention policy for snapshot schedules in a directory of a CephFS subvolume: Syntax Example 1 This example keeps 14 snapshots at least one hour apart. 2 This example keeps 4 snapshots at least one day apart. 3 This example keeps 14 hourly and 4 weekly snapshots. Add a retention policy to a snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . Add a retention policy to the snapshot schedule created for a subvolume group in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . 9.3.2. Listing CephFS snapshot schedules By listing and adhering to snapshot schedules, you can ensure robust data protection and efficient management. Prerequisites A running and healthy Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed. A minimum of read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume and subvolume group created. A snapshot schedule. Procedure List the snapshot schedules: Syntax Example This example lists all schedules in the directory tree. 9.3.3. Checking status of CephFS snapshot schedules You can check the status of snapshot schedules using the commands in this procedure for snapshots created in a directory of a subvolume, for a subvolume in the default subvolume group, and for a subvolume created in a non-default group. Prerequisites A running and healthy Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed. A minimum of read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume and subvolume group created. A snapshot schedule. Procedure Check the status of a snapshot schedule created for a directory in the subvolume: Syntax Example This example displays the status of the snapshot schedule for the /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path in JSON format. The default format is plain text, if not specified.
Check the status of the snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . Check the status of the snapshot schedule created for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . 9.4. Activating snapshot schedule for a Ceph File System This section provides the steps to manually set the snapshot schedule to active for a Ceph File System (CephFS). Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Activate the snapshot schedule: Syntax Example This example activates all schedules for the CephFS /cephfs path. 9.5. Activating snapshot schedule for a Ceph File System subvolume This section provides the steps to manually set the snapshot schedule to active for a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Activate the snapshot schedule created for a directory in a subvolume: Syntax Example This example activates all schedules for the CephFS /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path. Activate the snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . Activate the snapshot schedule created for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . 9.6. Deactivating snapshot schedule for a Ceph File System This section provides the steps to manually set the snapshot schedule to inactive for a Ceph File System (CephFS). This action excludes the snapshot from scheduling until it is activated again. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Snapshot schedule is created and is in active state. Procedure Deactivate a snapshot schedule for a CephFS path: Syntax Example This example deactivates the daily snapshots for the /cephfs path, thereby pausing any further snapshot creation. 9.7. Deactivating snapshot schedule for a Ceph File System subvolume This section provides the steps to manually set the snapshot schedule to inactive for a Ceph File System (CephFS) subvolume. This action excludes the snapshot from scheduling until it is activated again. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Snapshot schedule is created and is in active state. Procedure Deactivate a snapshot schedule for a directory in a CephFS subvolume: Syntax Example This example deactivates the daily snapshots for the /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path, thereby pausing any further snapshot creation.
Deactivate the snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . Deactivate the snapshot schedule created for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . 9.8. Removing a snapshot schedule for a Ceph File System This section provides the steps to remove a snapshot schedule of a Ceph File System (CephFS). Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Snapshot schedule is created. Procedure Remove a specific snapshot schedule: Syntax Example This example removes the specific snapshot schedule for the /cephfs volume that is snapshotting every four hours and started on 16 May 2022 at 2:00 PM. Remove all snapshot schedules for a specific CephFS volume path: Syntax Example This example removes all the snapshot schedules for the /cephfs volume path. 9.9. Removing a snapshot schedule for a Ceph File System subvolume This section provides the steps to remove a snapshot schedule of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Snapshot schedule is created. Procedure Remove a specific snapshot schedule created for a directory in a CephFS subvolume: Syntax Example This example removes the specific snapshot schedule for the /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. volume that is snapshotting every four hours and started on 16 May 2022 at 2:00 PM. Remove a specific snapshot schedule created for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . Remove a specific snapshot schedule created for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . 9.10. Removing snapshot schedule retention policy for a Ceph File System This section provides the steps to remove the retention policy of the scheduled snapshots for a Ceph File System (CephFS). Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Snapshot schedule created for a CephFS volume path. Procedure Remove a retention policy on a CephFS path: Syntax Example 1 This example removes 4 hourly snapshots. 2 This example removes 14 daily and 4 weekly snapshots. 9.11. Removing snapshot schedule retention policy for a Ceph File System subvolume This section provides the steps to remove the retention policy of the scheduled snapshots for a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Snapshot schedule created for a CephFS subvolume path.
Procedure Remove a retention policy for a directory in a CephFS subvolume: Syntax Example 1 This example removes 4 hourly snapshots. 2 This example removes 14 daily and 4 weekly snapshots. Remove a retention policy created on a snapshot schedule for a subvolume in the default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . Remove a retention policy created on a snapshot schedule for a subvolume in the non-default group: Syntax Example Important The path ( / ) must be defined and cannot be left empty. There is no dependency on the path string value; you can define it as / , - , or /.. . A consolidated lifecycle sketch of these commands follows the command listing below. Additional Resources See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide . | [
"cephadm shell",
"ceph mgr module enable snap_schedule",
"cephadm shell",
"ceph fs snap-schedule add FILE_SYSTEM_VOLUME_PATH REPEAT_INTERVAL [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME",
"ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs mycephfs",
"ceph fs snap-schedule retention add FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT",
"ceph fs snap-schedule retention add /cephfs h 14 1 ceph fs snap-schedule retention add /cephfs d 4 2 ceph fs snap-schedule retention add /cephfs 14h4w 3",
"ceph fs snap-schedule list FILE_SYSTEM_VOLUME_PATH [--format=plain|json] [--recursive=true]",
"ceph fs snap-schedule list /cephfs --recursive=true",
"ceph fs snap-schedule status FILE_SYSTEM_VOLUME_PATH [--format=plain|json]",
"ceph fs snap-schedule status /cephfs --format=json",
"ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME SUBVOLUME_GROUP_NAME",
"ceph fs subvolume getpath cephfs subvol_1 subvolgroup_1",
"ceph fs snap-schedule add SUBVOLUME_DIR_PATH SNAP_SCHEDULE [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs cephfs --subvol subvol_1 Schedule set for path /..",
"ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME",
"ceph fs snap-schedule add - 2M --subvol sv_non_def_1",
"ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME",
"ceph fs snap-schedule add - 2M --fs cephfs --subvol sv_non_def_1 --group svg1",
"ceph fs snap-schedule retention add SUBVOLUME_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT",
"ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 14 1 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. d 4 2 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 14h4w 3 Retention added to path /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..",
"ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched Retention added to path /volumes/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME",
"ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention added to path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a54j0dda7f16/..",
"ceph fs snap-schedule list SUBVOLUME_VOLUME_PATH [--format=plain|json] [--recursive=true]",
"ceph fs snap-schedule list / --recursive=true /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h",
"ceph fs snap-schedule status SUBVOLUME_DIR_PATH [--format=plain|json]",
"ceph fs snap-schedule status /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. --format=json {\"fs\": \"cephfs\", \"subvol\": \"subvol_1\", \"path\": \"/volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..\", \"rel_path\": \"/..\", \"schedule\": \"4h\", \"retention\": {\"h\": 14}, \"start\": \"2022-05-16T14:00:00\", \"created\": \"2023-03-20T08:47:18\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}",
"ceph fs snap-schedule status --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule status --fs cephfs --subvol sv_sched {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}",
"ceph fs snap-schedule status --fs _CEPH_FILE_SYSTEM_NAME_ --subvol _SUBVOLUME_NAME_ --group _NON-DEFAULT_SUBVOLGROUP_NAME_",
"ceph fs snap-schedule status --fs cephfs --subvol sv_sched --group subvolgroup_cg {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e564329a-kj87-4763-gh0y-b56c8sev7t23/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}",
"ceph fs snap-schedule activate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]",
"ceph fs snap-schedule activate /cephfs",
"ceph fs snap-schedule activate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]",
"ceph fs snap-schedule activate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..",
"ceph fs snap-schedule activate /.. REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule activate /.. [ REPEAT_INTERVAL ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME",
"ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule deactivate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]",
"ceph fs snap-schedule deactivate /cephfs 1d",
"ceph fs snap-schedule deactivate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]",
"ceph fs snap-schedule deactivate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 1d",
"ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME",
"ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ] [ START_TIME ]",
"ceph fs snap-schedule remove /cephfs 4h 2022-05-16T14:00:00",
"ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH",
"ceph fs snap-schedule remove /cephfs",
"ceph fs snap-schedule remove SUBVOL_DIR_PATH [ REPEAT_INTERVAL ] [ START_TIME ]",
"ceph fs snap-schedule remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h 2022-05-16T14:00:00",
"ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME",
"ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule retention remove FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT",
"ceph fs snap-schedule retention remove /cephfs h 4 1 ceph fs snap-schedule retention remove /cephfs 14d4w 2",
"ceph fs snap-schedule retention remove SUBVOL_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT",
"ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 4 1 ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 14d4w 2",
"ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME",
"ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..",
"ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME",
"ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/.."
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/ceph-file-system-snapshot-scheduling |
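The commands above can be distilled into a short working sequence. The following is a minimal sketch built only from the entries shown; cephfs, sv_sched, and subvolgroup_cg are the documentation's example names, so substitute your own file system, subvolume, and group:

# Check the schedule for a subvolume in a non-default group, then activate it.
ceph fs snap-schedule status --fs cephfs --subvol sv_sched --group subvolgroup_cg
ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched --group subvolgroup_cg
# Later, deactivate and remove the schedule when it is no longer needed.
ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched --group subvolgroup_cg
ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched --group subvolgroup_cg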
10.4. Payment Card Industry Data Security Standard (PCI DSS) | 10.4. Payment Card Industry Data Security Standard (PCI DSS) From https://www.pcisecuritystandards.org/about/index.shtml : The PCI Security Standards Council is an open global forum, launched in 2006, that is responsible for the development, management, education, and awareness of the PCI Security Standards, including the Data Security Standard (DSS). You can download the PCI DSS standard from https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-federal_standards_and_regulations-payment_card_industry_data_security_standard |
12.4. Configuration Examples | 12.4. Configuration Examples 12.4.1. SpamAssassin and Postfix SpamAssassin is an open-source mail filter that provides a way to filter unsolicited email (spam messages) from incoming email. [12] When using Red Hat Enterprise Linux, the spamassassin package provides SpamAssassin. Run the rpm -q spamassassin command to see if the spamassassin package is installed. If it is not installed, run the following command as the root user to install it: SpamAssassin operates in tandem with a mailer such as Postfix to provide spam-filtering capabilities. In order for SpamAssassin to effectively intercept, analyze, and filter mail, it must listen on a network interface. The default port for SpamAssassin is TCP/783; however, this can be changed. The following example provides a real-world demonstration of how SELinux complements SpamAssassin by only allowing it access to a certain port by default. The example then demonstrates how to change the port and have SpamAssassin operate on a non-default port. Note that this is an example only and demonstrates how SELinux can affect a simple configuration of SpamAssassin. Comprehensive documentation of SpamAssassin is beyond the scope of this document. Refer to the official SpamAssassin documentation for further details. This example assumes that the spamassassin package is installed, that any firewall has been configured to allow access on the ports in use, that the SELinux targeted policy is used, and that SELinux is running in enforcing mode: Procedure 12.1. Running SpamAssassin on a non-default port Run the semanage command to show the port that SELinux allows spamd to listen on by default: This output shows that TCP/783 is defined in spamd_port_t as the port for SpamAssassin to operate on. Edit the /etc/sysconfig/spamassassin configuration file and modify it so that it will start SpamAssassin on the example port TCP/10000: This line now specifies that SpamAssassin will operate on port 10000. The rest of this example shows how to modify SELinux policy to allow this socket to be opened. Start SpamAssassin; an error message similar to the following appears: This output means that SELinux has blocked access to this port. A denial similar to the following is logged by SELinux: As the root user, run semanage to modify SELinux policy in order to allow SpamAssassin to operate on the example port (TCP/10000): Confirm that SpamAssassin now starts and is operating on TCP port 10000: At this point, spamd is properly operating on TCP port 10000 as it has been allowed access to that port by SELinux policy. [12] Refer to the SpamAssassin project page for more information. | [
"~]# yum install spamassassin",
"~]# semanage port -l | grep spamd spamd_port_t tcp 783",
"Options to spamd SPAMDOPTIONS=\"-d -p 10000 -c m5 -H\"",
"~]# service spamassassin start Starting spamd: [2203] warn: server socket setup failed, retry 1: spamd: could not create INET socket on 127.0.0.1:10000: Permission denied [2203] warn: server socket setup failed, retry 2: spamd: could not create INET socket on 127.0.0.1:10000: Permission denied [2203] error: spamd: could not create INET socket on 127.0.0.1:10000: Permission denied spamd: could not create INET socket on 127.0.0.1:10000: Permission denied [FAILED]",
"SELinux is preventing the spamd (spamd_t) from binding to port 10000.",
"~]# semanage port -a -t spamd_port_t -p tcp 10000",
"~]# service spamassassin start Starting spamd: [ OK ] ~]# netstat -lnp | grep 10000 tcp 0 0 127.0.0.1:10000 0.0.0.0:* LISTEN 2224/spamd.pid"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-postfix-configuration_examples |
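Condensed into a single sequence, Procedure 12.1 amounts to the following sketch (run as root; TCP/10000 is the example port from the procedure):

semanage port -l | grep spamd                  # confirm the default label (TCP/783)
semanage port -a -t spamd_port_t -p tcp 10000  # allow spamd to bind to TCP/10000
service spamassassin start                     # now starts cleanly
netstat -lnp | grep 10000                      # verify spamd is listening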
Chapter 6. Using the CLI tools | Chapter 6. Using the CLI tools The two primary CLI tools used for managing resources in the cluster are: The OpenShift Virtualization virtctl client The OpenShift Container Platform oc client 6.1. Prerequisites You must install the virtctl client . 6.2. Virtctl client commands The virtctl client is a command-line utility for managing OpenShift Virtualization resources. The following table contains the virtctl commands used throughout the OpenShift Virtualization documentation. To view a list of options that you can use with a command, run it with the -h or --help flag. For example: $ virtctl image-upload -h Table 6.1. virtctl client commands Command Description virtctl start <vm_name> Start a virtual machine. virtctl stop <vm_name> Stop a virtual machine. virtctl pause vm|vmi <object_name> Pause a virtual machine or virtual machine instance. The machine state is kept in memory. virtctl unpause vm|vmi <object_name> Unpause a virtual machine or virtual machine instance. virtctl migrate <vm_name> Migrate a virtual machine. virtctl restart <vm_name> Restart a virtual machine. virtctl expose <vm_name> Create a service that forwards a designated port of a virtual machine or virtual machine instance and expose the service on the specified port of the node. virtctl console <vmi_name> Connect to a serial console of a virtual machine instance. virtctl vnc <vmi_name> Open a VNC connection to a virtual machine instance. virtctl image-upload dv <datavolume_name> --image-path=</path/to/image> --no-create Upload a virtual machine image to a data volume that already exists. virtctl image-upload dv <datavolume_name> --size=<datavolume_size> --image-path=</path/to/image> Upload a virtual machine image to a new data volume. virtctl version Display the client and server version information. virtctl help Display a descriptive list of virtctl commands. virtctl fslist <vmi_name> Return a full list of file systems available on the guest machine. virtctl guestosinfo <vmi_name> Return guest agent information about the operating system. virtctl userlist <vmi_name> Return a full list of logged-in users on the guest machine. 6.3. OpenShift Container Platform client commands The OpenShift Container Platform oc client is a command-line utility for managing OpenShift Container Platform resources, including the VirtualMachine ( vm ) and VirtualMachineInstance ( vmi ) object types. Note You can use the -n <namespace> flag to specify a different project. Table 6.2. oc commands Command Description oc login -u <user_name> Log in to the OpenShift Container Platform cluster as <user_name> . oc get <object_type> Display a list of objects for the specified object type in the current project. oc describe <object_type> <resource_name> Display details of the specific resource in the current project. oc create -f <object_config> Create a resource in the current project from a file name or from stdin. oc edit <object_type> <resource_name> Edit a resource in the current project. oc delete <object_type> <resource_name> Delete a resource in the current project. For more comprehensive information on oc client commands, see the OpenShift Container Platform CLI tools documentation. | [
"virtctl image-upload -h"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/openshift_virtualization/virt-using-the-cli-tools |
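As a quick illustration of how the two clients are typically combined, the following sketch starts a virtual machine and inspects it; the names my-vm and my-project are hypothetical placeholders:

virtctl start my-vm          # start the virtual machine
oc get vmi -n my-project     # confirm the VirtualMachineInstance is running
virtctl console my-vm        # attach to its serial console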
7.64. gdm | 7.64. gdm 7.64.1. RHBA-2013:0381 - gdm bug fix and enhancement update Updated gdm packages that fix four bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The gdm packages provide the GNOME Display Manager (GDM), which implements the graphical login screen, shown shortly after boot up, log out, and when user-switching. Bug Fixes BZ#616755 Previously, the gdm_smartcard_extension_is_visible() function returned "TRUE" instead of the "ret" variable. Consequently, the smartcard login could not be disabled in the system-config-authentication window if the pcsd package was installed. With this update, gdm_smartcard_extension_is_visible() has been modified to return the correct value. As a result, the described error no longer occurs. BZ#704245 When GDM was used to connect to a host via XDMCP (X Display Manager Control Protocol), another connection to a remote system using the "ssh -X" command resulted in failed authentication with the X server. Consequently, applications such as xterm could not be displayed on a remote system. This update provides a compatible MIT-MAGIC-COOKIE-1 key in the described scenario, thus fixing this incompatibility. BZ#738462 Previously, X server audit messages were not included by default in the X server log. Now, those messages are unconditionally included in the log. Also, with this update, verbose messages are added to the X server log if debugging is enabled in the /etc/gdm/custom.conf file by setting "Enable=true" in the "debug" section. BZ# 820058 Previously, after booting the system, the following message occurred in the /var/log/gdm/:0-greeter.log file: With this update, this warning is no longer displayed. Enhancements BZ#719647 With this update, GDM has been modified to allow smartcard authentication when the visible user list is disabled. BZ#834303 Previously, the GDM debugging logs were stored in the /var/log/messages file. With this update, a separate /var/log/gdm/daemon.log file has been established for these debugging logs. All users of gdm are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | [
"gdm-simple-greeter[PID]: Gtk-WARNING: gtkwidget.c:5460: widget not within a GtkWindow"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gdm |
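As a companion to the BZ#738462 note above, the following is a minimal sketch of enabling GDM debug logging; it assumes /etc/gdm/custom.conf does not already contain a [debug] section (run as root):

# Append a [debug] section to /etc/gdm/custom.conf.
tee -a /etc/gdm/custom.conf <<'EOF'
[debug]
Enable=true
EOF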
Chapter 30. Profiling memory allocation with numastat | Chapter 30. Profiling memory allocation with numastat With the numastat tool, you can display statistics over memory allocations in a system. The numastat tool displays data for each NUMA node separately. You can use this information to investigate memory performance of your system or the effectiveness of different memory policies on your system. 30.1. Default numastat statistics By default, the numastat tool displays statistics over these categories of data for each NUMA node: numa_hit The number of pages that were successfully allocated to this node. numa_miss The number of pages that were allocated on this node because of low memory on the intended node. Each numa_miss event has a corresponding numa_foreign event on another node. numa_foreign The number of pages initially intended for this node that were allocated to another node instead. Each numa_foreign event has a corresponding numa_miss event on another node. interleave_hit The number of interleave policy pages successfully allocated to this node. local_node The number of pages successfully allocated on this node by a process on this node. other_node The number of pages allocated on this node by a process on another node. Note High numa_hit values and low numa_miss values (relative to each other) indicate optimal performance. 30.2. Viewing memory allocation with numastat You can view the memory allocation of the system by using the numastat tool. Prerequisites Install the numactl package: Procedure View the memory allocation of your system: Additional resources numastat(8) man page on your system | [
"yum install numactl",
"numastat node0 node1 numa_hit 76557759 92126519 numa_miss 30772308 30827638 numa_foreign 30827638 30772308 interleave_hit 106507 103832 local_node 76502227 92086995 other_node 30827840 30867162"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/profiling-memory-allocation-with-numastat_monitoring-and-managing-system-status-and-performance |
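Building on the default output above, a small sketch that isolates the two counters referenced in the note; per node, a high numa_hit value relative to numa_miss indicates optimal placement:

# Print the header row plus only the hit/miss counters.
numastat | awk 'NR==1 || /^numa_hit|^numa_miss/'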
Chapter 105. Protobuf Jackson | Chapter 105. Protobuf Jackson Jackson Protobuf is a Data Format which uses the Jackson library with the Protobuf extension to unmarshal a Protobuf payload into Java objects or to marshal Java objects into a Protobuf payload. Note If you are familiar with Jackson, this Protobuf data format behaves in the same way as its JSON counterpart, and thus can be used with classes annotated for JSON serialization/deserialization. from("kafka:topic"). unmarshal().protobuf(ProtobufLibrary.Jackson, JsonNode.class). to("log:info"); 105.1. Dependencies When using protobuf-jackson with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-protobuf-starter</artifactId> </dependency> 105.2. Configuring the SchemaResolver Since Protobuf serialization is schema-based, this data format requires that you provide a SchemaResolver object that is able to lookup the schema for each exchange that is going to be marshalled/unmarshalled. You can add a single SchemaResolver to the registry and it will be looked up automatically. Or you can explicitly specify the reference to a custom SchemaResolver. 105.3. Protobuf Jackson Options The Protobuf Jackson dataformat supports 18 options, which are listed below. Name Default Java Type Description contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. objectMapper String Lookup and use the existing ObjectMapper with the given id when using Jackson. useDefaultObjectMapper Boolean Whether to lookup and use default Jackson ObjectMapper from the registry. unmarshalType String Class name of the java type to use when unmarshalling. jsonView String When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. include String If you want to marshal a pojo to JSON and the pojo has some fields with null values that you want to skip, you can set this option to NON_NULL. allowJmsType Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionType String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. useList Boolean To unmarshal to a List of Map or a List of Pojo. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper.
The features should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. allowUnmarshallType Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. timezone String If set then Jackson will use the Timezone when marshalling/unmarshalling. autoDiscoverObjectMapper Boolean If set to true then Jackson will lookup for an objectMapper into the registry. schemaResolver String Optional schema resolver used to lookup schemas for the data in transit. autoDiscoverSchemaResolver Boolean When not disabled, the SchemaResolver will be looked up into the registry. 105.4. Using custom ProtobufMapper You can configure JacksonProtobufDataFormat to use a custom ProtobufMapper in case you need more control of the mapping configuration. If you set up a single ProtobufMapper in the registry, then Camel will automatically look up and use this ProtobufMapper . 105.5. Spring Boot Auto-Configuration The component supports 19 options, which are listed below. Name Description Default Type camel.dataformat.protobuf-jackson.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.protobuf-jackson.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.protobuf-jackson.auto-discover-object-mapper If set to true then Jackson will lookup for an objectMapper into the registry. false Boolean camel.dataformat.protobuf-jackson.auto-discover-schema-resolver When not disabled, the SchemaResolver will be looked up into the registry. true Boolean camel.dataformat.protobuf-jackson.collection-type Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. String camel.dataformat.protobuf-jackson.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.protobuf-jackson.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.protobuf-jackson.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.protobuf-jackson.enabled Whether to enable auto configuration of the protobuf-jackson data format. This is enabled by default.
Boolean camel.dataformat.protobuf-jackson.include If you want to marshal a pojo to JSON and the pojo has some fields with null values that you want to skip, you can set this option to NON_NULL. String camel.dataformat.protobuf-jackson.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. String camel.dataformat.protobuf-jackson.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.protobuf-jackson.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.protobuf-jackson.object-mapper Lookup and use the existing ObjectMapper with the given id when using Jackson. String camel.dataformat.protobuf-jackson.schema-resolver Optional schema resolver used to lookup schemas for the data in transit. String camel.dataformat.protobuf-jackson.timezone If set then Jackson will use the Timezone when marshalling/unmarshalling. String camel.dataformat.protobuf-jackson.unmarshal-type Class name of the java type to use when unmarshalling. String camel.dataformat.protobuf-jackson.use-default-object-mapper Whether to lookup and use default Jackson ObjectMapper from the registry. true Boolean camel.dataformat.protobuf-jackson.use-list To unmarshal to a List of Map or a List of Pojo. false Boolean | [
"from(\"kafka:topic\"). unmarshal().protobuf(ProtobufLibrary.Jackson, JsonNode.class). to(\"log:info\");",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-protobuf-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-protobuf-jackson-dataformat-starter |
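Because the Spring Boot starter exposes the options above as application properties, basic configuration needs no Java code. A minimal sketch appending two of the listed properties to application.properties; the class com.example.MyPojo is a hypothetical placeholder for your own type:

# Hypothetical sketch: configure the unmarshal target for protobuf-jackson.
cat >> src/main/resources/application.properties <<'EOF'
camel.dataformat.protobuf-jackson.unmarshal-type=com.example.MyPojo
camel.dataformat.protobuf-jackson.use-list=false
EOF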
Chapter 9. Red Hat Enterprise Linux CoreOS (RHCOS) | Chapter 9. Red Hat Enterprise Linux CoreOS (RHCOS) 9.1. About RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) represents the next generation of single-purpose container operating system technology by providing the quality standards of Red Hat Enterprise Linux (RHEL) with automated, remote upgrade features. RHCOS is supported only as a component of OpenShift Container Platform 4.15 for all OpenShift Container Platform machines. RHCOS is the only supported operating system for OpenShift Container Platform control plane, or master, machines. While RHCOS is the default operating system for all cluster machines, you can create compute machines, which are also known as worker machines, that use RHEL as their operating system. There are two general ways RHCOS is deployed in OpenShift Container Platform 4.15: If you install your cluster on infrastructure that the installation program provisions, RHCOS images are downloaded to the target platform during installation. Suitable Ignition config files, which control the RHCOS configuration, are also downloaded and used to deploy the machines. If you install your cluster on infrastructure that you manage, you must follow the installation documentation to obtain the RHCOS images, generate Ignition config files, and use the Ignition config files to provision your machines. 9.1.1. Key RHCOS features The following list describes key features of the RHCOS operating system: Based on RHEL : The underlying operating system consists primarily of RHEL components. The same quality, security, and control measures that support RHEL also support RHCOS. For example, RHCOS software is in RPM packages, and each RHCOS system starts up with a RHEL kernel and a set of services that are managed by the systemd init system. Controlled immutability : Although it contains RHEL components, RHCOS is designed to be managed more tightly than a default RHEL installation. Management is performed remotely from the OpenShift Container Platform cluster. When you set up your RHCOS machines, you can modify only a few system settings. This controlled immutability allows OpenShift Container Platform to store the latest state of RHCOS systems in the cluster so it is always able to create additional machines and perform updates based on the latest RHCOS configurations. CRI-O container runtime : Although RHCOS contains features for running the OCI- and libcontainer-formatted containers that Docker requires, it incorporates the CRI-O container engine instead of the Docker container engine. By focusing on features needed by Kubernetes platforms, such as OpenShift Container Platform, CRI-O can offer specific compatibility with different Kubernetes versions. CRI-O also offers a smaller footprint and reduced attack surface than is possible with container engines that offer a larger feature set. At the moment, CRI-O is the only engine available within OpenShift Container Platform clusters. CRI-O can use either the runC or crun container runtime to start and manage containers. For information about how to enable crun, see the documentation for creating a ContainerRuntimeConfig CR. Set of container tools : For tasks such as building, copying, and otherwise managing containers, RHCOS replaces the Docker CLI tool with a compatible set of container tools. The podman CLI tool supports many container runtime features, such as running, starting, stopping, listing, and removing containers and container images.
The skopeo CLI tool can copy, authenticate, and sign images. You can use the crictl CLI tool to work with containers and pods from the CRI-O container engine. While direct use of these tools in RHCOS is discouraged, you can use them for debugging purposes. rpm-ostree upgrades : RHCOS features transactional upgrades using the rpm-ostree system. Updates are delivered by means of container images and are part of the OpenShift Container Platform update process. When deployed, the container image is pulled, extracted, and written to disk, then the bootloader is modified to boot into the new version. The machine will reboot into the update in a rolling manner to ensure cluster capacity is minimally impacted. bootupd firmware and bootloader updater : Package managers and hybrid systems such as rpm-ostree do not update the firmware or the bootloader. With bootupd , RHCOS users have access to a cross-distribution, system-agnostic update tool that manages firmware and boot updates in UEFI and legacy BIOS boot modes that run on modern architectures, such as x86_64, ppc64le, and aarch64. For information about how to install bootupd , see the documentation for Updating the bootloader using bootupd . Updated through the Machine Config Operator : In OpenShift Container Platform, the Machine Config Operator handles operating system upgrades. Instead of upgrading individual packages, as is done with yum upgrades, rpm-ostree delivers upgrades of the OS as an atomic unit. The new OS deployment is staged during upgrades and goes into effect on the next reboot. If something goes wrong with the upgrade, a single rollback and reboot returns the system to the previous state. RHCOS upgrades in OpenShift Container Platform are performed during cluster updates. For RHCOS systems, the layout of the rpm-ostree file system has the following characteristics: /usr is where the operating system binaries and libraries are stored and is read-only. We do not support altering this. /etc , /boot , /var are writable on the system but only intended to be altered by the Machine Config Operator. /var/lib/containers is the graph storage location for storing container images. 9.1.2. Choosing how to configure RHCOS RHCOS is designed to deploy on an OpenShift Container Platform cluster with a minimal amount of user configuration. In its most basic form, this consists of: Starting with a provisioned infrastructure, such as on AWS, or provisioning the infrastructure yourself. Supplying a few pieces of information, such as credentials and cluster name, in an install-config.yaml file when running openshift-install . Because RHCOS systems in OpenShift Container Platform are designed to be fully managed from the OpenShift Container Platform cluster after that, directly changing an RHCOS machine is discouraged. Although limited direct access to RHCOS machines in a cluster can be accomplished for debugging purposes, you should not directly configure RHCOS systems. Instead, if you need to add or change features on your OpenShift Container Platform nodes, consider making changes in the following ways: Kubernetes workload objects, such as DaemonSet and Deployment : If you need to add services or other user-level features to your cluster, consider adding them as Kubernetes workload objects. Keeping those features outside of specific node configurations is the best way to reduce the risk of breaking the cluster on subsequent upgrades.
Day-2 customizations : If possible, bring up a cluster without making any customizations to cluster nodes and make necessary node changes after the cluster is up. Those changes are easier to track later and less likely to break updates. Creating machine configs or modifying Operator custom resources are ways of making these customizations. Day-1 customizations : For customizations that you must implement when the cluster first comes up, there are ways of modifying your cluster so changes are implemented on first boot. Day-1 customizations can be done through Ignition configs and manifest files during openshift-install or by adding boot options during ISO installs provisioned by the user. Here are examples of customizations you could do on day 1: Kernel arguments : If particular kernel features or tuning is needed on nodes when the cluster first boots. Disk encryption : If your security needs require that the root file system on the nodes is encrypted, such as with FIPS support. Kernel modules : If a particular hardware device, such as a network card or video card, does not have a usable module available by default in the Linux kernel. Chronyd : If you want to provide specific clock settings to your nodes, such as the location of time servers. To accomplish these tasks, you can augment the openshift-install process to include additional objects such as MachineConfig objects. Those procedures that result in creating machine configs can be passed to the Machine Config Operator after the cluster is up. Note The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.1.3. Choosing how to deploy RHCOS Differences between RHCOS installations for OpenShift Container Platform are based on whether you are deploying on an infrastructure provisioned by the installer or by the user: Installer-provisioned : Some cloud environments offer preconfigured infrastructures that allow you to bring up an OpenShift Container Platform cluster with minimal configuration. For these types of installations, you can supply Ignition configs that place content on each node so it is there when the cluster first boots. User-provisioned : If you are provisioning your own infrastructure, you have more flexibility in how you add content to a RHCOS node. For example, you could add kernel arguments when you boot the RHCOS ISO installer to install each system. However, in most cases where configuration is required on the operating system itself, it is best to provide that configuration through an Ignition config. The Ignition facility runs only when the RHCOS system is first set up. After that, Ignition configs can be supplied later using the machine config. 9.1.4.
About Ignition Ignition is the utility that is used by RHCOS to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. On first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines. Whether you are installing your cluster or adding machines to it, Ignition always performs the initial configuration of the OpenShift Container Platform cluster machines. Most of the actual system setup happens on each machine itself. For each machine, Ignition takes the RHCOS image and boots the RHCOS kernel. Options on the kernel command line identify the type of deployment and the location of the Ignition-enabled initial RAM disk (initramfs). 9.1.4.1. How Ignition works To create machines by using Ignition, you need Ignition config files. The OpenShift Container Platform installation program creates the Ignition config files that you need to deploy your cluster. These files are based on the information that you provide to the installation program directly or through an install-config.yaml file. The way that Ignition configures machines is similar to how tools like cloud-init or Linux Anaconda kickstart configure systems, but with some important differences: Ignition runs from an initial RAM disk that is separate from the system you are installing to. Because of that, Ignition can repartition disks, set up file systems, and perform other changes to the machine's permanent file system. In contrast, cloud-init runs as part of a machine init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node boot process. Ignition is meant to initialize systems, not change existing systems. After a machine initializes and the kernel is running from the installed system, the Machine Config Operator from the OpenShift Container Platform cluster completes all future machine configuration. Instead of completing a defined set of actions, Ignition implements a declarative configuration. It checks that all partitions, files, services, and other items are in place before the new machine starts. It then makes the changes, like copying files to disk that are necessary for the new machine to meet the specified configuration. After Ignition finishes configuring a machine, the kernel keeps running but discards the initial RAM disk and pivots to the installed system on disk. All of the new system services and other features start without requiring a system reboot. Because Ignition confirms that all new machines meet the declared configuration, you cannot have a partially configured machine. If a machine setup fails, the initialization process does not finish, and Ignition does not start the new machine. Your cluster will never contain partially configured machines. If Ignition cannot complete, the machine is not added to the cluster. You must add a new machine instead. This behavior prevents the difficult case of debugging a machine when the results of a failed configuration task are not known until something that depended on it fails at a later date. If there is a problem with an Ignition config that causes the setup of a machine to fail, Ignition will not try to use the same config to set up another machine. 
For example, a failure could result from an Ignition config made up of a parent and child config that both want to create the same file. A failure in such a case would prevent that Ignition config from being used again to set up other machines until the problem is resolved. If you have multiple Ignition config files, you get a union of that set of configs. Because Ignition is declarative, conflicts between the configs could cause Ignition to fail to set up the machine. The order of information in those files does not matter. Ignition will sort and implement each setting in ways that make the most sense. For example, if a file needs a directory several levels deep and another file needs a directory along that path, the later file is created first. Ignition sorts and creates all files, directories, and links by depth. Because Ignition can start with a completely empty hard disk, it can do something cloud-init cannot do: set up systems on bare metal from scratch using features such as PXE boot. In the bare metal case, the Ignition config is injected into the boot partition so that Ignition can find it and configure the system correctly. 9.1.4.2. The Ignition sequence The Ignition process for an RHCOS machine in an OpenShift Container Platform cluster involves the following steps: The machine gets its Ignition config file. Control plane machines get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a control plane machine. Ignition creates disk partitions, file systems, directories, and links on the machine. It supports RAID arrays but does not support LVM volumes. Ignition mounts the root of the permanent file system to the /sysroot directory in the initramfs and starts working in that /sysroot directory. Ignition configures all defined file systems and sets them up to mount appropriately at runtime. Ignition runs systemd temporary files to populate required files in the /var directory. Ignition runs the Ignition config files to set up users, systemd unit files, and other configuration files. Ignition unmounts all components in the permanent system that were mounted in the initramfs. Ignition starts up the init process of the new machine, which in turn starts up all other services on the machine that run during system boot. At the end of this process, the machine is ready to join the cluster and does not require a reboot. 9.2. Viewing Ignition configuration files To see the Ignition config file used to deploy the bootstrap machine, run the following command: $ openshift-install create ignition-configs --dir $HOME/testconfig After you answer a few questions, the bootstrap.ign , master.ign , and worker.ign files appear in the directory you entered. To see the contents of the bootstrap.ign file, pipe it through the jq filter. Here's a snippet from that file: $ cat $HOME/testconfig/bootstrap.ign | jq { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc...."
] } ] }, "storage": { "files": [ { "overwrite": false, "path": "/etc/motd", "user": { "name": "root" }, "append": [ { "source": "data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==" } ], "mode": 420 }, ... To decode the contents of a file listed in the bootstrap.ign file, pipe the base64-encoded data string representing the contents of that file to the base64 -d command. Here's an example using the contents of the /etc/motd file added to the bootstrap machine from the output shown above: $ echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode Example output This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service Repeat those commands on the master.ign and worker.ign files to see the source of Ignition config files for each of those machine types. You should see a line like the following for the worker.ign , identifying how it gets its Ignition config from the bootstrap machine: "source": "https://api.myign.develcluster.example.com:22623/config/worker", Here are a few things you can learn from the bootstrap.ign file: Format: The format of the file is defined in the Ignition config spec . Files of the same format are used later by the MCO to merge changes into a machine's configuration. Contents: Because the bootstrap machine serves the Ignition configs for other machines, both master and worker machine Ignition config information is stored in the bootstrap.ign , along with the bootstrap machine's configuration. Size: The file is more than 1300 lines long, with paths to various types of resources. The content of each file that will be copied to the machine is actually encoded into data URLs, which tends to make the content a bit clumsy to read. (Use the jq and base64 commands shown previously to make the content more readable.) Configuration: The different sections of the Ignition config file are generally meant to contain files that are just dropped into a machine's file system, rather than commands to modify existing files. For example, instead of having a section on NFS that configures that service, you would just add an NFS configuration file, which would then be started by the init process when the system comes up. users: A user named core is created, with your SSH key assigned to that user. This allows you to log in to the cluster with that user name and your credentials. storage: The storage section identifies files that are added to each machine. A few notable files include /root/.docker/config.json (which provides credentials your cluster needs to pull from container image registries) and a bunch of manifest files in /opt/openshift/manifests that are used to configure your cluster. systemd: The systemd section holds content used to create systemd unit files.
Those files are used to start up services at boot time, as well as manage those services on running systems. Primitives: Ignition also exposes low-level primitives that other tools can build on. 9.3. Changing Ignition configs after installation Machine config pools manage a cluster of nodes and their corresponding machine configs. Machine configs contain configuration information for a cluster. To list all machine config pools that are known: $ oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False To list all machine configs: $ oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m The Machine Config Operator acts somewhat differently than Ignition when it comes to applying these machine configs. The machine configs are read in order (from 00* to 99*). Labels inside the machine configs identify the type of node each is for (master or worker). If the same file appears in multiple machine config files, the last one wins. So, for example, any file that appears in a 99* file would replace the same file that appeared in a 00* file. The input MachineConfig objects are unioned into a "rendered" MachineConfig object, which will be used as a target by the operator and is the value you can see in the machine config pool. To see what files are being managed from a machine config, look for "Path:" inside a particular MachineConfig object. For example: $ oc describe machineconfigs 01-worker-container-runtime | grep Path: Example output Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf Be sure to give the machine config file a later name (such as 10-worker-container-runtime). Keep in mind that the content of each file is in URL-style data. Then apply the new machine config to the cluster. | [
"openshift-install create ignition-configs --dir USDHOME/testconfig",
"cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },",
"echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode",
"This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service",
"\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",",
"USD oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m",
"oc describe machineconfigs 01-worker-container-runtime | grep Path:",
"Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/architecture/architecture-rhcos |
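As a small extension of the jq workflow in section 9.2, the following sketch decodes an embedded file without copying the base64 string by hand. It assumes the bootstrap.ign generated above and that the /etc/motd entry uses the append field as shown; files stored under a contents field would need a different filter:

# Extract and decode one embedded file from the Ignition config.
jq -r '.storage.files[] | select(.path == "/etc/motd") | .append[0].source' \
    $HOME/testconfig/bootstrap.ign \
  | sed 's|^data:text/plain;charset=utf-8;base64,||' \
  | base64 --decode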
probe::udp.sendmsg | probe::udp.sendmsg Name probe::udp.sendmsg - Fires whenever a process sends a UDP message Synopsis Values name The name of this probe size Number of bytes sent by the process sock Network socket used by the process Context The process which sent a UDP message | [
"udp.sendmsg"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-udp-sendmsg |
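A one-line SystemTap invocation shows the probe in use; this is a sketch that prints the context values documented above (requires SystemTap and kernel debuginfo; run as root):

# Trace UDP sends system-wide: process name and bytes per message.
stap -e 'probe udp.sendmsg { printf("%s sent %d bytes\n", execname(), size) }'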
Chapter 4. InstallPlan [operators.coreos.com/v1alpha1] | Chapter 4. InstallPlan [operators.coreos.com/v1alpha1] Description InstallPlan defines the installation of a set of operators. Type object Required metadata spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object InstallPlanSpec defines a set of Application resources to be installed status object InstallPlanStatus represents the information about the status of steps required to complete installation. Status may trail the actual state of a system. 4.1.1. .spec Description InstallPlanSpec defines a set of Application resources to be installed Type object Required approval approved clusterServiceVersionNames Property Type Description approval string Approval is the user approval policy for an InstallPlan. It must be one of "Automatic" or "Manual". approved boolean clusterServiceVersionNames array (string) generation integer source string sourceNamespace string 4.1.2. .status Description InstallPlanStatus represents the information about the status of steps required to complete installation. Status may trail the actual state of a system. Type object Required catalogSources phase Property Type Description attenuatedServiceAccountRef object AttenuatedServiceAccountRef references the service account that is used to do scoped operator install. bundleLookups array BundleLookups is the set of in-progress requests to pull and unpackage bundle content to the cluster. bundleLookups[] object BundleLookup is a request to pull and unpackage the content of a bundle to the cluster. catalogSources array (string) conditions array conditions[] object InstallPlanCondition represents the overall status of the execution of an InstallPlan. message string Message is a human-readable message containing detailed information that may be important to understanding why the plan has its current status. phase string InstallPlanPhase is the current status of a InstallPlan as a whole. plan array plan[] object Step represents the status of an individual step in an InstallPlan. startTime string StartTime is the time when the controller began applying the resources listed in the plan to the cluster. 4.1.3. .status.attenuatedServiceAccountRef Description AttenuatedServiceAccountRef references the service account that is used to do scoped operator install. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 4.1.4. .status.bundleLookups Description BundleLookups is the set of in-progress requests to pull and unpackage bundle content to the cluster. Type array 4.1.5. .status.bundleLookups[] Description BundleLookup is a request to pull and unpackage the content of a bundle to the cluster. Type object Required catalogSourceRef identifier path replaces Property Type Description catalogSourceRef object CatalogSourceRef is a reference to the CatalogSource the bundle path was resolved from. conditions array Conditions represents the overall state of a BundleLookup. conditions[] object identifier string Identifier is the catalog-unique name of the operator (the name of the CSV for bundles that contain CSVs) path string Path refers to the location of a bundle to pull. It's typically an image reference. properties string The effective properties of the unpacked bundle. replaces string Replaces is the name of the bundle to replace with the one found at Path. 4.1.6. .status.bundleLookups[].catalogSourceRef Description CatalogSourceRef is a reference to the CatalogSource the bundle path was resolved from. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 4.1.7. .status.bundleLookups[].conditions Description Conditions represents the overall state of a BundleLookup. Type array 4.1.8. .status.bundleLookups[].conditions[] Description Type object Required status type Property Type Description lastTransitionTime string Last time the condition transitioned from one status to another. lastUpdateTime string Last time the condition was probed. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of condition. 4.1.9. .status.conditions Description Type array 4.1.10. .status.conditions[] Description InstallPlanCondition represents the overall status of the execution of an InstallPlan. Type object Property Type Description lastTransitionTime string lastUpdateTime string message string reason string ConditionReason is a camelcased reason for the state transition. status string type string InstallPlanConditionType describes the state of an InstallPlan at a certain point as a whole. 4.1.11. .status.plan Description Type array 4.1.12. .status.plan[] Description Step represents the status of an individual step in an InstallPlan. Type object Required resolving resource status Property Type Description optional boolean resolving string resource object StepResource represents the status of a resource to be tracked by an InstallPlan. status string StepStatus is the current status of a particular resource in an InstallPlan 4.1.13. .status.plan[].resource Description StepResource represents the status of a resource to be tracked by an InstallPlan. Type object Required group kind name sourceName sourceNamespace version Property Type Description group string kind string manifest string name string sourceName string sourceNamespace string version string 4.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/installplans GET : list objects of kind InstallPlan /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans DELETE : delete collection of InstallPlan GET : list objects of kind InstallPlan POST : create an InstallPlan /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans/{name} DELETE : delete an InstallPlan GET : read the specified InstallPlan PATCH : partially update the specified InstallPlan PUT : replace the specified InstallPlan /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans/{name}/status GET : read status of the specified InstallPlan PATCH : partially update status of the specified InstallPlan PUT : replace status of the specified InstallPlan 4.2.1. /apis/operators.coreos.com/v1alpha1/installplans HTTP method GET Description list objects of kind InstallPlan Table 4.1. HTTP responses HTTP code Response body 200 - OK InstallPlanList schema 401 - Unauthorized Empty 4.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans HTTP method DELETE Description delete collection of InstallPlan Table 4.2.
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind InstallPlan Table 4.3. HTTP responses HTTP code Response body 200 - OK InstallPlanList schema 401 - Unauthorized Empty HTTP method POST Description create an InstallPlan Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body InstallPlan schema Table 4.6. HTTP responses HTTP code Response body 200 - OK InstallPlan schema 201 - Created InstallPlan schema 202 - Accepted InstallPlan schema 401 - Unauthorized Empty 4.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the InstallPlan HTTP method DELETE Description delete an InstallPlan Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified InstallPlan Table 4.10. HTTP responses HTTP code Response body 200 - OK InstallPlan schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified InstallPlan Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. HTTP responses HTTP code Response body 200 - OK InstallPlan schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified InstallPlan Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body InstallPlan schema Table 4.15. HTTP responses HTTP code Response body 200 - OK InstallPlan schema 201 - Created InstallPlan schema 401 - Unauthorized Empty 4.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/installplans/{name}/status Table 4.16. Global path parameters Parameter Type Description name string name of the InstallPlan HTTP method GET Description read status of the specified InstallPlan Table 4.17. HTTP responses HTTP code Response body 200 - OK InstallPlan schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified InstallPlan Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Response body 200 - OK InstallPlan schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified InstallPlan Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body InstallPlan schema Table 4.22. HTTP responses HTTP code Response body 200 - OK InstallPlan schema 201 - Created InstallPlan schema 401 - Unauthorized Empty
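For illustration only, the following sketch shows one way to call the PATCH endpoint for a named InstallPlan from the command line. It assumes a reachable API server, a bearer token obtained with oc whoami -t, and the spec.approved field of the InstallPlan spec, which is not documented in this excerpt; substitute your own namespace and resource name.
$ curl -k -X PATCH \
  -H "Authorization: Bearer $(oc whoami -t)" \
  -H "Content-Type: application/merge-patch+json" \
  --data '{"spec":{"approved":true}}' \
  https://<api_server>:6443/apis/operators.coreos.com/v1alpha1/namespaces/<namespace>/installplans/<name>
The same update can be expressed with the CLI, which calls this endpoint for you: oc -n <namespace> patch installplan <name> --type merge -p '{"spec":{"approved":true}}'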
Replacing nodes Red Hat OpenShift Data Foundation 4.13 Instructions for how to safely replace a node in an OpenShift Data Foundation cluster. Red Hat Storage Documentation Team Abstract This document explains how to safely replace a node in a Red Hat OpenShift Data Foundation cluster.
Chapter 3. Installing a user-provisioned bare metal cluster with network customizations In OpenShift Container Platform 4.17, you can install a cluster on bare metal infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. When you customize OpenShift Container Platform networking, you must set most of the network configuration parameters during installation. You can modify only kubeProxy network configuration parameters in a running cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. Additional resources See Installing a user-provisioned bare metal cluster on a restricted network for more information about performing a restricted network installation on bare metal infrastructure that you provision. 3.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 3.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 3.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only.
This provides smaller, more resource-efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, for the compute machines you can choose between Red Hat Enterprise Linux CoreOS (RHCOS) and Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 3.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. For example, a machine with two sockets, eight cores per socket, and SMT enabled with two threads per core provides (2 x 8) x 2 = 32 CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 3.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
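For example, one common pattern, shown here only as a sketch and not as the sole supported method, is to list pending CSRs with the OpenShift CLI and approve each one after confirming that it was issued by a node you provisioned:
$ oc get csr
$ oc adm certificate approve <csr_name>
To approve all CSRs that do not yet have a status, for example while newly provisioned nodes are joining the cluster, you can combine a go-template filter with xargs:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve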
Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 3.3.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 3.3.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.3.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 3.3. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can synchronize the clock with the NTP servers. Additional resources Configuring chrony time service 3.3.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer.
These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 3.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 3.1. Sample DNS zone database $TTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API.
The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 3.2. Sample DNS zone database for reverse records $TTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Additional resources Validating DNS resolution for user-provisioned infrastructure 3.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer.
Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, is a well-tested configuration. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. Connection-based or session-based persistence is recommended, based on the options available and the types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.4. Creating a manifest object that includes a customized br-ex bridge As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal platform, you can create a MachineConfig object that includes an NMState configuration file. The NMState configuration file creates a customized br-ex bridge network configuration on each node in your cluster. 
Consider the following use cases for creating a manifest object that includes a customized br-ex bridge: You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge. You want to deploy the bridge on a different interface than the interface available on a host or server IP address. You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and to facilitate data forwarding between the interfaces. Note If you require an environment with a single network interface controller (NIC) and default network settings, use the configure-ovs.sh shell script. After you install Red Hat Enterprise Linux CoreOS (RHCOS) and the system reboots, the Machine Config Operator injects Ignition configuration files into each node in your cluster, so that each node receives the br-ex bridge network configuration. To prevent configuration conflicts, the configure-ovs.sh shell script receives a signal to not configure the br-ex bridge. Prerequisites Optional: You have installed the nmstate API so that you can validate the NMState configuration. Procedure Create an NMState configuration file for your customized br-ex bridge network; you embed a base64-encoded copy of this file in a later step: Example of an NMState configuration for a customized br-ex bridge network interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false # ... 1 Name of the interface. 2 The type of the interface; this example uses ethernet. 3 The requested state for the interface after creation. 4 Disables IPv4 and IPv6 in this example. 5 The node NIC to which the bridge attaches. Use the cat and base64 commands to encode the contents of the NMState configuration: $ cat <nmstate_configuration>.yaml | base64 1 1 Replace <nmstate_configuration> with the name of your NMState resource YAML file. Create a MachineConfig manifest file and define a customized br-ex bridge network configuration analogous to the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml # ... 1 For each node in your cluster, specify the hostname path to your node and the base-64 encoded Ignition configuration file data for the machine type. If you have a single global configuration specified in an /etc/nmstate/openshift/cluster.yml configuration file that you want to apply to all nodes in your cluster, you do not need to specify the hostname path for each node. The worker role is the default role for nodes in your cluster. The .yaml extension does not work when specifying the hostname path for each node or all nodes in the MachineConfig manifest file. 2 The name of the policy. 3 Writes the encoded base64 information to the specified path.
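As a quick sanity check, which is not part of the documented procedure, you can round-trip the encoded payload before you embed it in the MachineConfig manifest. The -w 0 flag of GNU coreutils base64 suppresses line wrapping so that the output can be pasted as a single data URL value; the diff prints nothing if the string decodes back to exactly the configuration you wrote:
$ base64 -w 0 <nmstate_configuration>.yaml > encoded.txt
$ base64 -d encoded.txt | diff - <nmstate_configuration>.yaml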
3.4.1. Scaling each machine set to compute nodes To apply a customized br-ex bridge configuration to all compute nodes in your OpenShift Container Platform cluster, you must edit your MachineConfig custom resource (CR) and modify its roles. Additionally, you must create a BareMetalHost CR that defines information for your bare-metal machine, such as hostname, credentials, and so on. After you configure these resources, you must scale machine sets, so that the machine sets can apply the resource configuration to each compute node and reboot the nodes. Prerequisites You created a MachineConfig manifest object that includes a customized br-ex bridge configuration. Procedure Edit the MachineConfig CR by entering the following command: $ oc edit mc <machineconfig_custom_resource_name> Add each compute node configuration to the CR, so that the CR can manage roles for each defined compute node in your cluster. Create a Secret object named extraworker-secret that has a minimal static IP configuration. Apply the extraworker-secret secret to each node in your cluster by entering the following command. This step provides each compute node access to the Ignition config file. $ oc apply -f ./extraworker-secret.yaml Create a BareMetalHost resource and specify the network secret in the preprovisioningNetworkDataName parameter: Example BareMetalHost resource with an attached network secret apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: # ... preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret # ... To manage the BareMetalHost object within the openshift-machine-api namespace of your cluster, change to the namespace by entering the following command: $ oc project openshift-machine-api Get the machine sets: $ oc get machinesets Scale each machine set by entering the following command. You must run this command for each machine set. $ oc scale machineset <machineset_name> --replicas=<n> 1 1 Where <machineset_name> is the name of the machine set and <n> is the number of compute nodes. 3.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP.
Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 3.6.
Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: $ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: $ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: $ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard.
No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. Additional resources Verifying node health 3.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.9. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH .
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.10. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for bare metal 3.10.1. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 0 4
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3 7
metadata:
  name: test 8
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 9
    hostPrefix: 23 10
  networkType: OVNKubernetes 11
  serviceNetwork: 12
  - 172.30.0.0/16
platform:
  none: {} 13
fips: false 14
pullSecret: '{"auths": ...}' 15
sshKey: 'ssh-ed25519 AAAA...' 16
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks.
If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. 3.11. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 3.12. 
Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig:
        mode: Full
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 3.13. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.13.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster.
For example:
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23
spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example:
spec:
  serviceNetwork:
  - 172.30.0.0/14
You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway.
Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. 
For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 3.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 3.18. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPsec enabled
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig:
      mode: Full
3.14. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates.
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
3.15. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off.
Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 3.15.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
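If you only need a short-lived server for a lab test, one option is Python's built-in HTTP server, run from the directory that contains the Ignition files. This is a sketch for test environments only; a production installation should use a properly managed HTTP server, and the port you choose must match the URLs that you pass to coreos-installer :
USD cd <installation_directory>
USD python3 -m http.server 8080
With this setup, the bootstrap Ignition config would be reachable at a URL such as http://<host_ip>:8080/bootstrap.ign .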
Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output
"location": "<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso",
"location": "<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso",
"location": "<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso",
"location": "<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live.x86_64.iso",
Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine.
Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.15.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available.
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64"
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-kernel-x86_64"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img"
"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img"
Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server.
The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE built with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran.
Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.15.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 3.15.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system.
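As an illustration of the networking step in the preceding procedure, a static IPv4 address might be set from the live shell with nmcli before you run coreos-installer . This is a sketch only; the connection name, interface, addresses, and DNS server are placeholders for your environment:
USD sudo nmcli connection modify 'Wired connection 1' ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
USD sudo nmcli connection up 'Wired connection 1'
Because the resulting keyfile is stored under /etc/NetworkManager/system-connections , the --copy-network option carries it over to the installed system.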
Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 3.15.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 3.15.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system.
The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:
variant: openshift
version: 4.17.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/disk/by-id/<device_name> 1
    partitions:
    - label: var
      start_mib: <partition_start_offset> 2
      size_mib: <partition_size> 3
      number: 5
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [defaults, prjquota] 4
    with_mount_unit: true
1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
The files in the <installation_directory>/manifests and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. Next steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 3.15.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number.
Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 3.15.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 3.15.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.17 boot image use a default console that is meant to accommodate most virtualized and bare metal setups. Different cloud and virtualization platforms may use different default settings depending on the chosen architecture.
Bare metal installations use the kernel default settings, which typically means that the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 3.15.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure.
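After the reboot, you can verify that the bootloader passed the expected console settings to the kernel. A quick check, assuming shell access to the installed node: USD grep -o 'console=[^ ]*' /proc/cmdline If the command above was used, the output lists both console=tty0 and console=ttyS0,<options> .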
3.15.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system. For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations. The customize subcommand is a general purpose tool that can embed other types of customizations as well. The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 3.15.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 3.15.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . Your customizations are applied and affect every subsequent boot of the ISO image.
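The preceding note can be put into practice directly. For example, to make the live ISO environment itself print to the serial console, and not only the installed system, a minimal invocation might be: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --live-karg-append console=ttyS0,115200n8 The console value is a placeholder; adjust the device and baud rate for your hardware.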
Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 3.15.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 3.15.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 3.15.3.7.4. 
Customizing a live install ISO image for an iSCSI boot device You can set the iSCSI target and initiator values for automatic mounting, booting, and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with the following information: USD coreos-installer iso customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \ 5 --dest-karg-append netroot=<target_iqn> \ 6 -o custom.iso rhcos-<version>-live.x86_64.iso 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target and any commands enabling multipathing. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logoutall=all . 3 The location of the destination system. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN). 4 The Ignition configuration for the destination system. 5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target. 6 The iSCSI target, or server, name in IQN format. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 3.15.3.7.5. Customizing a live install ISO image for an iSCSI boot device with iBFT You can set the iSCSI target and initiator values for automatic mounting, booting, and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Optional: you have multipathed your iSCSI target. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with the following information: USD coreos-installer iso customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/mapper/mpatha \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.firmware=1 \ 5 --dest-karg-append rd.multipath=default \ 6 -o custom.iso rhcos-<version>-live.x86_64.iso 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target and any commands enabling multipathing. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logoutall=all . 3 The path to the device. If you are using multipath, use the multipath device, /dev/mapper/mpatha . If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in /dev/disk/by-id . 4 The Ignition configuration for the destination system. 5 The iSCSI parameter is read from the BIOS firmware. 6 Optional: include this parameter if you are enabling multipathing. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 3.15.3.8.
Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 3.15.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 3.15.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. 
You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 3.15.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 3.15.3.8.4. 
Customizing a live install PXE environment for an iSCSI boot device You can set the iSCSI target and initiator values for automatic mounting, booting, and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file with the following information: USD coreos-installer pxe customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \ 5 --dest-karg-append netroot=<target_iqn> \ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target and any commands enabling multipathing. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logoutall=all . 3 The location of the destination system. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN). 4 The Ignition configuration for the destination system. 5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target. 6 The iSCSI target, or server, name in IQN format. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 3.15.3.8.5. Customizing a live install PXE environment for an iSCSI boot device with iBFT You can set the iSCSI target and initiator values for automatic mounting, booting, and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Optional: you have multipathed your iSCSI target. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file with the following information: USD coreos-installer pxe customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/mapper/mpatha \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.firmware=1 \ 5 --dest-karg-append rd.multipath=default \ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logoutall=all . 3 The path to the device. If you are using multipath, use the multipath device, /dev/mapper/mpatha . If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in /dev/disk/by-id . 4 The Ignition configuration for the destination system. 5 The iSCSI parameter is read from the BIOS firmware. 6 Optional: include this parameter if you are enabling multipathing. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page .
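The customize subcommand does not dictate the contents of the pre-install and post-install scripts. As a minimal sketch based on the manual iSCSI procedure later in this section, and assuming a single portal at <IP_address> , mount-iscsi.sh might contain: #!/bin/bash
# Discover the iSCSI target and log in before the installation runs.
iscsiadm --mode discovery --type sendtargets --portal <IP_address> --login and unmount-iscsi.sh would contain the matching logout: #!/bin/bash
# Log out of all iSCSI sessions after the installation finishes.
iscsiadm --mode node --logoutall=all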
3.15.3.9. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 3.15.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples show the networking options for ISO installations. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname, refer to the following example, which sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required.
If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . 
<name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command ( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 3.15.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 3.19. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the specified device, applying the given options. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>... Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device.
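For example, a single coreos-installer install invocation that combines several of these options might look like the following; the Ignition URL, console value, and device name are placeholders for your environment: USD coreos-installer install --ignition-url http://10.0.2.2:8080/worker.ign \ --console ttyS0,115200n8 \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>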
coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file.
Note This option is required for PXE environments. -h , --help Print help information. 3.15.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 3.20. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system.
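Putting these options together, a complete APPEND line for an unattended PXE installation might look like the following; the device name, Ignition URL, and preserved partition index are placeholders for your environment: coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://10.0.2.2:8080/worker.ign coreos.inst.save_partindex=6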
3.15.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting from PXE or the ISO, you can instead enable multipath by adding rd.multipath=default on the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0x194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 3.15.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time.
Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.17.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-containers.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. 3.15.5. Installing RHCOS manually on an iSCSI boot device You can manually install RHCOS on an iSCSI target. Prerequisites You are in the RHCOS live environment. You have an iSCSI target that you want to install RHCOS on. Procedure Mount the iSCSI target from the live environment by running the following command: USD iscsiadm \ --mode discovery \ --type sendtargets --portal <IP_address> \ 1 --login 1 The IP address of the target portal. Install RHCOS onto the iSCSI target by running the following command and using the necessary kernel arguments, for example: USD coreos-installer install \ /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \ 2 --append-karg netroot=<target_iqn> \ 3 --console ttyS0,115200n8 --ignition-file <path_to_file> 1 The location you are installing to. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN). 2 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target. 3 The iSCSI target, or server, name in IQN format. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page .
Unmount the iSCSI disk with the following command: USD iscsiadm --mode node --logoutall=all This procedure can also be performed using the coreos-installer iso customize or coreos-installer pxe customize subcommands. 3.15.6. Installing RHCOS on an iSCSI boot device using iBFT On a completely diskless machine, the iSCSI target and initiator values can be passed through iBFT. iSCSI multipathing is also supported. Prerequisites You are in the RHCOS live environment. You have an iSCSI target you want to install RHCOS on. Optional: you have multipathed your iSCSI target. Procedure Mount the iSCSI target from the live environment by running the following command: USD iscsiadm \ --mode discovery \ --type sendtargets --portal <IP_address> \ 1 --login 1 The IP address of the target portal. Optional: enable multipathing and start the daemon with the following command: USD mpathconf --enable && systemctl start multipathd.service Install RHCOS onto the iSCSI target by running the following command and using the necessary kernel arguments, for example: USD coreos-installer install \ /dev/mapper/mpatha \ 1 --append-karg rd.iscsi.firmware=1 \ 2 --append-karg rd.multipath=default \ 3 --console ttyS0 \ --ignition-file <path_to_file> 1 The path of a single multipathed device. If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in /dev/disk/by-id . 2 The iSCSI parameter is read from the BIOS firmware. 3 Optional: include this parameter if you are enabling multipathing. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . Unmount the iSCSI disk: USD iscsiadm --mode node --logoutall=all This procedure can also be performed using the coreos-installer iso customize or coreos-installer pxe customize subcommands. 3.16. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.
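If the command times out before bootstrapping completes, you can collect diagnostic data from the bootstrap and control plane hosts before tearing anything down. A typical invocation, assuming SSH access to the hosts is configured: USD ./openshift-install gather bootstrap --dir <installation_directory>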
After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 3.17. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.18. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.19. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 3.19.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.19.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.19.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and run with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 3.20. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
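Before starting the procedure, you can also sanity-check any registry storage that you configured in the previous section. The following check is a sketch that assumes the image-registry-storage claim name used in the earlier PVC example:

oc get pvc image-registry-storage -n openshift-image-registry

The claim should report a Bound status once a matching persistent volume exists.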
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods.
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 3.21. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 3.22. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false",
"cat <nmstate_configuration>.yaml | base64 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml",
"oc edit mc <machineconfig_custom_resource_name>",
"oc apply -f ./extraworker-secret.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret",
"oc project openshift-machine-api",
"oc get machinesets",
"oc scale machineset <machineset_name> --replicas=<n> 1",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.17.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"coreos-installer install /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \\ 2 --append.karg netroot=<target_iqn> \\ 3 --console ttyS0,115200n8 --ignition-file <path_to_file>",
"iscsiadm --mode node --logoutall=all",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.iscsi.firmware=1 \\ 2 --append-karg rd.multipath=default \\ 3 --console ttyS0 --ignition-file <path_to_file>",
"iscsiadm --mode node --logout=all",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_bare_metal/installing-bare-metal-network-customizations |
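As noted in the certificate signing request section above, user-provisioned infrastructure needs some mechanism that approves kubelet serving CSRs automatically. The following shell loop is a minimal sketch of such a mechanism, not a production-ready approver: it matches only on the system:node: requestor prefix, performs no further node identity validation, and the 60-second interval is an arbitrary choice.

while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' \
    | grep ' system:node:' \
    | cut -d ' ' -f 1 \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done

A real implementation should also confirm the identity of the requesting node before approving, as the note above requires.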
6.14. Miscellaneous Cluster Configuration | 6.14. Miscellaneous Cluster Configuration This section describes using the ccs command to configure the following: Section 6.14.1, "Cluster Configuration Version" Section 6.14.2, "Multicast Configuration" Section 6.14.3, "Configuring a Two-Node Cluster" Section 6.14.4, "Logging" Section 6.14.5, "Configuring Redundant Ring Protocol" You can also use the ccs command to set advanced cluster configuration parameters, including totem options, dlm options, rm options, and cman options. For information on setting these parameters, see the ccs (8) man page and the annotated cluster configuration file schema at /usr/share/doc/cman-X.Y.ZZ/cluster_conf.html . To view a list of the miscellaneous cluster attributes that have been configured for a cluster, execute the following command: 6.14.1. Cluster Configuration Version A cluster configuration file includes a cluster configuration version value. The configuration version value is set to 1 by default when you create a cluster configuration file, and it is automatically incremented each time you modify your cluster configuration. However, if you need to set it to another value, you can specify it with the following command: You can get the current configuration version value on an existing cluster configuration file with the following command: To increment the current configuration version value by 1 in the cluster configuration file on every node in the cluster, execute the following command: | [
"ccs -h host --lsmisc",
"ccs -h host --setversion n",
"ccs -h host --getversion",
"ccs -h host --incversion"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-general-prop-ccs-CA |
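As a worked example of the version options described above, the following sketch reads the current configuration version, increments it on every node in the cluster, and confirms the result; the host name node1.example.com is a placeholder:

ccs -h node1.example.com --getversion
ccs -h node1.example.com --incversion
ccs -h node1.example.com --getversion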
20.36. Displaying Per-guest Virtual Machine Information | 20.36. Displaying Per-guest Virtual Machine Information 20.36.1. Displaying the Guest Virtual Machines To display a list of active guest virtual machines and their current states with virsh : Other options available include: --all - Lists all guest virtual machines. For example: Note If no results are displayed when running virsh list --all , it is possible that you did not create the virtual machine as the root user. The virsh list --all command recognizes the following states: running - The running state refers to guest virtual machines that are currently active on a CPU. idle - The idle state indicates that the guest virtual machine is idle, and may not be running or able to run. This can occur when the guest virtual machine is waiting on I/O (a traditional wait state) or has gone to sleep because there was nothing else for it to do. paused - When a guest virtual machine is paused, it consumes memory and other resources, but it is not eligible for scheduling CPU resources from the hypervisor. The paused state occurs after using the pause button in virt-manager or the virsh suspend command. in shutdown - The in shutdown state is for guest virtual machines in the process of shutting down. The guest virtual machine is sent a shutdown signal and should be in the process of stopping its operations gracefully. This may not work with all guest virtual machine operating systems; some operating systems do not respond to these signals. shut off - The shut off state indicates that the guest virtual machine is not running. This can be caused when a guest virtual machine completely shuts down or has not been started. crashed - The crashed state indicates that the guest virtual machine has crashed and can only occur if the guest virtual machine has been configured not to restart on crash. pmsuspended - The guest has been suspended by guest power management. --inactive - Lists guest virtual machines that have been defined but are not currently active. This includes machines that are shut off and crashed . --managed-save - Guests that have managed save state enabled will be listed as saved . Note that to filter guests with this option, you also need to use the --all or --inactive options. --name - The command lists the names of the guests instead of the default table format. This option is mutually exclusive with the --uuid option, which only prints a list of guest UUIDs, and with the --table option, which determines that the table style output should be used. --title - Also lists the guest title field, which typically contains a short description of the guest. This option must be used with the default ( --table ) output format. For example: --persistent - Only persistent guests are included in a list. Use the --transient argument to list transient guests. --with-managed-save - Lists guests that have been configured with a managed save. To list the guests without one, use the --without-managed-save option. --state-running - Lists only guests that are running. Similarly, use --state-paused for paused guests, --state-shutoff for guests that are turned off, and --state-other lists all states as a fallback. --autostart - Only auto-starting guests are listed. To list guests with this feature disabled, use the argument --no-autostart . --with-snapshot - Lists the guests whose snapshot images can be listed. To filter for guests without a snapshot, use the --without-snapshot option. 20.36.2.
Displaying Virtual CPU Information To display virtual CPU information from a guest virtual machine with virsh : An example of virsh vcpuinfo output: 20.36.3. Pinning vCPU to a Host Physical Machine's CPU The virsh vcpupin command assigns a virtual CPU to a physical one. The vcpupin command can take the following arguments: --vcpu requires the vcpu number [--cpulist] string lists the host physical machine's CPU number(s) to set, or omit option to query --config affects boot --live affects the running guest virtual machine --current affects the current guest virtual machine state 20.36.4. Displaying Information about the Virtual CPU Counts of a Given Domain The virsh vcpucount command requires a domain name or a domain ID. The vcpucount can take the following arguments: --maximum get maximum cap on vcpus --active get number of currently active vcpus --live get value from running guest virtual machine --config get value to be used on boot --current get value according to current guest virtual machine state --guest count that is returned is from the perspective of the guest 20.36.5. Configuring Virtual CPU Affinity To configure the affinity of virtual CPUs with physical CPUs: The domain-id parameter is the guest virtual machine's ID number or name. The vcpu parameter denotes the number of virtualized CPUs allocated to the guest virtual machine. The vcpu parameter must be provided. The cpulist parameter is a list of physical CPU identifier numbers separated by commas. The cpulist parameter determines which physical CPUs the VCPUs can run on. Additional parameters such as --config affect the boot, whereas --live affects the running guest virtual machine and --current affects the current guest virtual machine state. 20.36.6. Configuring Virtual CPU Count Use this command to change the number of virtual CPUs active in a guest virtual machine. By default, this command works on active guest virtual machines. To change the inactive settings that will be used the next time a guest virtual machine is started, use the --config flag. To modify the number of CPUs assigned to a guest virtual machine with virsh : # virsh setvcpus {domain-name, domain-id or domain-uuid} count [[--config] [--live] | [--current]] [ --maximum ] [ --guest ] For example: sets the number of vCPUs on guestVM1 to two, and this action is performed while guestVM1 is running. Important Hot unplugging vCPUs is not supported on Red Hat Enterprise Linux 7. The count value may be limited by host, hypervisor, or a limit coming from the original description of the guest virtual machine. If the --config flag is specified, the change is made to the stored XML configuration for the guest virtual machine, and will only take effect when the guest is started. If --live is specified, the guest virtual machine must be active, and the change takes place immediately. This option will allow hot plugging of a vCPU. Both the --config and --live flags may be specified together if supported by the hypervisor. If --current is specified, the flag affects the current guest virtual machine state. When no flags are specified, the --live flag is assumed. The command will fail if the guest virtual machine is not active. In addition, if no flags are specified, it is up to the hypervisor whether the --config flag is also assumed. This determines whether the XML configuration is adjusted to make the change persistent. The --maximum flag controls the maximum number of virtual CPUs that can be hot-plugged the next time the guest virtual machine is booted.
Therefore, it can only be used with the --config flag, not with the --live flag. Note that count cannot exceed the number of CPUs assigned to the guest virtual machine. If --guest is specified, the flag modifies the CPU state in the current guest virtual machine. 20.36.7. Configuring Memory Allocation To modify a guest virtual machine's memory allocation with virsh : For example: You must specify the count in kilobytes. The new count value cannot exceed the amount you specified for the guest virtual machine. Values lower than 64 MB are unlikely to work with most guest virtual machine operating systems. A higher maximum memory value does not affect active guest virtual machines. If the new value is lower than the available memory, it will shrink, possibly causing the guest virtual machine to crash. This command has the following options: domain - specified by a domain name, id, or uuid size - Determines the new memory size, as a scaled integer. The default unit is KiB, but a different one can be specified: Valid memory units include: b or bytes for bytes KB for kilobytes (10^3 or blocks of 1,000 bytes) k or KiB for kibibytes (2^10 or blocks of 1024 bytes) MB for megabytes (10^6 or blocks of 1,000,000 bytes) M or MiB for mebibytes (2^20 or blocks of 1,048,576 bytes) GB for gigabytes (10^9 or blocks of 1,000,000,000 bytes) G or GiB for gibibytes (2^30 or blocks of 1,073,741,824 bytes) TB for terabytes (10^12 or blocks of 1,000,000,000,000 bytes) T or TiB for tebibytes (2^40 or blocks of 1,099,511,627,776 bytes) Note that all values will be rounded up to the nearest kibibyte by libvirt, and may be further rounded to the granularity supported by the hypervisor. Some hypervisors also enforce a minimum, such as 4000KiB (or 4000 x 2^10 or 4,096,000 bytes). The units for this value are determined by the optional attribute memory unit , which defaults to kibibytes (KiB) as a unit of measure where the value given is multiplied by 2^10 or blocks of 1024 bytes. --config - the command takes effect on the next boot --live - the command controls the memory of a running guest virtual machine --current - the command controls the memory on the current guest virtual machine 20.36.8. Changing the Memory Allocation for the Domain The virsh setmaxmem domain size --config --live --current command allows the setting of the maximum memory allocation for a guest virtual machine as shown: The size that can be given for the maximum memory is a scaled integer that by default is expressed in kibibytes, unless a supported suffix is provided. The following arguments can be used with this command: --config - takes effect at the next boot --live - controls the memory of the running guest virtual machine, provided the hypervisor supports this action, as not all hypervisors allow live changes of the maximum memory limit. --current - controls the memory on the current guest virtual machine 20.36.9. Displaying Guest Virtual Machine Block Device Information Use the virsh domblkstat command to display block device statistics for a running guest virtual machine. Use the --human option to display the statistics in a more user-friendly way. 20.36.10. Displaying Guest Virtual Machine Network Device Information Use the virsh domifstat command to display network interface statistics for a running guest virtual machine. | [
"virsh list",
"virsh list --all Id Name State ---------------------------------- 0 Domain-0 running 1 Domain202 paused 2 Domain010 shut off 3 Domain9600 crashed",
"virsh list --title Id Name State Title ---------------------------------------------------------------------------- 0 Domain-0 running Mailserver1 2 rhelvm paused",
"virsh vcpuinfo {domain-id, domain-name or domain-uuid}",
"virsh vcpuinfo guest1 VCPU: 0 CPU: 2 State: running CPU time: 7152.4s CPU Affinity: yyyy VCPU: 1 CPU: 2 State: running CPU time: 10889.1s CPU Affinity: yyyy",
"virsh vcpupin guest1 VCPU: CPU Affinity ---------------------------------- 0: 0-3 1: 0-3",
"virsh vcpucount guest1 maximum config 2 maximum live 2 current config 2 current live 2",
"virsh vcpupin domain-id vcpu cpulist",
"virsh setvcpus guestVM1 2 --live",
"virsh setmem {domain-id or domain-name} count",
"virsh setmem vr-rhel6u1-x86_64-kvm --kilobytes 1025000",
"virsh setmaxmem guest1 1024 --current",
"virsh domblkstat GuestName block-device",
"virsh domifstat GuestName interface-device"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-displaying_per_guest_virtual_machine_information |
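As a combined example of the virsh commands described above, the following sketch pins vCPU 0 of a guest to host CPUs 0 and 1, persistently sets the guest to two vCPUs, and sets its memory to 1 GiB (1048576 KiB); the guest name guest1 and all values are placeholders:

virsh vcpupin guest1 0 0-1
virsh setvcpus guest1 2 --config
virsh setmem guest1 1048576 --config

Because the --config flag is used on the last two commands, those changes take effect the next time guest1 boots.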
14.8.4. Waking Up a Domain from a pmsuspend State | 14.8.4. Waking Up a Domain from a pmsuspend State This command will inject a wake-up alert to a guest that is in a pmsuspend state, rather than waiting for the duration time set to expire. This operation will not fail if the domain is running. This command requires the name of the domain, for example, rhel6 , as shown. | [
"dompmwakeup rhel6"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-starting_suspending_resuming_saving_and_restoring_a_guest_virtual_machine-waking_up_a_domain_from_pmsuspend_state |
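A complete suspend-and-wake cycle using this command might look like the following sketch; rhel6 is the example domain from this section, and the guest must support the S3 sleep state for the suspend to succeed:

virsh dompmsuspend rhel6 mem
virsh domstate rhel6
virsh dompmwakeup rhel6
virsh domstate rhel6

The first domstate query should report pmsuspended and the second should report running.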
6.5. Cloning a Virtual Machine | 6.5. Cloning a Virtual Machine You can clone virtual machines without having to create a template or a snapshot first. Procedure Click Compute Virtual Machines and select the virtual machine to clone. Click More Actions , then click Clone VM . Enter a Clone Name for the new virtual machine. Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/cloning_a_virtual_machine
Chapter 20. Booting (IPL) the Installer | Chapter 20. Booting (IPL) the Installer The steps to perform the initial boot (IPL) of the installer depend on the environment (either z/VM or LPAR) in which Red Hat Enterprise Linux will run. For more information on booting, see the Booting Linux chapter in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 . 20.1. Installing Under z/VM When installing under z/VM, you can boot from: the z/VM virtual reader a DASD or an FCP-attached SCSI device prepared with the zipl boot loader an FCP-attached SCSI DVD drive Log on to the z/VM guest virtual machine chosen for the Linux installation. You can use x3270 or c3270 (from the x3270-text package in Red Hat Enterprise Linux) to log in to z/VM from other Linux systems. Alternatively, use the 3270 terminal emulator on the IBM System z Hardware Management Console (HMC). If you are working from a machine with a Windows operating system, Jolly Giant ( http://www.jollygiant.com/ ) offers an SSL-enabled 3270 emulator. A free native Windows port of c3270 called wc3270 also exists. Note If your 3270 connection is interrupted and you cannot log in again because the session is still active, you can replace the old session with a new one by entering the following command on the z/VM logon screen: Replace user with the name of the z/VM guest virtual machine. Depending on whether an external security manager, for example RACF, is used, the logon command might vary. If you are not already running CMS (the single-user operating system shipped with z/VM) in your guest, boot it now by entering the command: Be sure not to use CMS disks such as your A disk (often device number 0191) as installation targets. To find out which disks are in use by CMS, use the following query: You can use the following CP (z/VM Control Program, which is the z/VM hypervisor) query commands to find out about the device configuration of your z/VM guest virtual machine: Query the available main memory, which is called storage in System z terminology. Your guest should have at least 512 megabytes of main memory. Query available network devices of type: osa OSA (CHPID type OSD, real or virtual (VSWITCH or GuestLAN type QDIO), both in QDIO mode) hsi HiperSockets (CHPID type IQD, real or virtual (GuestLAN type Hipers)) lcs LCS (CHPID type OSE) For example, to query all of the network device types mentioned above: Query available DASDs. Only those that are flagged RW for read-write mode can be used as installation targets: Query available FCP channels: 20.1.1. Using the z/VM Reader Perform the following steps to boot from the z/VM reader: If necessary, add the device containing the z/VM TCP/IP tools to your CMS disk list. For example: Replace fm with any FILEMODE letter. Execute the command: Where host is the hostname or IP address of the FTP server that hosts the boot images ( kernel.img and initrd.img ). Log in and execute the following commands. Use the (repl option if you are overwriting existing kernel.img , initrd.img , generic.prm , or redhat.exec files: Optionally check whether the files were transferred correctly by using the CMS command filelist to show the received files and their format. It is important that kernel.img and initrd.img have a fixed record length format denoted by F in the Format column and a record length of 80 in the Lrecl column. For example: Press PF3 to quit filelist and return to the CMS prompt. Finally, execute the REXX script redhat.exec to boot (IPL) the installer: 20.1.2.
Using a Prepared DASD Boot from the prepared DASD and select the zipl boot menu entry referring to the Red Hat Enterprise Linux installer. Use a command of the following form: Replace DASD device number with the device number of the boot device, and boot_entry_number with the zipl configuration menu for this device. For example: 20.1.3. Using a Prepared FCP-attached SCSI Disk Perform the following steps to boot from a prepared FCP-attached SCSI disk: Configure the SCSI boot loader of z/VM to access the prepared SCSI disk in the FCP storage area network. Select the prepared zipl boot menu entry referring to the Red Hat Enterprise Linux installer. Use a command of the following form: Replace WWPN with the WWPN of the storage system and LUN with the LUN of the disk. The 16-digit hexadecimal numbers must be split into two pairs of eight digits each. For example: Optionally, confirm your settings with the command: IPL the FCP device connected with the storage system containing the disk with the command: For example: 20.1.4. Using an FCP-attached SCSI DVD Drive This requires a SCSI DVD drive attached to an FCP-to-SCSI bridge which is in turn connected to an FCP adapter in your System z. The FCP adapter must be configured and available under z/VM. Insert your Red Hat Enterprise Linux for System z DVD into the DVD drive. Configure the SCSI boot loader of z/VM to access the DVD drive in the FCP storage area network and specify 1 for the boot entry on the Red Hat Enterprise Linux for System z DVD. Use a command of the following form: Replace WWPN with the WWPN of the FCP-to-SCSI bridge and FCP_LUN with the LUN of the DVD drive. The 16-digit hexadecimal numbers must be split into two pairs of eight characters each. For example: Optionally, confirm your settings with the command: IPL on the FCP device connected with the FCP-to-SCSI bridge. For example: | [
"logon user here",
"#cp ipl cms",
"query disk",
"cp query virtual storage",
"cp query virtual osa",
"cp query virtual dasd",
"cp query virtual fcp",
"cp link tcpmaint 592 592 acc 592 fm",
"ftp host",
"cd /location/of/install-tree /images/ ascii get generic.prm (repl get redhat.exec (repl locsite fix 80 binary get kernel.img (repl get initrd.img (repl quit",
"VMUSER FILELIST A0 V 169 Trunc=169 Size=6 Line=1 Col=1 Alt=0 Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time REDHAT EXEC B1 V 22 1 1 4/15/10 9:30:40 GENERIC PRM B1 V 44 1 1 4/15/10 9:30:32 INITRD IMG B1 F 80 118545 2316 4/15/10 9:30:25 KERNEL IMG B1 F 80 74541 912 4/15/10 9:30:17",
"redhat",
"cp ipl DASD device number loadparm boot_entry_number",
"cp ipl eb1c loadparm 0",
"cp set loaddev portname WWPN lun LUN bootprog boot_entry_number",
"cp set loaddev portname 50050763 050b073d lun 40204011 00000000 bootprog 0",
"query loaddev",
"cp ipl FCP_device",
"cp ipl fc00",
"cp set loaddev portname WWPN lun FCP_LUN bootprog 1",
"cp set loaddev portname 20010060 eb1c0103 lun 00010000 00000000 bootprog 1",
"cp query loaddev",
"cp ipl FCP_device",
"cp ipl fc00"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-s390-steps-boot |
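A minimal connection sketch for the 3270 login step in the procedure above (the host name zvmhost.example.com is an assumption; c3270 and the x3270-text package are named in the text):

yum install x3270-text      # on a Red Hat Enterprise Linux workstation
c3270 zvmhost.example.com   # opens a 3270 session to the z/VM system

Once the z/VM logon screen appears, the logon and CP commands shown in the procedure can be entered directly.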
Chapter 11. Transactions | Chapter 11. Transactions 11.1. About Java Transaction API Red Hat JBoss Data Grid supports the configuration and use of, and participation in, Java Transaction API (JTA) compliant transactions. JBoss Data Grid does the following for each cache operation: First, it retrieves the transaction currently associated with the thread. If not already done, it registers an XAResource with the transaction manager to receive notifications when a transaction is committed or rolled back. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/chap-transactions
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these versions remain similar to Oracle JDK versions that Oracle designates as long-term support (LTS). A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.4/rn-openjdk-temurin-support-policy |
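A quick check of which Eclipse Temurin build is installed, for mapping a system to the support window above (a sketch; the version shown is illustrative):

java -version
# openjdk version "21.0.4" ...   <- the major version (21) is what the six-year support life cycle applies to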
Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview | Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview This document provides descriptions of the options and features that the Red Hat High Availability Add-On using Pacemaker supports. For a step-by-step basic configuration example, see Red Hat High Availability Add-On Administration. You can configure a Red Hat High Availability Add-On cluster with the pcs configuration interface or with the pcsd GUI interface. 1.1. New and Changed Features This section lists features of the Red Hat High Availability Add-On that are new since the initial release of Red Hat Enterprise Linux 7. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1 Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes. The pcs resource cleanup command can now reset the resource status and failcount for all resources, as documented in Section 6.11, "Cluster Resources Cleanup". You can specify a lifetime parameter for the pcs resource move command, as documented in Section 8.1, "Manually Moving Resources Around the Cluster". As of Red Hat Enterprise Linux 7.1, you can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). For information on ACLs, see Section 4.5, "Setting User Permissions". Section 7.2.3, "Ordered Resource Sets" and Section 7.3, "Colocation of Resources" have been extensively updated and clarified. Section 6.1, "Resource Creation" documents the disabled parameter of the pcs resource create command, to indicate that the resource being created is not started automatically. Section 10.1, "Configuring Quorum Options" documents the new cluster quorum unblock feature, which prevents the cluster from waiting for all nodes when establishing quorum. Section 6.1, "Resource Creation" documents the before and after parameters of the pcs resource create command, which can be used to configure resource group ordering. As of the Red Hat Enterprise Linux 7.1 release, you can back up the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command. For information on this feature, see Section 3.8, "Backing Up and Restoring a Cluster Configuration". Small clarifications have been made throughout this document. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2 Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes. You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources, and other settings. For information on this command, see Section 8.1.2, "Moving a Resource to its Preferred Node". Section 13.2, "Event Notification with Monitoring Resources" has been modified and expanded to better document how to configure the ClusterMon resource to execute an external program to determine what to do with cluster notifications. When configuring fencing for redundant power supplies, you are now only required to define each device once and to specify that both devices are required to fence the node. For information on configuring fencing for redundant power supplies, see Section 5.10, "Configuring Fencing for Redundant Power Supplies".
This document now provides a procedure for adding a node to an existing cluster in Section 4.4.3, "Adding Cluster Nodes". The new resource-discovery location constraint option allows you to indicate whether Pacemaker should perform resource discovery on a node for a specified resource, as documented in Table 7.1, "Simple Location Constraint Options". Small clarifications and corrections have been made throughout this document. 1.1.3. New and Changed Features for Red Hat Enterprise Linux 7.3 Red Hat Enterprise Linux 7.3 includes the following documentation and feature updates and changes. Section 9.4, "The pacemaker_remote Service", has been wholly rewritten for this version of the document. You can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. Pacemaker alert agents are described in Section 13.1, "Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)". New quorum administration commands are supported with this release, which allow you to display the quorum status and to change the expected_votes parameter. These commands are described in Section 10.2, "Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later)". You can now modify general quorum options for your cluster with the pcs quorum update command, as described in Section 10.3, "Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later)". You can configure a separate quorum device which acts as a third-party arbitration device for the cluster. The primary use of this feature is to allow a cluster to sustain more node failures than standard quorum rules allow. This feature is provided for technical preview only. For information on quorum devices, see Section 10.5, "Quorum Devices". Red Hat Enterprise Linux release 7.3 provides the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. This feature is provided for technical preview only. For information on the Booth cluster ticket manager, see Chapter 14, Configuring Multi-Site Clusters with Pacemaker. When configuring a KVM guest node running the pacemaker_remote service, you can include guest nodes in groups, which allows you to group a storage device, file system, and VM. For information on configuring KVM guest nodes, see Section 9.4.5, "Configuration Overview: KVM Guest Node". Additionally, small clarifications and corrections have been made throughout this document. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 7.4 Red Hat Enterprise Linux 7.4 includes the following documentation and feature updates and changes. Red Hat Enterprise Linux release 7.4 provides full support for the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. For information on the Booth cluster ticket manager, see Chapter 14, Configuring Multi-Site Clusters with Pacemaker. Red Hat Enterprise Linux 7.4 provides full support for the ability to configure a separate quorum device which acts as a third-party arbitration device for the cluster. The primary use of this feature is to allow a cluster to sustain more node failures than standard quorum rules allow. For information on quorum devices, see Section 10.5, "Quorum Devices". You can now specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value.
For information on configuring fencing levels, see Section 5.9, "Configuring Fencing Levels" . Red Hat Enterprise Linux 7.4 supports the NodeUtilization resource agent, which can detect the system parameters of available CPU, host memory availability, and hypervisor memory availability and add these parameters into the CIB. For information on this resource agent, see Section 9.6.5, "The NodeUtilization Resource Agent (Red Hat Enterprise Linux 7.4 and later)" . For Red Hat Enterprise Linux 7.4, the cluster node add-guest and the cluster node remove-guest commands replace the cluster remote-node add and cluster remote-node remove commands. The pcs cluster node add-guest command sets up the authkey for guest nodes and the pcs cluster node add-remote command sets up the authkey for remote nodes. For updated guest and remote node configuration procedures, see Section 9.3, "Configuring a Virtual Domain as a Resource" . Red Hat Enterprise Linux 7.4 supports the systemd resource-agents-deps target. This allows you to configure the appropriate startup order for a cluster that includes resources with dependencies that are not themselves managed by the cluster, as described in Section 9.7, "Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later)" . The format for the command to create a resource as a master/slave clone has changed for this release. For information on creating a master/slave clone, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" . 1.1.5. New and Changed Features for Red Hat Enterprise Linux 7.5 Red Hat Enterprise Linux 7.5 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 7.5, you can use the pcs_snmp_agent daemon to query a Pacemaker cluster for data by means of SNMP. For information on querying a cluster with SNMP, see Section 9.8, "Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later)" . 1.1.6. New and Changed Features for Red Hat Enterprise Linux 7.8 Red Hat Enterprise Linux 7.8 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 7.8, you can configure Pacemaker so that when a node shuts down cleanly, the resources attached to the node will be locked to the node and unable to start elsewhere until they start again when the node that has shut down rejoins the cluster. This allows you to power down nodes during maintenance windows when service outages are acceptable without causing that node's resources to fail over to other nodes in the cluster. For information on configuring resources to remain stopped on clean node shutdown, see Section 9.9, " Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later) " . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-overview-HAAR |
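A brief sketch of some of the pcs commands introduced in the feature summaries above (the resource, node, and file names are assumptions, not values from this document):

pcs resource cleanup                                             # reset status and failcount for all resources (RHEL 7.1 and later)
pcs resource move my-resource node2.example.com lifetime=PT1H   # move a resource with a one-hour lifetime
pcs config backup ha-backup                                     # back up the cluster configuration to a tarball
pcs config restore ha-backup.tar.bz2                            # restore the configuration files on all nodes
pcs quorum update wait_for_all=1                                # modify general quorum options (RHEL 7.3 and later)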
Chapter 40. Managing subID ranges manually | Chapter 40. Managing subID ranges manually In a containerized environment, an IdM user sometimes needs to assign subID ranges manually. The following instructions describe how to manage the subID ranges. 40.1. Generating subID ranges using IdM CLI As an Identity Management (IdM) administrator, you can generate a subID range and assign it to IdM users. Prerequisites The IdM users exist. You have obtained an IdM admin ticket-granting ticket (TGT). See Using kinit to log in to IdM manually for more details. You have root access to the IdM host where you are executing the procedure. Procedure Optional: Check for existing subID ranges: If a subID range does not exist, select one of the following options: Generate and assign a subID range to an IdM user: Generate and assign subID ranges to all IdM users: Optional: Assign subID ranges to new IdM users by default: Verification Verify that the user has a subID range assigned: 40.2. Generating subID ranges using IdM WebUI interface As an Identity Management (IdM) administrator, you can generate a subID range and assign it to a user in the IdM WebUI interface. Prerequisites The IdM user exists. You have obtained an IdM admin Kerberos ticket (TGT). See Logging in to IdM in the Web UI: Using a Kerberos ticket for more details. You have root access to the IdM host where you are executing the procedure. Procedure In the IdM WebUI interface, expand the Subordinate IDs tab and choose the Subordinate IDs option. When the Subordinate IDs interface appears, click the Add button in the upper-right corner of the interface. The Add subid window appears. In the Add subid window, choose an owner, that is, the user to whom you want to assign a subID range. Click the Add button. Verification View the table under the Subordinate IDs tab. A new record shows in the table. The owner is the user to whom you assigned the subID range. 40.3. Viewing subID information about IdM users by using IdM CLI As an Identity Management (IdM) user, you can search for IdM user subID ranges and view the related information. Prerequisites You have configured a subID range on the IdM client. You have obtained an IdM user ticket-granting ticket (TGT). See Using kinit to log in to IdM manually for more details. Procedure To view the details about a subID range: If you know the unique ID hash of the Identity Management (IdM) user that is the owner of the range: If you know a specific subID from that range: 40.4. Listing subID ranges using the getsubids command As a system administrator, you can use the command line to list the subID ranges of Identity Management (IdM) or local users. Prerequisites The idmuser user exists in IdM. The shadow-utils-subid package is installed. You can edit the /etc/nsswitch.conf file. Procedure Open the /etc/nsswitch.conf file and configure the shadow-utils utility to use IdM subID ranges by setting the subid variable to the sss value: Note You can provide only one value for the subid field. Setting the subid field to the file value or no value instead of sss configures the shadow-utils utility to use the subID ranges from the /etc/subuid and /etc/subgid files. List the subID range for an IdM user: The first value, 2147483648, indicates the subID range start. The second value, 65536, indicates the size of the range. | [
"ipa subid-find",
"ipa subid-generate --owner=idmuser Added subordinate id \"359dfcef-6b76-4911-bd37-bb5b66b8c418\" Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Description: auto-assigned subid Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536",
"/usr/libexec/ipa/ipa-subids --all-users Found 2 user(s) without subordinate ids Processing user 'user4' (1/2) Processing user 'user5' (2/2) Updated 2 user(s) The ipa-subids command was successful",
"ipa config-mod --user-default-subid=True",
"ipa subid-find --owner=idmuser 1 subordinate id matched Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536 Number of entries returned 1",
"ipa subid-show 359dfcef-6b76-4911-bd37-bb5b66b8c418 Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536",
"ipa subid-match --subuid=2147483670 1 subordinate id matched Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: uid=idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536 Number of entries returned 1",
"[...] subid: sss",
"getsubids idmuser 0: idmuser 2147483648 65536"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/assembly_managing-subid-ranges-manually_managing-users-groups-hosts |
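The getsubids tool shown above can also report subGID ranges; a short sketch (the -g flag follows shadow-utils conventions and is an assumption here, not a value from this document):

getsubids idmuser       # subUID range: start 2147483648, size 65536
getsubids -g idmuser    # list the subGID range instead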
3.4. Logical Volume Backup | 3.4. Logical Volume Backup Metadata backups and archives are automatically created on every volume group and logical volume configuration change unless disabled in the lvm.conf file. By default, the metadata backup is stored in the /etc/lvm/backup directory and the metadata archives are stored in the /etc/lvm/archive directory. How long the metadata archives stored in the /etc/lvm/archive directory are kept and how many archive files are kept are determined by parameters you can set in the lvm.conf file. A daily system backup should include the contents of the /etc/lvm directory in the backup. Note that a metadata backup does not back up the user and system data contained in the logical volumes. You can manually back up the metadata to the /etc/lvm/backup directory with the vgcfgbackup command. You can restore metadata with the vgcfgrestore command. The vgcfgbackup and vgcfgrestore commands are described in Section 4.3.11, "Backing Up Volume Group Metadata". | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/backup
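A minimal sketch of the backup and restore commands named above (the volume group name vg00 and the archive file name are assumptions):

vgcfgbackup vg00                                                # write the current metadata for vg00 under /etc/lvm/backup
vgcfgrestore --list vg00                                        # list the archived metadata versions available for vg00
vgcfgrestore -f /etc/lvm/archive/vg00_00001-1234567890.vg vg00  # restore from a chosen archive file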
Migrating from version 3 to 4 | Migrating from version 3 to 4 OpenShift Container Platform 4.7 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migrating_from_version_3_to_4/index |
B.15. No Guest Virtual Machines are Present when libvirtd is Started | B.15. No Guest Virtual Machines are Present when libvirtd is Started Symptom The libvirt daemon is successfully started, but no guest virtual machines appear to be present. Investigation There are various possible causes of this problem. Performing these tests will help to determine the cause of this situation: Verify KVM kernel modules Verify that KVM kernel modules are inserted in the kernel: If you are using an AMD machine, verify the kvm_amd kernel modules are inserted in the kernel instead, using the similar command lsmod | grep kvm_amd in the root shell. If the modules are not present, insert them using the modprobe <modulename> command. Note Although it is uncommon, KVM virtualization support may be compiled into the kernel. In this case, modules are not needed. Verify virtualization extensions Verify that virtualization extensions are supported and enabled on the host: Enable virtualization extensions in your hardware's firmware configuration within the BIOS setup. Refer to your hardware documentation for further details on this. Verify client URI configuration Verify that the URI of the client is configured as desired: For example, this message shows the URI is connected to the VirtualBox hypervisor, not QEMU , and reveals a configuration error for a URI that is otherwise set to connect to a QEMU hypervisor. If the URI was correctly connecting to QEMU , the same message would appear instead as: This situation occurs when there are other hypervisors present, which libvirt may speak to by default. Solution After performing these tests, use the following command to view a list of guest virtual machines: | [
"virsh list --all Id Name State ---------------------------------------------------- #",
"lsmod | grep kvm kvm_intel 121346 0 kvm 328927 1 kvm_intel",
"egrep \"(vmx|svm)\" /proc/cpuinfo flags : fpu vme de pse tsc ... svm ... skinit wdt npt lbrv svm_lock nrip_save flags : fpu vme de pse tsc ... svm ... skinit wdt npt lbrv svm_lock nrip_save",
"virsh uri vbox:///system",
"virsh uri qemu:///system",
"virsh list --all"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/App_No_Guest_Machines |
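The investigation steps above can be collapsed into a quick shell check (all commands are taken from the procedure; qemu:///system is the expected URI):

lsmod | grep -E 'kvm(_intel|_amd)?'    # are the KVM modules loaded?
egrep -c '(vmx|svm)' /proc/cpuinfo     # non-zero if virtualization extensions are present
virsh uri                              # confirm the client URI points at QEMU
virsh -c qemu:///system list --all     # query the QEMU hypervisor explicitly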
Chapter 81. KafkaConnectSpec schema reference | Chapter 81. KafkaConnectSpec schema reference Used in: KafkaConnect Full list of KafkaConnectSpec schema properties Configures a Kafka Connect cluster. The config properties are one part of the overall configuration for the resource. Use the config properties to configure Kafka Connect options as keys. Example Kafka Connect configuration:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # ...

The values can be one of the following JSON types: String, Number, Boolean. Certain options have default values: group.id with default value connect-cluster, offset.storage.topic with default value connect-cluster-offsets, config.storage.topic with default value connect-cluster-configs, status.storage.topic with default value connect-cluster-status, key.converter with default value org.apache.kafka.connect.json.JsonConverter, value.converter with default value org.apache.kafka.connect.json.JsonConverter. These options are automatically configured in case they are not present in the KafkaConnect.spec.config properties. Exceptions You can specify and configure the options listed in the Apache Kafka documentation. However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address; Security (encryption, authentication, and authorization); Listener and REST interface configuration; Plugin path configuration. Properties with the following prefixes cannot be set: bootstrap.servers, consumer.interceptor.classes, listeners., plugin.path, producer.interceptor.classes, rest., sasl., security., ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Connect, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites. Important: The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Connect nodes. 81.1. Logging Kafka Connect has its own configurable loggers: connect.root.logger.level and log4j.logger.org.reflections. Further loggers are added depending on the Kafka Connect plugins running. Use a curl request to get a complete list of Kafka Connect loggers running from any Kafka broker pod:

curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/

Kafka Connect uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap.
If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services. The following examples show inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  logging:
    type: inline
    loggers:
      connect.root.logger.level: INFO
      log4j.logger.org.apache.kafka.connect.runtime.WorkerSourceTask: TRACE
      log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask: DEBUG
  # ...

Note: Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. External logging:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: customConfigMap
        key: connect-logging.log4j
  # ...

Any available loggers that are not configured have their level set to OFF. If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC): Garbage collector logging can also be enabled (or disabled) using the jvmOptions property. 81.2. KafkaConnectSpec schema properties The available properties, with their types and descriptions:

version (string): The Kafka Connect version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version.
replicas (integer): The number of pods in the Kafka Connect group. Defaults to 3.
image (string): The container image used for Kafka Connect pods. If no image name is explicitly specified, it is determined based on the spec.version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration.
bootstrapServers (string): Bootstrap servers to connect to. This should be given as a comma-separated list of <hostname>:<port> pairs.
tls (ClientTls): TLS configuration.
authentication (KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha256, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth): Authentication configuration for Kafka Connect.
config (map): The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).
resources (ResourceRequirements): The maximum limits for CPU and memory resources and the requested initial resources.
livenessProbe (Probe): Pod liveness checking.
readinessProbe (Probe): Pod readiness checking.
jvmOptions (JvmOptions): JVM Options for pods.
jmxOptions (KafkaJmxOptions): JMX Options.
logging (InlineLogging, ExternalLogging): Logging configuration for Kafka Connect.
clientRackInitImage (string): The image of the init container used for initializing the client.rack.
rack (Rack): Configuration of the node label which will be used as the client.rack consumer configuration.
metricsConfig (JmxPrometheusExporterMetrics): Metrics configuration.
tracing (JaegerTracing, OpenTelemetryTracing): The configuration of tracing in Kafka Connect.
template (KafkaConnectTemplate): Template for Kafka Connect and Kafka MirrorMaker 2 resources. The template allows users to specify how the Pods, Service, and other services are generated.
externalConfiguration (ExternalConfiguration): The externalConfiguration property has been deprecated. The external configuration is deprecated and will be removed in the future. Please use the template section instead to configure additional environment variables or volumes. Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors.
build (Build): Configures how the Connect container image should be built. Optional. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 #",
"curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: inline loggers: connect.root.logger.level: INFO log4j.logger.org.apache.kafka.connect.runtime.WorkerSourceTask: TRACE log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask: DEBUG #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: connect-logging.log4j #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkaconnectspec-reference |
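The /admin/loggers endpoint listed above can also change logger levels at runtime through the Kafka Connect REST API; a hedged sketch (the cluster name my-connect and the logger name are assumptions):

curl -s http://my-connect-connect-api:8083/admin/loggers/    # list loggers and their current levels
curl -s -X PUT -H 'Content-Type: application/json' -d '{"level":"TRACE"}' \
  http://my-connect-connect-api:8083/admin/loggers/org.apache.kafka.connect.runtime.WorkerSourceTask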
3.4. Increasing the Size of an XFS File System | 3.4. Increasing the Size of an XFS File System An XFS file system may be grown while mounted using the xfs_growfs command: The -D size option grows the file system to the specified size (expressed in file system blocks). Without the -D size option, xfs_growfs will grow the file system to the maximum size supported by the device. Before growing an XFS file system with -D size , ensure that the underlying block device is of an appropriate size to hold the file system later. Use the appropriate resizing methods for the affected block device. Note While XFS file systems can be grown while mounted, their size cannot be reduced at all. For more information about growing a file system, see man xfs_growfs . | [
"xfs_growfs /mount/point -D size"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/xfsgrow |
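A short sketch tying the resizing steps together (the volume, device, and mount point names are assumptions):

lvextend -L +10G /dev/vg00/lv_data   # first grow the underlying block device
xfs_growfs /data                     # then grow the file system to the maximum size the device supports
xfs_growfs /data -D 2621440          # or grow to an explicit size, expressed in file system blocks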
Chapter 4. Remote health monitoring with connected clusters | Chapter 4. Remote health monitoring with connected clusters 4.1. About remote health monitoring OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. The data that is provided to Red Hat enables the benefits outlined in this document. A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a connected cluster . Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the OpenShift Container Platform Telemeter Client. Lightweight attributes are sent from connected clusters to Red Hat to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce insights about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators on OpenShift Cluster Manager . More information is provided in this document about these two processes. Telemetry and Insights Operator benefits Telemetry and the Insights Operator enable the following benefits for end-users: Enhanced identification and resolution of issues . Events that might seem normal to an end-user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some issues can be more rapidly identified from this point of view and resolved without an end-user needing to open a support case or file a Jira issue . Advanced release management . OpenShift Container Platform offers the candidate , fast , and stable release channels, which enable you to choose an update strategy. The graduation of a release from fast to stable is dependent on the success rate of updates and on the events seen during upgrades. With the information provided by connected clusters, Red Hat can improve the quality of releases to stable channels and react more rapidly to issues found in the fast channels. Targeted prioritization of new features and functionality . The data collected provides insights about which areas of OpenShift Container Platform are used most. With this information, Red Hat can focus on developing the new features and functionality that have the greatest impact for our customers. A streamlined support experience . You can provide a cluster ID for a connected cluster when creating a support ticket on the Red Hat Customer Portal . This enables Red Hat to deliver a streamlined support experience that is specific to your cluster, by using the connected information. This document provides more information about that enhanced support experience. Predictive analytics . The insights displayed for your cluster on OpenShift Cluster Manager are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that OpenShift Container Platform clusters are exposed to. 4.1.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. 
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use. Additional resources See the OpenShift Container Platform update documentation for more information about updating or upgrading a cluster. 4.1.1.1. Information collected by Telemetry The following information is collected by Telemetry: 4.1.1.1.1. System information Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The unique random identifier that is generated during an installation Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services The OpenShift Container Platform framework components installed in a cluster and their condition and status Events for all namespaces listed as "related objects" for a degraded Operator Information about degraded software Information about the validity of certificates The name of the provider platform that OpenShift Container Platform is deployed on and the data center location 4.1.1.1.2. Sizing Information Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of etcd members and the number of objects stored in the etcd cluster Number of application builds by build strategy type 4.1.1.1.3. Usage information Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. Additional resources See Showing data collected by Telemetry for details about how to list the attributes that Telemetry gathers from Prometheus in OpenShift Container Platform. See the upstream cluster-monitoring-operator source code for a list of the attributes that Telemetry gathers from Prometheus. Telemetry is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2. About the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. 
Users of OpenShift Container Platform can display the report of each cluster in the Insights Advisor service on Red Hat Hybrid Cloud Console. If any issues have been identified, Insights provides further details and, if available, steps on how to solve a problem. The Insights Operator does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights data collection and controls. Red Hat uses all connected cluster information to: Identify potential cluster issues and provide a solution and preventive actions in the Insights Advisor service on Red Hat Hybrid Cloud Console Improve OpenShift Container Platform by providing aggregated and critical information to product and support teams Make OpenShift Container Platform more intuitive Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2.1. Information collected by the Insights Operator The following information is collected by the Insights Operator: General information about your cluster and its components to identify issues that are specific to your OpenShift Container Platform version and environment Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set Errors that occur in the cluster components Progress information of running updates, and the status of any component upgrades Details of the platform that OpenShift Container Platform is deployed on and the region that the cluster is located in Cluster workload information transformed into discreet Secure Hash Algorithm (SHA) values, which allows Red Hat to assess workloads for security and version vulnerabilities without disclosing sensitive details If an Operator reports an issue, information is collected about core OpenShift Container Platform pods in the openshift-* and kube-* projects. This includes state, resource, security context, volume information, and more Additional resources See Showing data collected by the Insights Operator for details about how to review the data that is collected by the Insights Operator. The Insights Operator source code is available for review and contribution. See the Insights Operator upstream project for a list of the items collected by the Insights Operator. 4.1.3. Understanding Telemetry and Insights Operator data flow The Telemeter Client collects selected time series data from the Prometheus API. The time series data is uploaded to api.openshift.com every four minutes and thirty seconds for processing. The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an archive. The archive is uploaded to OpenShift Cluster Manager every two hours for processing. The Insights Operator also downloads the latest Insights analysis from OpenShift Cluster Manager . This is used to populate the Insights status pop-up that is included in the Overview page in the OpenShift Container Platform web console. All of the communication with Red Hat occurs over encrypted channels by using Transport Layer Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest. Access to the systems that handle customer data is controlled through multi-factor authentication and strict authorization controls. 
Access is granted on a need-to-know basis and is limited to required operations. [Figure: Telemetry and Insights Operator data flow] Additional resources See About OpenShift Container Platform monitoring for more information about the OpenShift Container Platform monitoring stack. See Configuring your firewall for details about configuring a firewall and enabling endpoints for Telemetry and Insights. 4.1.4. Additional details about how remote health monitoring data is used The information collected to enable remote health monitoring is detailed in Information collected by Telemetry and Information collected by the Insights Operator. As further described in the preceding sections of this document, Red Hat collects data about your use of the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting, improving the offerings and user experience, responding to issues, and for billing purposes if applicable. Collection safeguards Red Hat employs technical and organizational measures designed to protect the telemetry and configuration data. Sharing Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers' use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. Third parties Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the Telemetry and configuration data. User control / enabling and disabling telemetry and configuration data collection You may disable OpenShift Container Platform Telemetry and the Insights Operator by following the instructions in Opting out of remote health reporting. 4.2. Showing data collected by remote health monitoring As an administrator, you can review the metrics collected by Telemetry and the Insights Operator. 4.2.1. Showing data collected by Telemetry You can view the cluster and components time series data captured by Telemetry. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have access to the cluster as a user with the cluster-admin role or the cluster-monitoring-view role. Procedure Log in to a cluster. Run the following command, which queries a cluster's Prometheus service and returns the full set of time series data captured by Telemetry: Note The following example contains some values that are specific to OpenShift Container Platform on AWS.
USD curl -G -k -H "Authorization: Bearer USD(oc whoami -t)" \ https://USD(oc get route prometheus-k8s-federate -n \ openshift-monitoring -o jsonpath="{.spec.host}")/federate \ --data-urlencode 'match[]={__name__=~"cluster:usage:.*"}' \ --data-urlencode 'match[]={__name__="count:up0"}' \ --data-urlencode 'match[]={__name__="count:up1"}' \ --data-urlencode 'match[]={__name__="cluster_version"}' \ --data-urlencode 'match[]={__name__="cluster_version_available_updates"}' \ --data-urlencode 'match[]={__name__="cluster_version_capability"}' \ --data-urlencode 'match[]={__name__="cluster_operator_up"}' \ --data-urlencode 'match[]={__name__="cluster_operator_conditions"}' \ --data-urlencode 'match[]={__name__="cluster_version_payload"}' \ --data-urlencode 'match[]={__name__="cluster_installer"}' \ --data-urlencode 'match[]={__name__="cluster_infrastructure_provider"}' \ --data-urlencode 'match[]={__name__="cluster_feature_set"}' \ --data-urlencode 'match[]={__name__="instance:etcd_object_counts:sum"}' \ --data-urlencode 'match[]={__name__="ALERTS",alertstate="firing"}' \ --data-urlencode 'match[]={__name__="code:apiserver_request_total:rate:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_memory_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="openshift:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="openshift:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="workload:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="workload:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:virt_platform_nodes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:node_instance_type_count:sum"}' \ --data-urlencode 'match[]={__name__="cnv:vmi_status_running:count"}' \ --data-urlencode 'match[]={__name__="cluster:vmi_request_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_sockets:sum"}' \ --data-urlencode 'match[]={__name__="subscription_sync_total"}' \ --data-urlencode 'match[]={__name__="olm_resolution_duration_seconds"}' \ --data-urlencode 'match[]={__name__="csv_succeeded"}' \ --data-urlencode 'match[]={__name__="csv_abnormal"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kubelet_volume_stats_used_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_used_raw_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_health_status"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_total_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_used_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_health_status"}' \ --data-urlencode 'match[]={__name__="job:ceph_osd_metadata:count"}' \ --data-urlencode 'match[]={__name__="job:kube_pv:count"}' \ --data-urlencode 'match[]={__name__="job:odf_system_pvs:count"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops_bytes:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_versions_running:count"}' \ 
--data-urlencode 'match[]={__name__="job:noobaa_total_unhealthy_buckets:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_bucket_count:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_total_object_count:sum"}' \ --data-urlencode 'match[]={__name__="odf_system_bucket_count", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="odf_system_objects_total", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="noobaa_accounts_num"}' \ --data-urlencode 'match[]={__name__="noobaa_total_usage"}' \ --data-urlencode 'match[]={__name__="console_url"}' \ --data-urlencode 'match[]={__name__="cluster:ovnkube_master_egress_routing_via_host:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_instances:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_enabled_instance_up:max"}' \ --data-urlencode 'match[]={__name__="cluster:ingress_controller_aws_nlb_active:sum"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:min"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:max"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:avg"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:median"}' \ --data-urlencode 'match[]={__name__="cluster:openshift_route_info:tls_termination:sum"}' \ --data-urlencode 'match[]={__name__="insightsclient_request_send_total"}' \ --data-urlencode 'match[]={__name__="cam_app_workload_migrations"}' \ --data-urlencode 'match[]={__name__="cluster:apiserver_current_inflight_requests:sum:max_over_time:2m"}' \ --data-urlencode 'match[]={__name__="cluster:alertmanager_integrations:max"}' \ --data-urlencode 'match[]={__name__="cluster:telemetry_selected_series:count"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_series:sum"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_samples_appended_total:sum"}' \ --data-urlencode 'match[]={__name__="monitoring:container_memory_working_set_bytes:sum"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_series_added:topk3_sum1h"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_samples_post_metric_relabeling:topk3"}' \ --data-urlencode 'match[]={__name__="monitoring:haproxy_server_http_responses_total:sum"}' \ --data-urlencode 'match[]={__name__="rhmi_status"}' \ --data-urlencode 'match[]={__name__="status:upgrading:version:rhoam_state:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_critical_alerts:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_warning_alerts:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_percentile:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_remaining_error_budget:max"}' \ --data-urlencode 'match[]={__name__="cluster_legacy_scheduler_policy"}' \ --data-urlencode 'match[]={__name__="cluster_master_schedulable"}' \ --data-urlencode 'match[]={__name__="che_workspace_status"}' \ --data-urlencode 'match[]={__name__="che_workspace_started_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_failure_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_sum"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_count"}' \ --data-urlencode 'match[]={__name__="cco_credentials_mode"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolume_plugin_type_counts:sum"}' \ 
--data-urlencode 'match[]={__name__="visual_web_terminal_sessions_total"}' \ --data-urlencode 'match[]={__name__="acm_managed_cluster_info"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_vcenter_info:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_esxi_version_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_node_hw_version_total:sum"}' \ --data-urlencode 'match[]={__name__="openshift:build_by_strategy:sum"}' \ --data-urlencode 'match[]={__name__="rhods_aggregate_availability"}' \ --data-urlencode 'match[]={__name__="rhods_total_users"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_storage_types"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_strategies"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_agent_strategies"}' \ --data-urlencode 'match[]={__name__="appsvcs:cores_by_product:sum"}' \ --data-urlencode 'match[]={__name__="nto_custom_profiles:count"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_configmap"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_secret"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_failures_total"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_requests_total"}' \ --data-urlencode 'match[]={__name__="cluster:velero_backup_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:velero_restore_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_storage_info"}' \ --data-urlencode 'match[]={__name__="eo_es_redundancy_policy_info"}' \ --data-urlencode 'match[]={__name__="eo_es_defined_delete_namespaces_total"}' \ --data-urlencode 'match[]={__name__="eo_es_misconfigured_memory_resources_info"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_data_nodes_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_created_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_deleted_total:sum"}' \ --data-urlencode 'match[]={__name__="pod:eo_es_shards_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_cluster_management_state_info"}' \ --data-urlencode 'match[]={__name__="imageregistry:imagestreamtags_count:sum"}' \ --data-urlencode 'match[]={__name__="imageregistry:operations_count:sum"}' \ --data-urlencode 'match[]={__name__="log_logging_info"}' \ --data-urlencode 'match[]={__name__="log_collector_error_count_total"}' \ --data-urlencode 'match[]={__name__="log_forwarder_pipeline_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_input_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_output_info"}' \ --data-urlencode 'match[]={__name__="cluster:log_collected_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:log_logged_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kata_monitor_running_shim_count:sum"}' \ --data-urlencode 'match[]={__name__="platform:hypershift_hostedclusters:max"}' \ --data-urlencode 
'match[]={__name__="platform:hypershift_nodepools:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_bucket_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_buckets_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_accounts:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_usage:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_system_health_status:max"}' \ --data-urlencode 'match[]={__name__="ocs_advanced_feature_usage"}' \ --data-urlencode 'match[]={__name__="os_image_url_override:sum"}' 4.2.2. Showing data collected by the Insights Operator You can review the data that is collected by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Find the name of the currently running pod for the Insights Operator: USD INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running) Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data The recent Insights Operator archives are now available in the insights-data directory. 4.3. Opting out of remote health reporting You may choose to opt out of reporting health and usage data for your cluster. To opt out of remote health reporting, you must: Modify the global cluster pull secret to disable remote health reporting. Update the cluster to use this modified pull secret. 4.3.1. Consequences of disabling remote health reporting In OpenShift Container Platform, customers can opt out of reporting usage information. However, connected clusters allow Red Hat to react more quickly to problems and better support our customers, as well as better understand how product upgrades impact clusters. Connected clusters also help to simplify the subscription and entitlement process and enable the OpenShift Cluster Manager service to provide an overview of your clusters and their subscription status. Red Hat strongly recommends leaving health and usage reporting enabled for pre-production and test clusters even if it is necessary to opt out for production clusters. This allows Red Hat to be a participant in qualifying OpenShift Container Platform in your environments and react more rapidly to product issues. Some of the consequences of opting out of having a connected cluster are: Red Hat will not be able to monitor the success of product upgrades or the health of your clusters without a support case being opened. Red Hat will not be able to use configuration data to better triage customer support cases and identify which configurations our customers find important. The OpenShift Cluster Manager will not show data about your clusters including health and usage information. Your subscription entitlement information must be manually entered via console.redhat.com without the benefit of automatic usage reporting. In restricted networks, Telemetry and Insights data can still be reported through appropriate configuration of your proxy. 4.3.2. 
4.3.2. Modifying the global cluster pull secret to disable remote health reporting You can modify your existing global cluster pull secret to disable remote health reporting. This disables both Telemetry and the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Download the global cluster pull secret to your local file system. $ oc extract secret/pull-secret -n openshift-config --to=. In a text editor, edit the .dockerconfigjson file that was downloaded. Remove the cloud.openshift.com JSON entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"<email_address>"} Save the file. You can now update your cluster to use this modified pull secret. 4.3.3. Registering your disconnected cluster Register your disconnected OpenShift Container Platform cluster on the Red Hat Hybrid Cloud Console so that your cluster is not impacted by the consequences listed in the section named "Consequences of disabling remote health reporting". Important By registering your disconnected cluster, you can continue to report your subscription usage to Red Hat. In turn, Red Hat can return accurate usage and capacity trends associated with your subscription, so that you can use the returned information to better organize subscription allocations across all of your resources. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . You can log in to the Red Hat Hybrid Cloud Console. Procedure Go to the Register disconnected cluster web page on the Red Hat Hybrid Cloud Console. Optional: To access the Register disconnected cluster web page from the home page of the Red Hat Hybrid Cloud Console, go to the Clusters navigation menu item and then select the Register cluster button. Enter your cluster's details in the provided fields on the Register disconnected cluster page. From the Subscription settings section of the page, select the subscription settings that apply to your Red Hat subscription offering. To register your disconnected cluster, select the Register cluster button. Additional resources Consequences of disabling remote health reporting How does the subscriptions service show my subscription data? (Getting Started with the Subscription Service) 4.3.4. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. This procedure is required when users use a separate registry to store images than the registry used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: $ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > <pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: $ oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file, as shown in the sketch that follows.
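For orientation, a manually merged .dockerconfigjson might look like the following sketch; registry.example.com and every credential value are placeholders, not values from your cluster:

{
  "auths": {
    "cloud.openshift.com": {
      "auth": "<hash>",
      "email": "<email_address>"
    },
    "registry.example.com": {
      "auth": "<base64_encoded_username:password>"
    }
  }
}

The existing entries are kept as-is, and the new registry is added as one more key under auths.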
Enter the following command to update the global pull secret for your cluster: $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 4.4. Enabling remote health reporting If you or your organization have disabled remote health reporting, you can enable this feature again. You can see that remote health reporting is disabled from the message "Insights not available" in the Status tile on the OpenShift Container Platform Web Console Overview page. To enable remote health reporting, you must modify the global cluster pull secret with a new authorization token. Note Enabling remote health reporting enables both Insights Operator and Telemetry. 4.4.1. Modifying your global cluster pull secret to enable remote health reporting You can modify your existing global cluster pull secret to enable remote health reporting. If you have previously disabled remote health monitoring, you must first download a new pull secret with your console.openshift.com access token from Red Hat OpenShift Cluster Manager. Prerequisites Access to the cluster as a user with the cluster-admin role. Access to OpenShift Cluster Manager. Procedure Navigate to https://console.redhat.com/openshift/downloads . From Tokens Pull Secret , click Download . The file pull-secret.txt containing your cloud.openshift.com access token in JSON format downloads: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": " <email_address> " } } } Download the global cluster pull secret to your local file system. $ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull-secret Make a backup copy of your pull secret. $ cp pull-secret pull-secret-backup Open the pull-secret file in a text editor. Append the cloud.openshift.com JSON entry from pull-secret.txt into auths . Save the file. Update the secret in your cluster. $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret It may take several minutes for the secret to update and your cluster to begin reporting. Verification Navigate to the OpenShift Container Platform Web Console Overview page. Insights in the Status tile reports the number of issues found. 4.5. Using Insights to identify issues with your cluster Insights repeatedly analyzes the data Insights Operator sends. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. 4.5.1. About Red Hat Insights Advisor for OpenShift Container Platform You can use Insights Advisor to assess and monitor the health of your OpenShift Container Platform clusters. Whether you are concerned about individual clusters or your whole infrastructure, it is important to be aware of the exposure of your cluster infrastructure to issues that can affect service availability, fault tolerance, performance, or security. Using cluster data collected by the Insights Operator, Insights repeatedly compares that data against a library of recommendations . Each recommendation is a set of cluster-environment conditions that can leave OpenShift Container Platform clusters at risk.
The results of the Insights analysis are available in the Insights Advisor service on Red Hat Hybrid Cloud Console. In the Console, you can perform the following actions: See clusters impacted by a specific recommendation. Use robust filtering capabilities to refine your results to those recommendations. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual clusters. Share results with other stakeholders. 4.5.2. Understanding Insights Advisor recommendations Insights Advisor bundles information about various cluster states and component configurations that can negatively affect the service availability, fault tolerance, performance, or security of your clusters. This information set is called a recommendation in Insights Advisor and includes the following information: Name: A concise description of the recommendation Added: When the recommendation was published to the Insights Advisor archive Category: Whether the issue has the potential to negatively affect service availability, fault tolerance, performance, or security Total risk: A value derived from the likelihood that the condition will negatively affect your infrastructure, and the impact on operations if that were to happen Clusters: A list of clusters on which a recommendation is detected Description: A brief synopsis of the issue, including how it affects your clusters Link to associated topics: More information from Red Hat about the issue 4.5.3. Displaying potential issues with your cluster This section describes how to display the Insights report in Insights Advisor on OpenShift Cluster Manager . Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can change, for example, if you fix an issue or a new issue has been detected. Prerequisites Your cluster is registered on OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Depending on the result, Insights Advisor displays one of the following: No matching recommendations found , if Insights did not identify any issues. A list of issues Insights has detected, grouped by risk (low, moderate, important, and critical). No clusters yet , if Insights has not yet analyzed the cluster. The analysis starts shortly after the cluster has been installed, registered, and connected to the internet. If any issues are displayed, click the > icon in front of the entry for more details. Depending on the issue, the details can also contain a link to more information from Red Hat about the issue. 4.5.4. Displaying all Insights Advisor recommendations The Recommendations view, by default, only displays the recommendations that are detected on your clusters. However, you can view all of the recommendations in the advisor archive. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on Red Hat Hybrid Cloud Console. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Click the X icons next to the Clusters Impacted and Status filters. You can now browse through all of the potential recommendations for your cluster. 4.5.5. Advisor recommendation filters The Insights advisor service can return a large number of recommendations.
To focus on your most critical recommendations, you can apply filters to the Advisor recommendations list to remove low-priority recommendations. By default, filters are set to only show enabled recommendations that are impacting one or more clusters. To view all or disabled recommendations in the Insights library, you can customize the filters. To apply a filter, select a filter type and then set its value based on the options that are available in the drop-down list. You can apply multiple filters to the list of recommendations. You can set the following filter types: Name: Search for a recommendation by name. Total risk: Select one or more values from Critical , Important , Moderate , and Low indicating the likelihood and the severity of a negative impact on a cluster. Impact: Select one or more values from Critical , High , Medium , and Low indicating the potential impact to the continuity of cluster operations. Likelihood: Select one or more values from Critical , High , Medium , and Low indicating the potential for a negative impact to a cluster if the recommendation comes to fruition. Category: Select one or more categories from Service Availability , Performance , Fault Tolerance , Security , and Best Practice to focus your attention on. Status: Click a radio button to show enabled recommendations (default), disabled recommendations, or all recommendations. Clusters impacted: Set the filter to show recommendations currently impacting one or more clusters, non-impacting recommendations, or all recommendations. Risk of change: Select one or more values from High , Moderate , Low , and Very low indicating the risk that the implementation of the resolution could have on cluster operations. 4.5.5.1. Filtering Insights advisor recommendations As an OpenShift Container Platform cluster manager, you can filter the recommendations that are displayed on the recommendations list. By applying filters, you can reduce the number of reported recommendations and concentrate on your highest priority recommendations. The following procedure demonstrates how to set and remove Category filters; however, the procedure is applicable to any of the filter types and respective values. Prerequisites You are logged in to the OpenShift Cluster Manager Hybrid Cloud Console . Procedure Go to Red Hat Hybrid Cloud Console OpenShift Advisor recommendations . In the main, filter-type drop-down list, select the Category filter type. Expand the filter-value drop-down list and select the checkbox next to each category of recommendation you want to view. Leave the checkboxes for unnecessary categories clear. Optional: Add additional filters to further refine the list. Only recommendations from the selected categories are shown in the list. Verification After applying filters, you can view the updated recommendations list. The applied filters are added to the default filters. 4.5.5.2. Removing filters from Insights Advisor recommendations You can apply multiple filters to the list of recommendations. When ready, you can remove them individually or completely reset them. Removing filters individually Click the X icon next to each filter, including the default filters, to remove them individually. Removing all non-default filters Click Reset filters to remove only the filters that you applied, leaving the default filters in place. 4.5.6. Disabling Insights Advisor recommendations You can disable specific recommendations that affect your clusters, so that they no longer appear in your reports.
It is possible to disable a recommendation for a single cluster or all of your clusters. Note Disabling a recommendation for all of your clusters also applies to any future clusters. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Optional: Use the Clusters Impacted and Status filters as needed. Disable an alert by using one of the following methods: To disable an alert: Click the Options menu for that alert, and then click Disable recommendation . Enter a justification note and click Save . To view the clusters affected by this alert before disabling the alert: Click the name of the recommendation to disable. You are directed to the single recommendation page. Review the list of clusters in the Affected clusters section. Click Actions Disable recommendation to disable the alert for all of your clusters. Enter a justification note and click Save . 4.5.7. Enabling a previously disabled Insights Advisor recommendation When a recommendation is disabled for all clusters, you no longer see the recommendation in the Insights Advisor. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Filter the recommendations to display on the disabled recommendations: From the Status drop-down menu, select Status . From the Filter by status drop-down menu, select Disabled . Optional: Clear the Clusters impacted filter. Locate the recommendation to enable. Click the Options menu , and then click Enable recommendation . 4.5.8. Displaying the Insights status in the web console Insights repeatedly analyzes your cluster and you can display the status of identified potential issues of your cluster in the OpenShift Container Platform web console. This status shows the number of issues in the different categories and, for further details, links to the reports in OpenShift Cluster Manager . Prerequisites Your cluster is registered in OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console. Procedure Navigate to Home Overview in the OpenShift Container Platform web console. Click Insights on the Status card. The pop-up window lists potential issues grouped by risk. Click the individual categories or View all recommendations in Insights Advisor to display more details. 4.6. Using the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . For more information on using Insights Advisor to identify issues with your cluster, see Using Insights to identify issues with your cluster . 4.6.1. 
Configuring Insights Operator Insights Operator configuration is a combination of the default Operator configuration and the configuration that is stored in either the insights-config ConfigMap object in the openshift-insights namespace, or in the support secret in the openshift-config namespace. When a ConfigMap object or support secret exists, the contained attribute values override the default Operator configuration values. If both a ConfigMap object and a support secret exist, the Operator reads the ConfigMap object. The ConfigMap object does not exist by default, so an OpenShift Container Platform cluster administrator must create it. ConfigMap object configuration structure An insights-config ConfigMap object expresses its configuration options in a config.yaml key using standard YAML formatting; a complete example appears in the creation procedure that follows. Configurable attributes and default values The table below describes the available configuration attributes: Note The insights-config ConfigMap object follows standard YAML formatting, wherein child values are below the parent attribute and indented two spaces. For the Obfuscation attribute, enter values as bulleted children of the parent attribute.
Table 4.1. Insights Operator configurable attributes
Attribute name | Description | Value type | Default value
Obfuscation: - networking | Enables the global obfuscation of IP addresses and the cluster domain name. | Boolean | false
Obfuscation: - workload_names | Obfuscates data coming from the Deployment Validation Operator if it is installed. | Boolean | false
sca: interval | Specifies the frequency of the simple content access entitlements download. | Time interval | 8h
sca: disabled | Disables the simple content access entitlements download. | Boolean | false
alerting: disabled | Disables Insights Operator alerts to the cluster Prometheus instance. | Boolean | false
httpProxy , httpsProxy , noProxy | Set custom proxy for Insights Operator | URL | No default
4.6.1.1. Creating the insights-config ConfigMap object This procedure describes how to create the insights-config ConfigMap object for the Insights Operator to set custom configurations. Important Red Hat recommends you consult Red Hat Support before making changes to the default Insights Operator configuration. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as a user with cluster-admin role. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click Create ConfigMap . Select Configure via: YAML view and enter your configuration preferences, for example apiVersion: v1 kind: ConfigMap metadata: name: insights-config namespace: openshift-insights data: config.yaml: | dataReporting: obfuscation: - networking - workload_names sca: disabled: false interval: 2h alerting: disabled: false binaryData: {} immutable: false Optional: Select Form view and enter the necessary information that way. In the ConfigMap Name field, enter insights-config . In the Key field, enter config.yaml . For the Value field, either browse for a file to drag and drop into the field or enter your configuration parameters manually. Click Create and you can see the ConfigMap object and configuration information. 4.6.2. Understanding Insights Operator alerts The Insights Operator declares alerts through the Prometheus monitoring system to the Alertmanager.
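These alerts can also be queried directly from the Alertmanager API; the following is a sketch that assumes the default alertmanager-main route in the openshift-monitoring namespace and a logged-in user with sufficient permissions:

$ curl -G -k -H "Authorization: Bearer $(oc whoami -t)" \
  "https://$(oc get route alertmanager-main -n openshift-monitoring -o jsonpath='{.spec.host}')/api/v2/alerts" \
  --data-urlencode 'filter=alertname="InsightsDisabled"'

An empty JSON array in the response means the alert is not currently firing.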
You can view these alerts in the Alerting UI in the OpenShift Container Platform web console by using one of the following methods: In the Administrator perspective, click Observe Alerting . In the Developer perspective, click Observe <project_name> Alerts tab. Currently, Insights Operator sends the following alerts when the conditions are met:
Table 4.2. Insights Operator alerts
Alert | Description
InsightsDisabled | Insights Operator is disabled.
SimpleContentAccessNotAvailable | Simple content access is not enabled in Red Hat Subscription Management.
InsightsRecommendationActive | Insights has an active recommendation for the cluster.
4.6.2.1. Disabling Insights Operator alerts To prevent the Insights Operator from sending alerts to the cluster Prometheus instance, you create or edit the insights-config ConfigMap object. Note Previously, a cluster administrator would create or edit the Insights Operator configuration using a support secret in the openshift-config namespace. Red Hat Insights now supports the creation of a ConfigMap object to configure the Operator. The Operator gives preference to the config map configuration over the support secret if both exist. If the insights-config ConfigMap object does not exist, you must create it when you first add custom configurations. Note that configurations within the ConfigMap object take precedence over the default settings defined in the config/pod.yaml file. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the alerting attribute to disabled: true . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | alerting: disabled: true # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml alerting attribute is set to disabled: true . After you save the changes, Insights Operator no longer sends alerts to the cluster Prometheus instance. 4.6.2.2. Enabling Insights Operator alerts When alerts are disabled, the Insights Operator no longer sends alerts to the cluster Prometheus instance. You can reenable them. Note Previously, a cluster administrator would create or edit the Insights Operator configuration using a support secret in the openshift-config namespace. Red Hat Insights now supports the creation of a ConfigMap object to configure the Operator. The Operator gives preference to the config map configuration over the support secret if both exist. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the alerting attribute to disabled: false . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | alerting: disabled: false # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml alerting attribute is set to disabled: false .
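If you prefer to confirm the change from the command line, the following is a minimal sketch; it assumes the default object and namespace names used throughout this section:

$ oc get configmap insights-config -n openshift-insights -o jsonpath='{.data.config\.yaml}'

The printed YAML should show the alerting attribute with the value you just saved.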
After you save the changes, Insights Operator again sends alerts to the cluster Prometheus instance. 4.6.3. Downloading your Insights Operator archive Insights Operator stores gathered data in an archive located in the openshift-insights namespace of your cluster. You can download and review the data that is gathered by the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Find the name of the running pod for the Insights Operator: $ oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running Copy the recent data archives collected by the Insights Operator: $ oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1 1 Replace <insights_operator_pod_name> with the pod name output from the preceding command. The recent Insights Operator archives are now available in the insights-data directory. 4.6.4. Running an Insights Operator gather operation You can run Insights Operator data gather operations on demand. The following procedures describe how to run the default list of gather operations using the OpenShift web console or CLI. You can customize the on demand gather function to exclude any gather operations you choose. Disabling gather operations from the default list degrades Insights Advisor's ability to offer effective recommendations for your cluster. If you have previously disabled Insights Operator gather operations in your cluster, this procedure will override those parameters. Important The DataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note If you enable Technology Preview in your cluster, the Insights Operator runs gather operations in individual pods. This is part of the Technology Preview feature set for the Insights Operator and supports the new data gathering features. 4.6.4.1. Viewing Insights Operator gather durations You can view the time it takes for the Insights Operator to gather the information contained in the archive. This helps you to understand Insights Operator resource usage and issues with Insights Advisor. Prerequisites A recent copy of your Insights Operator archive. Procedure From your archive, open /insights-operator/gathers.json . The file contains a list of Insights Operator gather operations: { "name": "clusterconfig/authentication", "duration_in_ms": 730, 1 "records_count": 1, "errors": null, "panic": null } 1 duration_in_ms is the amount of time in milliseconds for each gather operation. Inspect each gather operation for abnormalities. 4.6.4.2. Running an Insights Operator gather operation using the web console You can run an Insights Operator gather operation using the OpenShift Container Platform web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Administration CustomResourceDefinitions .
On the CustomResourceDefinitions page, use the Search by name field to find the DataGather resource definition and click it. On the CustomResourceDefinition details page, click the Instances tab. Click Create DataGather . To create a new DataGather operation, edit the configuration file: apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled 1 Replace <your_data_gather> with a unique name for your gather operation. 2 Enter individual gather operations to disable under the gatherers parameter. This example disables the workloads data gather operation and will run the remainder of the default operations. To run the complete list of default gather operations, leave the spec parameter empty. You can find the complete list of gather operations in the Insights Operator documentation. Click Save . Verification Navigate to Workloads Pods . On the Pods page, select the Project pulldown menu, and then turn on Show default projects. Select the openshift-insights project from the Project pulldown menu. Check that your new gather operation is prefixed with your chosen name under the list of pods in the openshift-insights project. Upon completion, the Insights Operator automatically uploads the data to Red Hat for processing. 4.6.4.3. Running an Insights Operator gather operation using the OpenShift CLI You can run an Insights Operator gather operation using the OpenShift Container Platform command line interface. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Enter the following command to run the gather operation: $ oc apply -f <your_datagather_definition>.yaml Replace <your_datagather_definition>.yaml with a configuration file using the following parameters: apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled 1 Replace <your_data_gather> with a unique name for your gather operation. 2 Enter individual gather operations to disable under the gatherers parameter. This example disables the workloads data gather operation and will run the remainder of the default operations. To run the complete list of default gather operations, leave the spec parameter empty. You can find the complete list of gather operations in the Insights Operator documentation. Verification Check that your new gather operation is prefixed with your chosen name under the list of pods in the openshift-insights project. Upon completion, the Insights Operator automatically uploads the data to Red Hat for processing. 4.6.4.4. Disabling the Insights Operator gather operations You can disable the Insights Operator gather operations. Disabling the gather operations gives you the ability to increase privacy for your organization as Insights Operator will no longer gather and send Insights cluster reports to Red Hat. This will disable Insights analysis and recommendations for your cluster without affecting other core functions that require communication with Red Hat such as cluster transfers. You can view a list of attempted gather operations for your cluster from the /insights-operator/gathers.json file in your Insights Operator archive. Be aware that some gather operations only occur when certain conditions are met and might not appear in your most recent archive. Important The InsightsDataGather custom resource is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note If you enable Technology Preview in your cluster, the Insights Operator runs gather operations in individual pods. This is part of the Technology Preview feature set for the Insights Operator and supports the new data gathering features. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, use the Search by name field to find the InsightsDataGather resource definition and click it. On the CustomResourceDefinition details page, click the Instances tab. Click cluster , and then click the YAML tab. Disable the gather operations by performing one of the following edits to the InsightsDataGather configuration file: To disable all the gather operations, enter all under the disabledGatherers key: apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: .... spec: 1 gatherConfig: disabledGatherers: - all 2 1 The spec parameter specifies gather configurations. 2 The all value disables all gather operations. To disable individual gather operations, enter their values under the disabledGatherers key: spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info 1 Example individual gather operation Click Save . After you save the changes, the Insights Operator gather configurations are updated and the operations will no longer occur. Note Disabling gather operations degrades Insights Advisor's ability to offer effective recommendations for your cluster. 4.6.4.5. Enabling the Insights Operator gather operations You can enable the Insights Operator gather operations, if the gather operations have been disabled. Important The InsightsDataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, use the Search by name field to find the InsightsDataGather resource definition and click it. On the CustomResourceDefinition details page, click the Instances tab. Click cluster , and then click the YAML tab. Enable the gather operations by performing one of the following edits: To enable all disabled gather operations, remove the gatherConfig stanza: apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: .... 
spec: gatherConfig: 1 disabledGatherers: all 1 Remove the gatherConfig stanza to enable all gather operations. To enable individual gather operations, remove their values under the disabledGatherers key: spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info 1 Remove one or more gather operations. Click Save . After you save the changes, the Insights Operator gather configurations are updated and the affected gather operations start. Note Disabling gather operations degrades Insights Advisor's ability to offer effective recommendations for your cluster. 4.6.5. Obfuscating Deployment Validation Operator data Cluster administrators can configure the Insights Operator to obfuscate data from the Deployment Validation Operator (DVO), if the Operator is installed. When the workload_names value is added to the insights-config ConfigMap object, workload names, rather than UIDs, are displayed in Insights for OpenShift, making them more recognizable for cluster administrators. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console with the cluster-admin role. The insights-config ConfigMap object exists in the openshift-insights namespace. The cluster is self managed and the Deployment Validation Operator is installed. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the obfuscation attribute with the workload_names value. apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | dataReporting: obfuscation: - workload_names # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml obfuscation attribute is set to - workload_names . 4.7. Using remote health reporting in a restricted network You can manually gather and upload Insights Operator archives to diagnose issues from a restricted network. To use the Insights Operator in a restricted network, you must: Create a copy of your Insights Operator archive. Upload the Insights Operator archive to console.redhat.com . Additionally, you can choose to obfuscate the Insights Operator data before upload. 4.7.1. Running an Insights Operator gather operation You must run a gather operation to create an Insights Operator archive. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin .
Procedure Create a file named gather-job.yaml using this template: apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}] Copy your insights-operator image version: $ oc get -n openshift-insights deployment insights-operator -o yaml Example output apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights # ... spec: template: # ... spec: containers: - args: # ... image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 # ... 1 Specifies your insights-operator image version. Paste your image version in gather-job.yaml : apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job # ... spec: # ... template: spec: initContainers: - name: insights-operator image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts: 1 Replace any existing value with your insights-operator image version. Create the gather job: $ oc apply -n openshift-insights -f gather-job.yaml Find the name of the job pod: $ oc describe -n openshift-insights job/insights-operator-job Example output Name: insights-operator-job Namespace: openshift-insights # ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job> where insights-operator-job-<your_job> is the name of the pod. Verify that the operation has finished: $ oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator Example output I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms Save the created archive: $ oc cp openshift-insights/insights-operator-job-<your_job>:/var/lib/insights-operator ./insights-data Clean up the job: $ oc delete -n openshift-insights job insights-operator-job 4.7.2. Uploading an Insights Operator archive You can manually upload an Insights Operator archive to console.redhat.com to diagnose potential issues. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . You have a workstation with unrestricted internet access. You have created a copy of the Insights Operator archive.
Procedure Download the dockerconfig.json file: $ oc extract secret/pull-secret -n openshift-config --to=. Copy your "cloud.openshift.com" "auth" token from the dockerconfig.json file: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": "<email_address>" } } } Upload the archive to console.redhat.com : $ curl -v -H "User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/<cluster_id>" -H "Authorization: Bearer <your_token>" -F "upload=@<path_to_archive>; type=application/vnd.redhat.openshift.periodic+tar" https://console.redhat.com/api/ingress/v1/upload where <cluster_id> is your cluster ID, <your_token> is the token from your pull secret, and <path_to_archive> is the path to the Insights Operator archive. If the operation is successful, the command returns a "request_id" and "account_number" : Example output * Connection #0 to host console.redhat.com left intact {"request_id":"393a7cf1093e434ea8dd4ab3eb28884c","upload":{"account_number":"6274079"}}% Verification steps Log in to https://console.redhat.com/openshift . Click the Clusters menu in the left pane. To display the details of the cluster, click the cluster name. Open the Insights Advisor tab of the cluster. If the upload was successful, the tab displays one of the following: Your cluster passed all recommendations , if Insights Advisor did not identify any issues. A list of issues that Insights Advisor has detected, prioritized by risk (low, moderate, important, and critical). 4.7.3. Enabling Insights Operator data obfuscation You can enable obfuscation to mask sensitive and identifiable IPv4 addresses and cluster base domains that the Insights Operator sends to console.redhat.com . Warning Although this feature is available, Red Hat recommends keeping obfuscation disabled for a more effective support experience. Obfuscation assigns non-identifying values to cluster IPv4 addresses, and uses a translation table that is retained in memory to change IP addresses to their obfuscated versions throughout the Insights Operator archive before uploading the data to console.redhat.com . For cluster base domains, obfuscation changes the base domain to a hardcoded substring. For example, cluster-api.openshift.example.com becomes cluster-api.<CLUSTER_BASE_DOMAIN> . The following procedure enables obfuscation using the support secret in the openshift-config namespace. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If it does not exist, click Create Key/value secret to create it. Click the Options menu , and then click Edit Secret . Click Add Key/Value . Create a key named enableGlobalObfuscation with a value of true , and click Save . Navigate to Workloads Pods . Select the openshift-insights project. Find the insights-operator pod. To restart the insights-operator pod, click the Options menu , and then click Delete Pod . Verification Navigate to Workloads Secrets . Select the openshift-insights project. Search for the obfuscation-translation-table secret using the Search by name field. If the obfuscation-translation-table secret exists, then obfuscation is enabled and working. Alternatively, you can inspect /insights-operator/gathers.json in your Insights Operator archive for the value "is_global_obfuscation_enabled": true .
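A quick way to perform that inspection from a workstation is sketched below; it assumes the archive was copied to the insights-data directory as described earlier and that the archive file name is a placeholder you substitute:

$ mkdir -p ./archive
$ tar -xzf ./insights-data/<archive_name>.tar.gz -C ./archive
$ grep is_global_obfuscation_enabled ./archive/insights-operator/gathers.json

If obfuscation is active, the grep prints a line containing "is_global_obfuscation_enabled": true.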
Additional resources For more information on how to download your Insights Operator archive, see Showing data collected by the Insights Operator . 4.8. Importing simple content access entitlements with Insights Operator Insights Operator periodically imports your simple content access entitlements from OpenShift Cluster Manager and stores them in the etc-pki-entitlement secret in the openshift-config-managed namespace. Simple content access is a capability in Red Hat subscription tools which simplifies the behavior of the entitlement tooling. This feature makes it easier to consume the content provided by your Red Hat subscriptions without the complexity of configuring subscription tooling. Note Previously, a cluster administrator would create or edit the Insights Operator configuration using a support secret in the openshift-config namespace. Red Hat Insights now supports the creation of a ConfigMap object to configure the Operator. The Operator gives preference to the config map configuration over the support secret if both exist. The Insights Operator imports simple content access entitlements every eight hours, but can be configured or disabled using the insights-config ConfigMap object in the openshift-insights namespace. Note Simple content access must be enabled in Red Hat Subscription Management for the importing to function. Additional resources See About simple content access in the Red Hat Subscription Central documentation, for more information about simple content access. See Using Red Hat subscriptions in builds for more information about using simple content access entitlements in OpenShift Container Platform builds. 4.8.1. Configuring simple content access import interval You can configure how often the Insights Operator imports the simple content access (sca) entitlements by using the insights-config ConfigMap object in the openshift-insights namespace. The entitlement import normally occurs every eight hours, but you can shorten this sca interval if you update your simple content access configuration in the insights-config ConfigMap object. This procedure describes how to update the import interval to two hours (2h). You can specify hours (h) or hours and minutes, for example: 2h30m. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. Set the sca attribute in the file to interval: 2h to import content every two hours. apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | sca: interval: 2h # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml sca attribute is set to interval: 2h . 4.8.2. Disabling simple content access import You can disable the importing of simple content access entitlements by using the insights-config ConfigMap object in the openshift-insights namespace. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . 
Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the sca attribute to disabled: true . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | sca: disabled: true # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml sca attribute is set to disabled: true . 4.8.3. Enabling a previously disabled simple content access import If the importing of simple content access entitlements is disabled, the Insights Operator does not import simple content access entitlements. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the sca attribute to disabled: false . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | sca: disabled: false # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml sca attribute is set to disabled: false . | [
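To confirm that entitlement importing has resumed, you can also check for the secret that the Insights Operator maintains; this is a minimal sketch based on the secret and namespace names given earlier in this section:

$ oc get secret etc-pki-entitlement -n openshift-config-managed

The secret appears, or is refreshed, after the next import interval once simple content access importing is enabled.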
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' 
--data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 
'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 
'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}'",
"INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)",
"oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data",
"oc extract secret/pull-secret -n openshift-config --to=.",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } } }",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret",
"cp pull-secret pull-secret-backup",
"set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: insights-config namespace: openshift-insights data: config.yaml: | dataReporting: obfuscation: - networking - workload_names sca: disabled: false interval: 2h alerting: disabled: false binaryData: {} immutable: false",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: false",
"oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running",
"oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1",
"{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"oc apply -f <your_datagather_definition>.yaml",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: 1 gatherConfig: disabledGatherers: - all 2",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: gatherConfig: 1 disabledGatherers: all",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | dataReporting: obfuscation: - workload_names",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]",
"oc get -n openshift-insights deployment insights-operator -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights spec: template: spec: containers: - args: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job spec: template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts:",
"oc apply -n openshift-insights -f gather-job.yaml",
"oc describe -n openshift-insights job/insights-operator-job",
"Name: insights-operator-job Namespace: openshift-insights Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job>",
"oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator",
"I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms",
"oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data",
"oc delete -n openshift-insights job insights-operator-job",
"oc extract secret/pull-secret -n openshift-config --to=.",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }",
"curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload",
"* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: interval: 2h",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: false"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/support/remote-health-monitoring-with-connected-clusters |
Chapter 1. Block Storage backup service overview | Chapter 1. Block Storage backup service overview The Block Storage service (cinder) of the Red Hat OpenStack Platform (RHOSP) provides an optional backup service that you can deploy on Controller nodes. You can use the Block Storage backup service to create and restore full or incremental backups of your Block Storage volumes. A volume backup is a persistent copy of the contents of a Block Storage volume that is saved to a backup repository. Some features of the Block Storage backup service can impact the performance of the backups. For more information, see Backup performance considerations . 1.1. Backup repository back ends By default, your backup repository uses the Red Hat OpenStack Platform Object Storage service (swift) back end and the volume backups are created as object stores. However, you can choose to use Red Hat Ceph Storage, NFS, or S3 as your backup repository back end. The Block Storage backup service can back up volumes on any back end that the Block Storage service (cinder) supports, regardless of which back end you choose to use for your backup repository. 1.2. Block Storage volume backup metadata When you create a backup of a Block Storage volume, the metadata for this backup is stored in the Block Storage service database. The Block Storage backup service uses this metadata when it restores the volume from the backup. Important To ensure that a backup survives a catastrophic loss of the Block Storage service database, you can manually export and store the metadata of this backup. After a catastrophic database loss, you need to create a new Block Storage database and then manually re-import this backup metadata into it. For more information, see Protecting your backups . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/backing_up_block_storage_volumes/assembly_backup-overview |
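For illustration, the create and restore operations described above map to client commands like the following; this is a minimal sketch assuming the openstack and cinder clients are installed and a volume named data-vol exists, with all names illustrative:
# Create a full backup of a Block Storage volume.
openstack volume backup create --name data-vol-full data-vol
# Create an incremental backup of the same volume.
openstack volume backup create --name data-vol-incr --incremental data-vol
# Restore a backup to a target volume (name or ID).
openstack volume backup restore data-vol-full data-vol-restored
# Export the backup metadata so it can be re-imported after a database loss.
cinder backup-export <backup_id>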
Chapter 6. Expanding persistent volumes | Chapter 6. Expanding persistent volumes 6.1. Enabling volume expansion support Before you can expand persistent volumes, the StorageClass object must have the allowVolumeExpansion field set to true . Procedure Edit the StorageClass object and add the allowVolumeExpansion attribute by running the following command: $ oc edit storageclass <storage_class_name> 1 1 Specifies the name of the storage class. The following example demonstrates adding this line at the bottom of the storage class configuration. apiVersion: storage.k8s.io/v1 kind: StorageClass ... parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1 1 Setting this attribute to true allows PVCs to be expanded after creation. 6.2. Expanding CSI volumes You can use the Container Storage Interface (CSI) to expand storage volumes after they have already been created. OpenShift Container Platform supports CSI volume expansion by default. However, a specific CSI driver is required. OpenShift Container Platform 4.9 supports version 1.1.0 of the CSI specification . Important Expanding CSI volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.3. Expanding FlexVolume with a supported driver When using FlexVolume to connect to your back-end storage system, you can expand persistent storage volumes after they have already been created. This is done by manually updating the persistent volume claim (PVC) in OpenShift Container Platform. FlexVolume allows expansion if the driver is set with RequiresFSResize to true . The FlexVolume can be expanded on pod restart. Similar to other volume types, FlexVolume volumes can also be expanded when in use by a pod. Prerequisites The underlying volume driver supports resize. The driver is set with the RequiresFSResize capability to true . Dynamic provisioning is used. The controlling StorageClass object has allowVolumeExpansion set to true . Procedure To use resizing in the FlexVolume plugin, you must implement the ExpandableVolumePlugin interface using these methods: RequiresFSResize If true , updates the capacity directly. If false , calls the ExpandFS method to finish the filesystem resize. ExpandFS If true , calls ExpandFS to resize filesystem after physical volume expansion is done. The volume driver can also perform physical volume resize together with filesystem resize. Important Because OpenShift Container Platform does not support installation of FlexVolume plugins on control plane nodes, it does not support control-plane expansion of FlexVolume. 6.4. Expanding persistent volume claims (PVCs) with a file system Expanding PVCs based on volume types that need file system resizing, such as GCE PD, EBS, and Cinder, is a two-step process. This process involves expanding volume objects in the cloud provider, and then expanding the file system on the actual node. Expanding the file system on the node only happens when a new pod is started with the volume. Prerequisites The controlling StorageClass object must have allowVolumeExpansion set to true . 
Procedure Edit the PVC and request a new size by editing spec.resources.requests . For example, the following expands the ebs PVC to 8 Gi. kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: "storageClassWithFlagSet" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1 1 Updating spec.resources.requests to a larger amount will expand the PVC. After the cloud provider object has finished resizing, the PVC is set to FileSystemResizePending . Check the condition by entering the following command: $ oc describe pvc <pvc_name> When the cloud provider object has finished resizing, the PersistentVolume object reflects the newly requested size in PersistentVolume.Spec.Capacity . At this point, you can create or recreate a new pod from the PVC to finish the file system resizing. Once the pod is running, the newly requested size is available and the FileSystemResizePending condition is removed from the PVC. 6.5. Recovering from failure when expanding volumes If expanding underlying storage fails, the OpenShift Container Platform administrator can manually recover the persistent volume claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller without administrator intervention. Procedure Mark the persistent volume (PV) that is bound to the PVC with the Retain reclaim policy. This can be done by editing the PV and changing persistentVolumeReclaimPolicy to Retain . Delete the PVC. This will be recreated later. To ensure that the newly created PVC can bind to the PV marked Retain , manually edit the PV and delete the claimRef entry from the PV specs. This marks the PV as Available . Re-create the PVC in a smaller size, or a size that can be allocated by the underlying storage provider. Set the volumeName field of the PVC to the name of the PV. This binds the PVC to the provisioned PV only. Restore the reclaim policy on the PV. | [
"oc edit storageclass <storage_class_name> 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1",
"oc describe pvc <pvc_name>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/storage/expanding-persistent-volumes |
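As a non-interactive alternative to editing the PVC shown in the procedure above, the same resize request can be submitted with oc patch; this is a sketch reusing the example's PVC name and size, both illustrative:
# Request a larger size on the existing PVC without opening an editor.
oc patch pvc ebs -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
# Check for the FileSystemResizePending condition while the resize completes.
oc get pvc ebs -o jsonpath='{.status.conditions[*].type}'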
4.8. Multiple Insert Batches | 4.8. Multiple Insert Batches If you work with request-level transactions and issue an INSERT with a query expression (or the deprecated SELECT INTO), the Red Hat JBoss Data Virtualization server may process the operation as multiple insert batches, each handled by a separate source INSERT. Take care to ensure that targeted sources support XA or that compensating actions are taken in the event of a failure. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/multiple_insert_batches1
Chapter 28. Configuring Core Bridges | Chapter 28. Configuring Core Bridges The function of a bridge is to consume messages from one destination and forward them to another one, typically on a different JBoss EAP messaging server. The source and target servers do not have to be in the same cluster, which makes bridging suitable for reliably sending messages from one cluster to another, for instance across a WAN or the internet, where the connection may be unreliable. The bridge has built-in resilience to failure so if the target server connection is lost, for example, due to network failure, the bridge will retry connecting to the target until it comes back online. When it comes back online, it will resume operation as normal. Bridges are a way to reliably connect two separate JBoss EAP messaging servers together. With a core bridge, both source and target servers must be JBoss EAP 7 messaging servers. Note Do not confuse a core bridge with a Jakarta Messaging bridge. A core bridge is used to bridge any two JBoss EAP messaging instances and uses the core API. A Jakarta Messaging bridge can be used to bridge any two Jakarta Messaging 2.0 compliant Jakarta Messaging providers and uses the Jakarta Messaging API. It is preferable to use a core bridge instead of a Jakarta Messaging bridge whenever possible. Below is an example configuration of a JBoss EAP messaging core bridge. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <bridge name="my-core-bridge" static-connectors="bridge-connector" queue-name="jms.queue.InQueue"/> ... </server> </subsystem> This core bridge can be added using the following management CLI command. Note that when defining a core bridge, you must define a queue-name and either static-connectors or discovery-group . See the table in the appendix for a full list of configurable attributes. 28.1. Configuring a Core Bridge for Duplicate Detection Core bridges can be configured to automatically add a unique duplicate ID value, if there is not already one in the message, before forwarding the message to the target. To configure a core bridge for duplicate message detection, set the use-duplicate-detection attribute to true , which is the default value.
"<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <bridge name=\"my-core-bridge\" static-connectors=\"bridge-connector\" queue-name=\"jms.queue.InQueue\"/> </server> </subsystem>",
"/subsystem=messaging-activemq/server=default/bridge=my-core-bridge:add(static-connectors=[bridge-connector],queue-name=jms.queue.InQueue)",
"/subsystem=messaging-activemq/server=default/bridge=my-core-bridge:write-attribute(name=use-duplicate-detection,value=true)"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/configuring_core_bridges |
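When the target servers are discovered dynamically rather than listed statically, the bridge can reference a discovery group instead of static-connectors, as the chapter notes. A minimal sketch from a shell, assuming a discovery group named dg-group1 already exists:
# Run from EAP_HOME/bin; the discovery group name is an assumption.
./jboss-cli.sh --connect --command="/subsystem=messaging-activemq/server=default/bridge=my-discovery-bridge:add(discovery-group=dg-group1,queue-name=jms.queue.InQueue)"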
7.2. Caching | 7.2. Caching Caching options can be configured with virt-manager during guest installation, or on an existing guest virtual machine by editing the guest XML configuration. Table 7.1. Caching options Caching Option Description Cache=none I/O from the guest is not cached on the host, but may be kept in a writeback disk cache. Use this option for guests with large I/O requirements. This option is generally the best choice, and is the only option to support migration. Cache=writethrough I/O from the guest is cached on the host but written through to the physical medium. This mode is slower and prone to scaling problems. Best used for small number of guests with lower I/O requirements. Suggested for guests that do not support a writeback cache (such as Red Hat Enterprise Linux 5.5 and earlier), where migration is not needed. Cache=writeback I/O from the guest is cached on the host. Cache=directsync Similar to writethrough , but I/O from the guest bypasses the host page cache. Cache=unsafe The host may cache all disk I/O, and sync requests from guest are ignored. Cache=default If no cache mode is specified, the system's default settings are chosen. In virt-manager , the caching mode can be specified under Virtual Disk . For information on using virt-manager to change the cache mode, see Section 3.3, "Virtual Disk Performance Options" To configure the cache mode in the guest XML, edit the cache setting inside the driver tag to specify a caching option. For example, to set the cache as writeback : <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> | [
"<disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/>"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-blockio-caching |
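Before changing the cache attribute in the guest XML, the current setting can be checked from a shell; the guest name below is illustrative:
# Show the disk driver lines, including the cache attribute, for a guest.
virsh dumpxml rhel7-guest | grep -A1 "<driver"
# Open the guest definition in an editor to change the cache setting.
virsh edit rhel7-guest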
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/conscious-language-message_automating-sap-hana-scale-out |
Using the AMQ OpenWire JMS Client | Using the AMQ OpenWire JMS Client Red Hat AMQ 2021.Q1 For Use with AMQ Clients 2.9 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_openwire_jms_client/index |
Chapter 2. The value of registering your RHEL system to Red Hat | Chapter 2. The value of registering your RHEL system to Red Hat Registration establishes an authorized connection between your system and Red Hat. Red Hat issues the registered system, whether a physical or virtual machine, a certificate that identifies and authenticates the system so that it can receive protected content, software updates, security patches, support, and managed services from Red Hat. With a valid subscription, you can register a Red Hat Enterprise Linux (RHEL) system in the following ways: During the installation process, using an installer graphical user interface (GUI) or text user interface (TUI) After installation, using the command-line interface (CLI) Automatically, during or after installation, using a kickstart script or an activation key. The specific steps to register your system depend on the version of RHEL that you are using and the registration method that you choose. Registering your system to Red Hat enables features and capabilities that you can use to manage your system and report data. For example, a registered system is authorized to access protected content repositories for subscribed products through the Red Hat Content Delivery Network (CDN) or a Red Hat Satellite Server. These content repositories contain Red Hat software packages and updates that are available only to customers with an active subscription. These packages and updates include security patches, bug fixes, and new features for RHEL and other Red Hat products. Important The entitlement-based subscription model is deprecated and will be retired in the future. Simple content access is now the default subscription model. It provides an improved subscription experience that eliminates the need to attach a subscription to a system before you can access Red Hat subscription content on that system. If your Red Hat account uses the entitlement-based subscription model, contact your Red Hat account team, for example, a technical account manager (TAM) or solution architect (SA), to prepare for migration to simple content access. For more information, see Transition of subscription services to the hybrid cloud . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/the-value-of-registering-your-rhel-system-to-red-hat_rhel-installer
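As a sketch of the post-installation CLI path mentioned above, registration can be performed with subscription-manager; the placeholders are illustrative:
# Register interactively with a Red Hat account (prompts for the password).
subscription-manager register --username <username>
# Register non-interactively with an activation key and organization ID.
subscription-manager register --org <organization_id> --activationkey <activation_key>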
Chapter 8. Container Images Based on Red Hat Software Collections 3.3 | Chapter 8. Container Images Based on Red Hat Software Collections 3.3 Component Description Supported architectures Database Images rhscl/mariadb-103-rhel7 MariaDB 10.3 SQL database server (EOL) x86_64, s390x, ppc64le rhscl/redis-5-rhel7 Redis 5 key-value store (EOL) x86_64, s390x, ppc64le Red Hat Developer Toolset Images rhscl/devtoolset-8-toolchain-rhel7 Red Hat Developer Toolset toolchain (EOL) x86_64, s390x, ppc64le rhscl/devtoolset-8-perftools-rhel7 Red Hat Developer Toolset perftools (EOL) x86_64, s390x, ppc64le Legend: x86_64 - AMD64 and Intel 64 architectures s390x - 64-bit IBM Z ppc64le - IBM POWER, little endian All images are based on components from Red Hat Software Collections. The images are available for Red Hat Enterprise Linux 7 through the Red Hat Container Registry. For detailed information about components provided by Red Hat Software Collections 3.3, see the Red Hat Software Collections 3.3 Release Notes . For more information about the Red Hat Developer Toolset 8.1 components, see the Red Hat Developer Toolset 8 User Guide . For information regarding container images based on Red Hat Software Collections 2, see Using Red Hat Software Collections 2 Container Images . EOL images are no longer supported. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/RHSCL_3.3_images |
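For illustration, one of the listed images can be pulled from the Red Hat Container Registry and started; the registry path follows the repository names in the table, and the environment variable name is an assumption based on typical RHSCL image documentation:
# Pull the MariaDB 10.3 image from the Red Hat Container Registry.
docker pull registry.access.redhat.com/rhscl/mariadb-103-rhel7
# Start it locally; the variable name is an assumption, check the image documentation.
docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=<password> registry.access.redhat.com/rhscl/mariadb-103-rhel7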
Chapter 16. Servers and Services | Chapter 16. Servers and Services squid rebased to version 3.5.20 Squid is a fully-featured HTTP proxy, which offers a rich access control, authorization and logging environment to develop web proxy and content serving applications. The squid packages have been upgraded to version 3.5.20. The most notable changes include: Support for libecap version 1.0 Authentication helper query extensions Support for named services Upgraded the squidclient utility Helper support for concurrency channels Native FTP Relay Receive PROXY protocol, versions 1 and 2 SSL server certificate validator Note directive for annotating transactions TPROXY support for BSD systems spoof_client_ip directive for managing TPROXY spoofing Various Access Control updates Support for the OK, ERR, and BH response codes and the kv-pair options from any helper Improved pipeline queue configuration. Multicast DNS IMPORTANT: Note that when updating squid , certain configuration directives will be changed to their more recent versions. These modifications are backward-compatible, but if you want to prevent unexpected configuration changes, you can use the squid-migration-script package to preview the results of updating your squid configuration. For further information, see https://access.redhat.com/solutions/2678941 . (BZ#1273942) PHP cURL module now supports TLS 1.1 and TLS 1.2 Support for the TLS protocol version 1.1 and 1.2, which was previously made available in the curl library, has been added to the PHP cURL extension. (BZ# 1291667 ) SCTP in OpenSSL is now supported The SCTP (Stream Control Transmission Protocol) support in the OpenSSL library is now enabled for the OpenSSL DTLS (Datagram Transport Layer Security) protocol implementation. (BZ#1225379) Dovecot has tcp_wrappers support enabled Dovecot is an IMAP server, primarily written with security in mind. It also contains a small POP3 server and supports e-mail in either the Maildir or Mbox format. In this update, Dovecot is built with tcp_wrappers support enabled. You can now limit network access to Dovecot using tcp_wrappers as an additional layer of security. (BZ# 1229164 ) Necessary classes added to allow log4j as Tomcat logging mechanism Due to missing tomcat-juli.jar and tomcat-juli-adapters.jar files, the log4j utility could not be used as the Tomcat logging mechanism. The necessary classes have been added and log4j can now be used for logging. Also, symbolic links have to be created or updated to point to the described .jar files in the extras folder. (BZ# 1133070 ) MySQL-python rebased to version 1.2.5 The MySQL-python packages have been upgraded to upstream version 1.2.5, which provides a number of bug fixes and enhancements over the previous version. Notably, a bug causing ResourceClosedError in neutron and cinder services has been fixed. (BZ#1266849) BIND now supports GeoIP-based ACLs With this update, the BIND DNS server is able to use GeoIP databases. The feature enables administrators to implement client access control lists (ACL), based on client's geographical location. (BZ#1220594) The BIND server now supports CAA records Certification Authority Authorization (CAA) support has been added to the Berkeley Internet Name Domain (BIND) server. Users can now restrict Certification Authorities by specifying the DNS record. (BZ#1306610) The Unbound DNS validating resolver now supports ECDSA cipher for DNSSEC This update enables the ECDSA cipher in the Unbound DNS validating resolver. 
As a result, the DNS resolver is now able to validate DNS responses signed using DNSSEC with the ECDSA algorithm. (BZ# 1245250 ) tomcat rebased to version 7.0.69 The tomcat packages have been rebased to version 7.0.69. Notable changes include: Resolved numerous bugs and vulnerabilities Added the HSTS and VersionLoggerListener features Resolved the NoSuchElementException bug outlined in BZ#1311622 (BZ# 1287928 ) servicelog rebased to version 1.1.14 The servicelog packages have been upgraded to upstream version 1.1.14, which provides a number of bug fixes and enhancements over the previous version. (BZ#1182028) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new_features_servers_and_services
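To illustrate the CAA support noted above, the record can be queried from any client; the domain and CA name are illustrative, and the zone-file line is shown as a comment:
# Query the CAA records that restrict which CAs may issue certificates for a domain.
dig +short CAA example.com
# Example zone-file record on the authoritative server:
# example.com. IN CAA 0 issue "ca.example.net"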
Chapter 3. Managing Organizations | Chapter 3. Managing Organizations Organizations divide Red Hat Satellite resources into logical groups based on ownership, purpose, content, security level, or other divisions. You can create and manage multiple organizations through Red Hat Satellite, then divide and assign your Red Hat subscriptions to each individual organization. This provides a method of managing the content of several individual organizations under one management system. Here are some examples of organization management: Single Organization A small business with a simple system administration chain. In this case, you can create a single organization for the business and assign content to it. Multiple Organizations A large company that owns several smaller business units. For example, a company with separate system administration and software development groups. In this case, you can create organizations for the company and each of the business units it owns. This keeps the system infrastructure for each separate. You can then assign content to each organization based on its needs. External Organizations A company that manages external systems for other organizations. For example, a company offering cloud computing and web hosting resources to customers. In this case, you can create an organization for the company's own system infrastructure and then an organization for each external business. You can then assign content to each organization where necessary. A default installation of Red Hat Satellite has a default organization called Default_Organization . New Users If a new user is not assigned a default organization, their access is limited. To grant systems rights to users, assign them to a default organization. The next time the user logs on to Satellite, the user's account has the correct system rights. 3.1. Creating an Organization Use this procedure to create an organization. To use the CLI instead of the Satellite web UI, see CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Organizations . Click New Organization . In the Name field, enter a name for the organization. In the Label field, enter a unique identifier for the organization. This is used for creating and mapping certain assets, such as directories for content storage. Use letters, numbers, underscores, and dashes, but no spaces. Optional: in the Description field, enter a description for the organization. Click Submit . If you have hosts with no organization assigned, select the hosts that you want to add to the organization, then click Proceed to Edit . In the Edit page, assign the infrastructure resources that you want to add to the organization. This includes networking resources, installation media, kickstart templates, and other parameters. You can return to this page at any time by navigating to Administer > Organizations and then selecting an organization to edit. Click Submit . CLI procedure To create an organization, enter the following command: Optional: To edit an organization, enter the hammer organization update command. For example, the following command assigns a compute resource to the organization: 3.2. Setting the Organization Context An organization context defines the organization to use for a host and its associated resources. Procedure The organization menu is the first menu item in the menu bar, on the upper left of the Satellite web UI. If you have not selected a current organization, the menu says Any Organization . 
Click the Any Organization button and select the organization to use. CLI procedure While using the CLI, include either --organization " My_Organization " or --organization-label " My_Organization_Label " as an option. For example: This command outputs subscriptions allocated for the My_Organization . 3.3. Creating an Organization Debug Certificate If you require a debug certificate for your organization, use the following procedure. Procedure In the Satellite web UI, navigate to Administer > Organizations . Select an organization that you want to generate a debug certificate for. Click Generate and Download . Save the certificate file in a secure location. Debug Certificates for Provisioning Templates Debug Certificates are automatically generated for provisioning template downloads if they do not already exist in the organization for which they are being downloaded. 3.4. Browsing Repository Content Using an Organization Debug Certificate You can view an organization's repository content using a web browser or using the API if you have a debug certificate for that organization. Prerequisites Create and download an organization certificate as described in Section 3.3, "Creating an Organization Debug Certificate" . Open the X.509 certificate, for example, for the default organization: Copy the contents of the file from -----BEGIN RSA PRIVATE KEY----- to -----END RSA PRIVATE KEY----- , into a key.pem file. Copy the contents of the file from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- , into a cert.pem file. Procedure To use a browser, you must first convert the X.509 certificate to a format your browser supports and then import the certificate. For Firefox Users To use an organization debug certificate in Firefox, complete the following steps: To create a PKCS12 format certificate, enter the following command: In the Firefox browser, navigate to Edit > Preferences > Advanced Tab . Select View Certificates , and click the Your Certificates tab. Click Import and select the .pfx file to load. Enter the following URL in the address bar to browse the accessible paths for all the repositories and check their contents: Pulp uses the organization label, therefore, you must enter the organization label into the URL. For CURL Users To use the organization debug certificate with CURL, enter the following command: Ensure that the paths to cert.pem and key.pem are the correct absolute paths otherwise the command fails silently. 3.5. Deleting an Organization You can delete an organization if the organization is not associated with any life cycle environments or host groups. If there are any life cycle environments or host groups associated with the organization you are about to delete, remove them by navigating to Administer > Organizations and clicking the relevant organization. Do not delete the default organization created during installation because the default organization is a placeholder for any unassociated hosts in the Satellite environment. There must be at least one organization in the environment at any given time. Procedure In the Satellite web UI, navigate to Administer > Organizations . From the list to the right of the name of the organization you want to delete, select Delete . Click OK to delete the organization. CLI procedure Enter the following command to retrieve the ID of the organization that you want to delete: From the output, note the ID of the organization that you want to delete. Enter the following command to delete an organization: | [
"hammer organization create --name \" My_Organization \" --label \" My_Organization_Label \" --description \" My_Organization_Description \"",
"hammer organization update --name \" My_Organization \" --compute-resource-ids 1",
"hammer subscription list --organization \" My_Organization \"",
"vi 'Default Organization-key-cert.pem'",
"openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in cert.pem -inkey key.pem -out My_Organization_Label .pfx -name My_Organization",
"https:// satellite.example.com /pulp/content/",
"curl -k --cert cert.pem --key key.pem https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/content/dist/rhel/server/7/7Server/x86_64/os/",
"hammer organization list",
"hammer organization delete --id Organization_ID"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/managing_organizations_content-management |
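Following the note above about assigning new users a default organization, a sketch using hammer; the login name is illustrative and the exact option names can vary between hammer versions, so treat them as assumptions:
# Add a user to an organization.
hammer organization add-user --name "My_Organization" --user jsmith
# Set the user's default organization so new logins receive system rights.
hammer user update --login jsmith --default-organization "My_Organization"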
Chapter 6. Backing up IdM servers using Ansible playbooks | Chapter 6. Backing up IdM servers using Ansible playbooks Using the ipabackup Ansible role, you can automate backing up an IdM server and transferring backup files between servers and your Ansible controller. 6.1. Preparing your Ansible control node for managing IdM As a system administrator managing Identity Management (IdM), when working with Red Hat Ansible Engine, it is good practice to do the following: Create a subdirectory dedicated to Ansible playbooks in your home directory, for example ~/MyPlaybooks . Copy and adapt sample Ansible playbooks from the /usr/share/doc/ansible-freeipa/* and /usr/share/doc/rhel-system-roles/* directories and subdirectories into your ~/MyPlaybooks directory. Include your inventory file in your ~/MyPlaybooks directory. By following this practice, you can find all your playbooks in one place and you can run your playbooks without invoking root privileges. Note You only need root privileges on the managed nodes to execute the ipaserver , ipareplica , ipaclient , ipabackup , ipasmartcard_server and ipasmartcard_client ansible-freeipa roles. These roles require privileged access to directories and the dnf software package manager. Follow this procedure to create the ~/MyPlaybooks directory and configure it so that you can use it to store and run Ansible playbooks. Prerequisites You have installed an IdM server on your managed nodes, server.idm.example.com and replica.idm.example.com . You have configured DNS and networking so you can log in to the managed nodes, server.idm.example.com and replica.idm.example.com , directly from the control node. You know the IdM admin password. Procedure Create a directory for your Ansible configuration and playbooks in your home directory: Change into the ~/MyPlaybooks/ directory: Create the ~/MyPlaybooks/ansible.cfg file with the following content: Create the ~/MyPlaybooks/inventory file with the following content: This configuration defines two host groups, eu and us , for hosts in these locations. Additionally, this configuration defines the ipaserver host group, which contains all hosts from the eu and us groups. Optional: Create an SSH public and private key. To simplify access in your test environment, do not set a password on the private key: Copy the SSH public key to the IdM admin account on each managed node: You must enter the IdM admin password when you enter these commands. Additional resources Installing an Identity Management server using an Ansible playbook How to build your inventory 6.2. Using Ansible to create a backup of an IdM server You can use the ipabackup role in an Ansible playbook to create a backup of an IdM server and store it on the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. 
Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the backup-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the backup-my-server.yml Ansible playbook file for editing. Adapt the file by setting the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group: Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Verification Log into the IdM server that you have backed up. Verify that the backup is in the /var/lib/ipa/backup directory. Additional resources For more sample Ansible playbooks that use the ipabackup role, see: The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.3. Using Ansible to create a backup of an IdM server on your Ansible controller You can use the ipabackup role in an Ansible playbook to create a backup of an IdM server and automatically transfer it on your Ansible controller. Your backup file name begins with the host name of the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure To store the backups, create a subdirectory in your home directory on the Ansible controller. Navigate to the ~/MyPlaybooks/ directory: Make a copy of the backup-server-to-controller.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the backup-my-server-to-my-controller.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Optional: To maintain a copy of the backup on the IdM server, uncomment the following line: By default, backups are stored in the present working directory of the Ansible controller. To specify the backup directory you created in Step 1, add the ipabackup_controller_path variable and set it to the /home/user/ipabackups directory. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Verification Verify that the backup is in the /home/user/ipabackups directory of your Ansible controller: Additional resources For more sample Ansible playbooks that use the ipabackup role, see: The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.4. Using Ansible to copy a backup of an IdM server to your Ansible controller You can use an Ansible playbook to copy a backup of an IdM server from the IdM server to your Ansible controller. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. 
The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure To store the backups, create a subdirectory in your home directory on the Ansible controller. Navigate to the ~/MyPlaybooks/ directory: Make a copy of the copy-backup-from-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the copy-my-backup-from-my-server-to-my-controller.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup on your IdM server to copy to your Ansible controller. By default, backups are stored in the present working directory of the Ansible controller. To specify the directory you created in Step 1, add the ipabackup_controller_path variable and set it to the /home/user/ipabackups directory. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Note To copy all IdM backups to your controller, set the ipabackup_name variable in the Ansible playbook to all : For an example, see the copy-all-backups-from-server.yml Ansible playbook in the /usr/share/doc/ansible-freeipa/playbooks directory. Verification Verify your backup is in the /home/user/ipabackups directory on your Ansible controller: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.5. Using Ansible to copy a backup of an IdM server from your Ansible controller to the IdM server You can use an Ansible playbook to copy a backup of an IdM server from your Ansible controller to the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the copy-backup-from-controller.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the copy-my-backup-from-my-controller-to-my-server.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup on your Ansible controller to copy to the IdM server. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.6. Using Ansible to remove a backup from an IdM server You can use an Ansible playbook to remove a backup from an IdM server. 
Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the remove-backup-from-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the remove-backup-from-my-server.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup to remove from your IdM server. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Note To remove all IdM backups from the IdM server, set the ipabackup_name variable in the Ansible playbook to all : For an example, see the remove-all-backups-from-server.yml Ansible playbook in the /usr/share/doc/ansible-freeipa/playbooks directory. Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. | [
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True",
"[ipaserver] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password=SomeADMINpassword [ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword",
"ssh-keygen",
"ssh-copy-id [email protected] ssh-copy-id [email protected]",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/backup-server.yml backup-my-server.yml",
"--- - name: Playbook to backup IPA server hosts: ipaserver become: true roles: - role: ipabackup state: present",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory backup-my-server.yml",
"ls /var/lib/ipa/backup/ ipa-full-2021-04-30-13-12-00",
"mkdir ~/ipabackups",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/backup-server-to-controller.yml backup-my-server-to-my-controller.yml",
"ipabackup_keep_on_server: true",
"--- - name: Playbook to backup IPA server to controller hosts: ipaserver become: true vars: ipabackup_to_controller: true # ipabackup_keep_on_server: true ipabackup_controller_path: /home/user/ipabackups roles: - role: ipabackup state: present",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory backup-my-server-to-my-controller.yml",
"[user@controller ~]USD ls /home/user/ipabackups server.idm.example.com_ipa-full-2021-04-30-13-12-00",
"mkdir ~/ipabackups",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/copy-backup-from-server.yml copy-backup-from-my-server-to-my-controller.yml",
"--- - name: Playbook to copy backup from IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 ipabackup_to_controller: true ipabackup_controller_path: /home/user/ipabackups roles: - role: ipabackup state: present",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory copy-backup-from-my-server-to-my-controller.yml",
"vars: ipabackup_name: all ipabackup_to_controller: true",
"[user@controller ~]USD ls /home/user/ipabackups server.idm.example.com_ipa-full-2021-04-30-13-12-00",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/copy-backup-from-controller.yml copy-backup-from-my-controller-to-my-server.yml",
"--- - name: Playbook to copy a backup from controller to the IPA server hosts: ipaserver become: true vars: ipabackup_name: server.idm.example.com_ipa-full-2021-04-30-13-12-00 ipabackup_from_controller: true roles: - role: ipabackup state: copied",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory copy-backup-from-my-controller-to-my-server.yml",
"cd ~/MyPlaybooks/",
"cp /usr/share/doc/ansible-freeipa/playbooks/remove-backup-from-server.yml remove-backup-from-my-server.yml",
"--- - name: Playbook to remove backup from IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 roles: - role: ipabackup state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory remove-backup-from-my-server.yml",
"vars: ipabackup_name: all"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/preparing_for_disaster_recovery_with_identity_management/assembly_backing-up-idm-servers-using-ansible-playbooks_preparing-for-disaster-recovery |
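To run the backup playbook from Section 6.3 unattended, one option is a cron entry on the Ansible controller; the schedule and log path are assumptions, while the paths and playbook name follow this chapter:
# Run the backup playbook every night at 02:00.
0 2 * * * cd /home/user/MyPlaybooks && ansible-playbook --vault-password-file=password_file -i inventory backup-my-server-to-my-controller.yml >> /var/log/ipabackup-ansible.log 2>&1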
Chapter 2. Creating and managing TLS keys and certificates | Chapter 2. Creating and managing TLS keys and certificates You can encrypt communication transmitted between two systems by using the TLS (Transport Layer Security) protocol. This standard uses asymmetric cryptography with private and public keys, digital signatures, and certificates. 2.1. TLS certificates TLS (Transport Layer Security) is a protocol that enables client-server applications to pass information securely. TLS uses a system of public and private key pairs to encrypt communication transmitted between clients and servers. TLS is the successor protocol to SSL (Secure Sockets Layer). TLS uses X.509 certificates to bind identities, such as hostnames or organizations, to public keys using digital signatures. X.509 is a standard that defines the format of public key certificates. Authentication of a secure application depends on the integrity of the public key value in the application's certificate. If an attacker replaces the public key with its own public key, it can impersonate the true application and gain access to secure data. To prevent this type of attack, all certificates must be signed by a certification authority (CA). A CA is a trusted node that confirms the integrity of the public key value in a certificate. A CA signs a public key by adding its digital signature and issues a certificate. A digital signature is a message encoded with the CA's private key. The CA's public key is made available to applications by distributing the certificate of the CA. Applications verify that certificates are validly signed by decoding the CA's digital signature with the CA's public key. To have a certificate signed by a CA, you must generate a public key, and send it to a CA for signing. This is referred to as a certificate signing request (CSR). A CSR contains also a distinguished name (DN) for the certificate. The DN information that you can provide for either type of certificate can include a two-letter country code for your country, a full name of your state or province, your city or town, a name of your organization, your email address, and it can also be empty. Many current commercial CAs prefer the Subject Alternative Name extension and ignore DNs in CSRs. RHEL provides two main toolkits for working with TLS certificates: GnuTLS and OpenSSL. You can create, read, sign, and verify certificates using the openssl utility from the openssl package. The certtool utility provided by the gnutls-utils package can do the same operations using a different syntax and above all a different set of libraries in the back end. Additional resources RFC 5280: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile openssl(1) , x509(1) , ca(1) , req(1) , and certtool(1) man pages on your system 2.2. Creating a private CA using OpenSSL Private certificate authorities (CA) are useful when your scenario requires verifying entities within your internal network. For example, use a private CA when you create a VPN gateway with authentication based on certificates signed by a CA under your control or when you do not want to pay a commercial CA. To sign certificates in such use cases, the private CA uses a self-signed certificate. Prerequisites You have root privileges or permissions to enter administrative commands with sudo . Commands that require such privileges are marked with # . Procedure Generate a private key for your CA. 
For example, the following command creates a 256-bit Elliptic Curve Digital Signature Algorithm (ECDSA) key: The time for the key-generation process depends on the hardware and entropy of the host, the selected algorithm, and the length of the key. Create a certificate signed using the private key generated in the previous command: The generated ca.crt file is a self-signed CA certificate that you can use to sign other certificates for ten years. In the case of a private CA, you can replace <Example CA> with any string as the common name (CN). Set secure permissions on the private key of your CA, for example: steps To use a self-signed CA certificate as a trust anchor on client systems, copy the CA certificate to the client and add it to the clients' system-wide truststore as root : See Chapter 3, Using shared system certificates for more information. Verification Create a certificate signing request (CSR), and use your CA to sign the request. The CA must successfully create a certificate based on the CSR, for example: See Section 2.5, "Using a private CA to issue certificates for CSRs with OpenSSL" for more information. Display the basic information about your self-signed CA: Verify the consistency of the private key: Additional resources openssl(1) , ca(1) , genpkey(1) , x509(1) , and req(1) man pages on your system 2.3. Creating a private key and a CSR for a TLS server certificate using OpenSSL You can use TLS-encrypted communication channels only if you have a valid TLS certificate from a certificate authority (CA). To obtain the certificate, you must create a private key and a certificate signing request (CSR) for your server first. Procedure Generate a private key on your server system, for example: Optional: Use a text editor of your choice to prepare a configuration file that simplifies creating your CSR, for example: The extendedKeyUsage = serverAuth option limits the use of a certificate. Create a CSR using the private key you created previously: If you omit the -config option, the req utility prompts you for additional information, for example: steps Submit the CSR to a CA of your choice for signing. Alternatively, for an internal use scenario within a trusted network, use your private CA for signing. 
See Section 2.5, "Using a private CA to issue certificates for CSRs with OpenSSL" for more information. Verification Check that the human-readable parts of the certificate match your requirements, for example: Additional resources openssl(1) , x509(1) , genpkey(1) , req(1) , and config(5) man pages on your system 2.4. Creating a private key and a CSR for a TLS client certificate using OpenSSL You can use TLS-encrypted communication channels only if you have a valid TLS certificate from a certificate authority (CA). To obtain the certificate, you must create a private key and a certificate signing request (CSR) for your client first. Procedure Generate a private key on your client system, for example: Optional: Use a text editor of your choice to prepare a configuration file that simplifies creating your CSR, for example: The extendedKeyUsage = clientAuth option limits the use of a certificate. Create a CSR using the private key you created previously: If you omit the -config option, the req utility prompts you for additional information, for example: Next steps Submit the CSR to a CA of your choice for signing. Alternatively, for an internal use scenario within a trusted network, use your private CA for signing. See Section 2.5, "Using a private CA to issue certificates for CSRs with OpenSSL" for more information. Verification Check that the human-readable parts of the certificate match your requirements, for example: Additional resources openssl(1) , x509(1) , genpkey(1) , req(1) , and config(5) man pages on your system 2.5. Using a private CA to issue certificates for CSRs with OpenSSL To enable systems to establish a TLS-encrypted communication channel, a certificate authority (CA) must provide valid certificates to them. If you have a private CA, you can create the requested certificates by signing certificate signing requests (CSRs) from the systems. Prerequisites You have already configured a private CA. See Section 2.2, "Creating a private CA using OpenSSL" for more information. You have a file containing a CSR. You can find an example of creating the CSR in Section 2.3, "Creating a private key and a CSR for a TLS server certificate using OpenSSL" . Procedure Optional: Use a text editor of your choice to prepare an OpenSSL configuration file for adding extensions to certificates, for example: Note that the example illustrates only the principle and openssl does not add all extensions to the certificate automatically. You must either add the extensions you require to the CNF file or append them as parameters of the openssl command. Use the x509 utility to create a certificate based on a CSR, for example: To increase security, delete the serial-number file before you create another certificate from a CSR. This way, you ensure that the serial number is always random. If you omit the -CAserial option for specifying a custom file name, the serial-number file name is the same as the file name of the certificate, but its extension is replaced with the .srl extension ( server-cert.srl in the example). Additional resources openssl(1) , ca(1) , and x509(1) man pages on your system 2.6. Creating a private CA using GnuTLS Private certificate authorities (CA) are useful when your scenario requires verifying entities within your internal network. For example, use a private CA when you create a VPN gateway with authentication based on certificates signed by a CA under your control or when you do not want to pay a commercial CA. To sign certificates in such use cases, the private CA uses a self-signed certificate. Prerequisites You have root privileges or permissions to enter administrative commands with sudo . Commands that require such privileges are marked with # . You have already installed GnuTLS on your system. If you have not, you can use this command: Procedure Generate a private key for your CA. For example, the following command creates a 256-bit ECDSA (Elliptic Curve Digital Signature Algorithm) key: The time for the key-generation process depends on the hardware and entropy of the host, the selected algorithm, and the length of the key. Create a template file for a certificate. Create a file with a text editor of your choice, for example: Edit the file to include the necessary certification details: Create a certificate signed using the private key generated in step 1: The generated <ca.crt> file is a self-signed CA certificate that you can use to sign other certificates for one year. The <ca.crt> file is the public key (certificate). The loaded file <ca.key> is the private key. You should keep this file in a safe location.
Set secure permissions on the private key of your CA, for example: Next steps To use a self-signed CA certificate as a trust anchor on client systems, copy the CA certificate to the client and add it to the clients' system-wide truststore as root : See Chapter 3, Using shared system certificates for more information. Verification Display the basic information about your self-signed CA: Create a certificate signing request (CSR), and use your CA to sign the request. The CA must successfully create a certificate based on the CSR, for example: Generate a private key for the test certificate: Open a new configuration file in a text editor of your choice, for example: Edit the file to include the necessary certification details: Generate a request with the previously created private key: Generate the certificate and sign it with the private key of the CA: Additional resources certtool(1) and trust(1) man pages on your system 2.7. Creating a private key and a CSR for a TLS server certificate using GnuTLS To obtain the certificate, you must create a private key and a certificate signing request (CSR) for your server first. Procedure Generate a private key on your server system, for example: Optional: Use a text editor of your choice to prepare a configuration file that simplifies creating your CSR, for example: Create a CSR using the private key you created previously: If you omit the --template option, the certtool utility prompts you for additional information, for example: Next steps Submit the CSR to a CA of your choice for signing. Alternatively, for an internal use scenario within a trusted network, use your private CA for signing. See Section 2.9, "Using a private CA to issue certificates for CSRs with GnuTLS" for more information. Verification After you obtain the requested certificate from the CA, check that the human-readable parts of the certificate match your requirements, for example: Additional resources certtool(1) man page on your system 2.8. Creating a private key and a CSR for a TLS client certificate using GnuTLS To obtain the certificate, you must create a private key and a certificate signing request (CSR) for your client first. Procedure Generate a private key on your client system, for example: Optional: Use a text editor of your choice to prepare a configuration file that simplifies creating your CSR, for example: Create a CSR using the private key you created previously: If you omit the --template option, the certtool utility prompts you for additional information, for example: Next steps Submit the CSR to a CA of your choice for signing. Alternatively, for an internal use scenario within a trusted network, use your private CA for signing. See Section 2.9, "Using a private CA to issue certificates for CSRs with GnuTLS" for more information. Verification Check that the human-readable parts of the certificate match your requirements, for example: Additional resources certtool(1) man page on your system 2.9. Using a private CA to issue certificates for CSRs with GnuTLS To enable systems to establish a TLS-encrypted communication channel, a certificate authority (CA) must provide valid certificates to them. If you have a private CA, you can create the requested certificates by signing certificate signing requests (CSRs) from the systems. Prerequisites You have already configured a private CA. See Section 2.6, "Creating a private CA using GnuTLS" for more information. You have a file containing a CSR.
You can find an example of creating the CSR in Section 2.7, "Creating a private key and a CSR for a TLS server certificate using GnuTLS" . Procedure Optional: Use a text editor of your choice to prepare a GnuTLS configuration file for adding extensions to certificates, for example: Use the certtool utility to create a certificate based on a CSR, for example: Additional resources certtool(1) man page on your system | [
"openssl genpkey -algorithm ec -pkeyopt ec_paramgen_curve:P-256 -out <ca.key>",
"openssl req -key <ca.key> -new -x509 -days 3650 -addext keyUsage=critical,keyCertSign,cRLSign -subj \"/CN= <Example CA> \" -out <ca.crt>",
"chown <root> : <root> <ca.key> chmod 600 <ca.key>",
"trust anchor <ca.crt>",
"openssl x509 -req -in <client-cert.csr> -CA <ca.crt> -CAkey <ca.key> -CAcreateserial -days 365 -extfile <openssl.cnf> -extensions <client-cert> -out <client-cert.crt> Signature ok subject=C = US, O = Example Organization, CN = server.example.com Getting CA Private Key",
"openssl x509 -in <ca.crt> -text -noout Certificate: ... X509v3 extensions: ... X509v3 Basic Constraints: critical CA:TRUE X509v3 Key Usage: critical Certificate Sign, CRL Sign ...",
"openssl pkey -check -in <ca.key> Key is valid -----BEGIN PRIVATE KEY----- MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgcagSaTEBn74xZAwO 18wRpXoCVC9vcPki7WlT+gnmCI+hRANCAARb9NxIvkaVjFhOoZbGp/HtIQxbM78E lwbDP0BI624xBJ8gK68ogSaq2x4SdezFdV1gNeKScDcU+Pj2pELldmdF -----END PRIVATE KEY-----",
"openssl genpkey -algorithm ec -pkeyopt ec_paramgen_curve:P-256 -out <server-private.key>",
"vim <example_server.cnf> [server-cert] keyUsage = critical, digitalSignature, keyEncipherment, keyAgreement extendedKeyUsage = serverAuth subjectAltName = @alt_name [req] distinguished_name = dn prompt = no [dn] C = <US> O = <Example Organization> CN = <server.example.com> [alt_name] DNS.1 = <example.com> DNS.2 = <server.example.com> IP.1 = <192.168.0.1> IP.2 = <::1> IP.3 = <127.0.0.1>",
"openssl req -key <server-private.key> -config <example_server.cnf> -new -out <server-cert.csr>",
"You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [XX]: <US> State or Province Name (full name) []: <Washington> Locality Name (eg, city) [Default City]: <Seattle> Organization Name (eg, company) [Default Company Ltd]: <Example Organization> Organizational Unit Name (eg, section) []: Common Name (eg, your name or your server's hostname) []: <server.example.com> Email Address []: <[email protected]>",
"openssl x509 -text -noout -in <server-cert.crt> Certificate: ... Issuer: CN = Example CA Validity Not Before: Feb 2 20:27:29 2023 GMT Not After : Feb 2 20:27:29 2024 GMT Subject: C = US, O = Example Organization, CN = server.example.com Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (256 bit) ... X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment, Key Agreement X509v3 Extended Key Usage: TLS Web Server Authentication X509v3 Subject Alternative Name: DNS:example.com, DNS:server.example.com, IP Address:192.168.0.1, IP ...",
"openssl genpkey -algorithm ec -pkeyopt ec_paramgen_curve:P-256 -out <client-private.key>",
"vim <example_client.cnf> [client-cert] keyUsage = critical, digitalSignature, keyEncipherment extendedKeyUsage = clientAuth subjectAltName = @alt_name [req] distinguished_name = dn prompt = no [dn] CN = <client.example.com> [clnt_alt_name] email= <[email protected]>",
"openssl req -key <client-private.key> -config <example_client.cnf> -new -out <client-cert.csr>",
"You are about to be asked to enter information that will be incorporated into your certificate request. ... Common Name (eg, your name or your server's hostname) []: <client.example.com> Email Address []: <[email protected]>",
"openssl x509 -text -noout -in <client-cert.crt> Certificate: ... X509v3 Extended Key Usage: TLS Web Client Authentication X509v3 Subject Alternative Name: email:[email protected] ...",
"vim <openssl.cnf> [server-cert] extendedKeyUsage = serverAuth [client-cert] extendedKeyUsage = clientAuth",
"openssl x509 -req -in <server-cert.csr> -CA <ca.crt> -CAkey <ca.key> -CAcreateserial -days 365 -extfile <openssl.cnf> -extensions <server-cert> -out <server-cert.crt> Signature ok subject=C = US, O = Example Organization, CN = server.example.com Getting CA Private Key",
"yum install gnutls-utils",
"certtool --generate-privkey --sec-param High --key-type=ecdsa --outfile <ca.key>",
"vi <ca.cfg>",
"organization = \"Example Inc.\" state = \"Example\" country = EX cn = \"Example CA\" serial = 007 expiration_days = 365 ca cert_signing_key crl_signing_key",
"certtool --generate-self-signed --load-privkey <ca.key> --template <ca.cfg> --outfile <ca.crt >",
"chown <root> : <root> <ca.key> chmod 600 <ca.key>",
"trust anchor <ca.crt>",
"certtool --certificate-info --infile <ca.crt> Certificate: ... X509v3 extensions: ... X509v3 Basic Constraints: critical CA:TRUE X509v3 Key Usage: critical Certificate Sign, CRL Sign",
"certtool --generate-privkey --outfile <example-server.key>",
"vi <example-server.cfg>",
"signing_key encryption_key key_agreement tls_www_server country = \"US\" organization = \"Example Organization\" cn = \"server.example.com\" dns_name = \"example.com\" dns_name = \"server.example.com\" ip_address = \"192.168.0.1\" ip_address = \"::1\" ip_address = \"127.0.0.1\"",
"certtool --generate-request --load-privkey <example-server.key> --template <example-server.cfg> --outfile <example-server.crq>",
"certtool --generate-certificate --load-request <example-server.crq> --load-ca-certificate <ca.crt> --load-ca-privkey <ca.key> --outfile <example-server.crt>",
"certtool --generate-privkey --sec-param High --outfile <example-server.key>",
"vim <example_server.cnf> signing_key encryption_key key_agreement tls_www_server country = \"US\" organization = \"Example Organization\" cn = \"server.example.com\" dns_name = \"example.com\" dns_name = \"server.example.com\" ip_address = \"192.168.0.1\" ip_address = \"::1\" ip_address = \"127.0.0.1\"",
"certtool --generate-request --template <example-server.cfg> --load-privkey <example-server.key> --outfile <example-server.crq>",
"You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Generating a PKCS #10 certificate request Country name (2 chars): <US> State or province name: <Washington> Locality name: <Seattle> Organization name: <Example Organization> Organizational unit name: Common name: <server.example.com>",
"certtool --certificate-info --infile <example-server.crt> Certificate: ... Issuer: CN = Example CA Validity Not Before: Feb 2 20:27:29 2023 GMT Not After : Feb 2 20:27:29 2024 GMT Subject: C = US, O = Example Organization, CN = server.example.com Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (256 bit) ... X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment, Key Agreement X509v3 Extended Key Usage: TLS Web Server Authentication X509v3 Subject Alternative Name: DNS:example.com, DNS:server.example.com, IP Address:192.168.0.1, IP ...",
"certtool --generate-privkey --sec-param High --outfile <example-client.key>",
"vim <example_client.cnf> signing_key encryption_key tls_www_client cn = \"client.example.com\" email = \"[email protected]\"",
"certtool --generate-request --template <example-client.cfg> --load-privkey <example-client.key> --outfile <example-client.crq>",
"Generating a PKCS #10 certificate request Country name (2 chars): <US> State or province name: <Washington> Locality name: <Seattle> Organization name: <Example Organization> Organizational unit name: Common name: <server.example.com>",
"certtool --certificate-info --infile <example-client.crt> Certificate: ... X509v3 Extended Key Usage: TLS Web Client Authentication X509v3 Subject Alternative Name: email:[email protected] ...",
"vi <server-extensions.cfg> honor_crq_extensions ocsp_uri = \"http://ocsp.example.com\"",
"certtool --generate-certificate --load-request <example-server.crq> --load-ca-privkey <ca.key> --load-ca-certificate <ca.crt> --template <server-extensions.cfg> --outfile <example-server.crt>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/securing_networks/creating-and-managing-tls-keys-and-certificates_securing-networks |
Chapter 18. Logging | Chapter 18. Logging AMQ Broker uses the JBoss Logging framework to do its logging and is configurable via the <broker_instance_dir> /etc/logging.properties configuration file. This configuration file is a list of key-value pairs. You specify loggers in your broker configuration by including them in the loggers key of the logging.properties configuration file, as shown below. loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message,org.apache.activemq.audit.resource The loggers available in AMQ Broker are shown in the following table. Logger Description org.jboss.logging The root logger. Logs any calls not handled by the other broker loggers. org.apache.activemq.artemis.core.server Logs the broker core org.apache.activemq.artemis.utils Logs utility calls org.apache.activemq.artemis.journal Logs Journal calls org.apache.activemq.artemis.jms Logs JMS calls org.apache.activemq.artemis.integration.bootstrap Logs bootstrap calls org.apache.activemq.audit.base Logs access to all JMX object methods org.apache.activemq.audit.message Logs message operations such as production, consumption, and browsing of messages org.apache.activemq.audit.resource Logs authentication events, creation or deletion of broker resources from JMX or the AMQ Broker management console, and browsing of messages in the management console There are also two default logging handlers configured in the logger.handlers key, as shown below. logger.handlers=FILE,CONSOLE logger.handlers=FILE The logger outputs log entries to a file. logger.handlers=CONSOLE The logger outputs log entries to the AMQ Broker management console. 18.1. Changing the logging level The default logging level for all loggers is INFO and is configured on the root logger, as shown below. logger.level=INFO You can configure the logging level for all other loggers individually, as shown below. logger.org.apache.activemq.artemis.core.server.level=INFO logger.org.apache.activemq.artemis.journal.level=INFO logger.org.apache.activemq.artemis.utils.level=INFO logger.org.apache.activemq.artemis.jms.level=INFO logger.org.apache.activemq.artemis.integration.bootstrap.level=INFO logger.org.apache.activemq.audit.base.level=INFO logger.org.apache.activemq.audit.message.level=INFO logger.org.apache.activemq.audit.resource.level=INFO The available logging levels are described in the following table. The logging levels are listed in ascending order, from the least detailed to the most. Level Description FATAL Use the FATAL logging level for events that indicate a critical service failure. If a service issues a FATAL error, it is completely unable to execute requests of any kind. ERROR Use the ERROR logging level for events that indicate a disruption in a request or the ability to service a request. A service should have some capacity to continue to service requests in the presence of this level of error. WARN Use the WARN logging level for events that might indicate a non-critical service error. Resumable errors, or minor breaches in request expectations meet this description. The distinction between WARN and ERROR is one for an application developer to make. A simple criterion for making this distinction is whether the error would require a user to seek technical support. 
If an error would require technical support, set the logging level to ERROR. Otherwise, set the level to WARN. INFO Use the INFO logging level for service lifecycle events and other crucial related information. INFO-level messages for a given service category should clearly indicate what state the service is in. DEBUG Use the DEBUG logging level for log messages that convey extra information for lifecycle events. Use this logging level for developer-oriented information or in-depth information required for technical support. When the DEBUG logging level is enabled, the JBoss server log should not grow proportionally with the number of server requests. DEBUG- and INFO-level messages for a given service category should clearly indicate what state the service is in, as well as what broker resources it is using: ports, interfaces, log files, and so on. TRACE Use the TRACE logging level for log messages that are directly associated with request activity. Such messages should not be submitted to a logger unless the logger category priority threshold indicates that the message will be rendered. Use the Logger.isTraceEnabled() method to determine whether the category priority threshold is enabled. TRACE-level logging enables deep probing of the broker behavior when necessary. When the TRACE logging level is enabled, the number of messages in the JBoss server log grows to at least a * N , where N is the number of requests received by the broker, and a is some constant. The server log might grow to some power of N , depending on the request-handling layer being traced. Note INFO is the only available logging level for the logger.org.apache.activemq.audit.base , logger.org.apache.activemq.audit.message , and logger.org.apache.activemq.audit.resource audit loggers. The logging level specified for the root logger determines the most detailed logging level for all loggers, even if other loggers have more detailed logging levels specified in their configurations. For example, suppose org.apache.activemq.artemis.utils has a specified logging level of DEBUG , while the root logger, org.jboss.logging , has a specified logging level of WARN . In this case, both loggers use a logging level of WARN . 18.2. Enabling audit logging Three audit loggers are available for you to enable: a base audit logger, a message audit logger, and a resource audit logger. Base audit logger (org.apache.activemq.audit.base) Logs access to all JMX object methods, such as creation and deletion of addresses and queues. The log does not indicate whether these operations succeeded or failed. Message audit logger (org.apache.activemq.audit.message) Logs message-related broker operations, such as production, consumption, or browsing of messages. Resource audit logger (org.apache.activemq.audit.resource) Logs authentication success or failure from clients, routes, and the AMQ Broker management console. Also logs creation, update, or deletion of queues from either JMX or the management console, and browsing of messages in the management console. You can enable each audit logger independently of the others. By default, each audit logger is disabled (that is, the logging level is set to ERROR , which is not a valid logging level for the audit loggers). To enable one of the audit loggers, set the logging level to INFO . For example: logger.org.apache.activemq.audit.base.level=INFO Important The message audit logger runs on a performance-intensive path on the broker.
Enabling the logger might negatively affect the performance of the broker, particularly if the broker is running under a high messaging load. Use of the message audit logger is not recommended on messaging systems where high throughput is required. 18.3. Configuring console logging You can configure console logging using the following keys. handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler handler.CONSOLE.properties=autoFlush handler.CONSOLE.level=DEBUG handler.CONSOLE.autoFlush=true handler.CONSOLE.formatter=PATTERN Note handler.CONSOLE refers to the name given in the logger.handlers key. The console logging configuration options are described in the following table. Property Description name Handler name encoding Character encoding used by the handler level Logging level, specifying the message levels logged. Message levels lower than this value are discarded. formatter Defines a formatter. See Section 18.5, "Configuring the logging format" . autoflush Specifies whether to automatically flush the log after each write target Target of the console handler. The value can either be SYSTEM_OUT or SYSTEM_ERR. 18.4. Configuring file logging You can configure file logging using the following keys. handler.FILE=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler handler.FILE.level=DEBUG handler.FILE.properties=suffix,append,autoFlush,fileName handler.FILE.suffix=.yyyy-MM-dd handler.FILE.append=true handler.FILE.autoFlush=true handler.FILE.fileName=USD{artemis.instance}/log/artemis.log handler.FILE.formatter=PATTERN Note handler.FILE refers to the name given in the logger.handlers key. The file logging configuration options are described in the following table. Property Description name Handler name encoding Character encoding used by the handler level Logging level, specifying the message levels logged. Message levels lower than this value are discarded. formatter Defines a formatter. See Section 18.5, "Configuring the logging format" . autoflush Specifies whether to automatically flush the log after each write append Specifies whether to append to the target file file File description, consisting of the path and an optional relative-to path. 18.5. Configuring the logging format The formatter describes how log messages should be displayed. The following is the default configuration. formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter formatter.PATTERN.properties=pattern formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n In the preceding configuration, %s is the message and %E is the exception, if one exists. The format is the same as the Log4J format. A full description can be found in the Log4J documentation. 18.6. Client or embedded server logging If you want to enable logging on a client, you need to include the JBoss logging JARs in your client's class path. If you are using Maven, add the following dependencies: There are two properties that you need to set when starting your Java program. The first is to set the Log Manager to use the JBoss Log Manager. This is done by setting the `-Djava.util.logging.manager` property. For example: The second is to set the location of the logging.properties file to use. This is done by setting the -Dlogging.configuration property with a valid URL.
For example: The following is a typical logging.properties file for a client: # Root logger option loggers=org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms,org.apache.activemq.artemis.ra # Root logger level logger.level=INFO # ActiveMQ Artemis logger levels logger.org.apache.activemq.artemis.core.server.level=INFO logger.org.apache.activemq.artemis.utils.level=INFO logger.org.apache.activemq.artemis.jms.level=DEBUG # Root logger handlers logger.handlers=FILE,CONSOLE # Console handler configuration handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler handler.CONSOLE.properties=autoFlush handler.CONSOLE.level=FINE handler.CONSOLE.autoFlush=true handler.CONSOLE.formatter=PATTERN # File handler configuration handler.FILE=org.jboss.logmanager.handlers.FileHandler handler.FILE.level=FINE handler.FILE.properties=autoFlush,fileName handler.FILE.autoFlush=true handler.FILE.fileName=activemq.log handler.FILE.formatter=PATTERN # Formatter pattern configuration formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter formatter.PATTERN.properties=pattern formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n 18.7. AMQ Broker plugin support AMQ supports custom plugins. You can use plugins to log information about many different types of events that would otherwise only be available through debug logs. Multiple plugins can be registered, tied, and executed together. Plugins are executed in the order of their registration; the first plugin registered is always executed first. You can create custom plugins and implement them using the ActiveMQServerPlugin interface. This interface ensures that the plugin is on the classpath, and is registered with the broker. Because all the interface methods are implemented by default, you have to implement only the behavior that you require. 18.7.1. Adding plugins to the class path Add custom-created broker plugins to the broker runtime by adding the relevant .jar files to the <broker_instance_dir> /lib directory. If you are using an embedded system, place the .jar file under the regular class path of your embedded application. 18.7.2. Registering a plugin You must register a plugin by adding the broker-plugins element in the broker.xml configuration file. You can specify the plugin configuration value using the property child elements. These properties will be read and passed into the plugin's init(Map<String, String>) operation after the plugin has been instantiated. 18.7.3. Registering a plugin programmatically To register a plugin programmatically, use the registerBrokerPlugin() method and pass in a new instance of your plugin. The example below shows the registration of the UserPlugin plugin: 18.7.4. Logging specific events By default, AMQ Broker provides the LoggingActiveMQServerPlugin plugin to log specific broker events. The LoggingActiveMQServerPlugin plugin is commented-out by default and does not log any information. The following table describes each plugin property. Set a configuration property value to true to log events. Property Description LOG_CONNECTION_EVENTS Logs information when a connection is created or destroyed. LOG_SESSION_EVENTS Logs information when a session is created or closed. LOG_CONSUMER_EVENTS Logs information when a consumer is created or closed.
LOG_DELIVERING_EVENTS Logs information when a message is delivered to a consumer and when a message is acknowledged by a consumer. LOG_SENDING_EVENTS Logs information when a message has been sent to an address and when a message has been routed within the broker. LOG_INTERNAL_EVENTS Logs information when a queue is created or destroyed, when a message is expired, when a bridge is deployed, and when a critical failure occurs. LOG_ALL_EVENTS Logs information for all the above events. To configure the LoggingActiveMQServerPlugin plugin to log connection events, uncomment the <broker-plugins> section in the broker.xml configuration file. The value of all the events is set to true in the commented default example. When you have changed the configuration parameters inside the <broker-plugins> section, you must restart the broker to reload the configuration updates. These configuration changes are not reloaded based on the configuration-file-refresh-period setting. When the log level is set to INFO , an entry is logged after the event has occurred. If the log level is set to DEBUG , log entries are generated for both before and after the event, for example, beforeCreateConsumer() and afterCreateConsumer() . If the log level is set to DEBUG , the logger logs more information for a notification, when available. | [
"loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message,org.apache.activemq.audit.resource",
"logger.handlers=FILE,CONSOLE",
"logger.level=INFO",
"logger.org.apache.activemq.artemis.core.server.level=INFO logger.org.apache.activemq.artemis.journal.level=INFO logger.org.apache.activemq.artemis.utils.level=INFO logger.org.apache.activemq.artemis.jms.level=INFO logger.org.apache.activemq.artemis.integration.bootstrap.level=INFO logger.org.apache.activemq.audit.base.level=INFO logger.org.apache.activemq.audit.message.level=INFO logger.org.apache.activemq.audit.resource.level=INFO",
"logger.org.apache.activemq.audit.base.level=INFO",
"handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler handler.CONSOLE.properties=autoFlush handler.CONSOLE.level=DEBUG handler.CONSOLE.autoFlush=true handler.CONSOLE.formatter=PATTERN",
"handler.FILE=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler handler.FILE.level=DEBUG handler.FILE.properties=suffix,append,autoFlush,fileName handler.FILE.suffix=.yyyy-MM-dd handler.FILE.append=true handler.FILE.autoFlush=true handler.FILE.fileName=USD{artemis.instance}/log/artemis.log handler.FILE.formatter=PATTERN",
"formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter formatter.PATTERN.properties=pattern formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n",
"<dependency> <groupId>org.jboss.logmanager</groupId> <artifactId>jboss-logmanager</artifactId> <version>1.5.3.Final</version> </dependency> <dependency> <groupId>org.apache.activemq</groupId> <artifactId>artemis-core-client</artifactId> <version>1.0.0.Final</version> </dependency>",
"-Djava.util.logging.manager=org.jboss.logmanager.LogManager",
"-Dlogging.configuration=file:///home/user/projects/myProject/logging.properties",
"Root logger option loggers=org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms,org.apache.activemq.artemis.ra Root logger level logger.level=INFO ActiveMQ Artemis logger levels logger.org.apache.activemq.artemis.core.server.level=INFO logger.org.apache.activemq.artemis.utils.level=INFO logger.org.apache.activemq.artemis.jms.level=DEBUG Root logger handlers logger.handlers=FILE,CONSOLE Console handler configuration handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler handler.CONSOLE.properties=autoFlush handler.CONSOLE.level=FINE handler.CONSOLE.autoFlush=true handler.CONSOLE.formatter=PATTERN File handler configuration handler.FILE=org.jboss.logmanager.handlers.FileHandler handler.FILE.level=FINE handler.FILE.properties=autoFlush,fileName handler.FILE.autoFlush=true handler.FILE.fileName=activemq.log handler.FILE.formatter=PATTERN Formatter pattern configuration formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter formatter.PATTERN.properties=pattern formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n",
"<broker-plugins> <broker-plugin class-name=\" some.plugin.UserPlugin \"> <property key=\" property1 \" value=\" val_1 \" /> <property key=\" property2 \" value=\" val_2 \" /> </broker-plugin> </broker-plugins>",
"Configuration config = new ConfigurationImpl(); config.registerBrokerPlugin(new UserPlugin ());",
"<configuration ...> <!-- Uncomment the following if you want to use the Standard LoggingActiveMQServerPlugin plugin to log in events --> <broker-plugins> <broker-plugin class-name=\"org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin\"> <property key=\"LOG_ALL_EVENTS\" value=\"true\"/> <property key=\"LOG_CONNECTION_EVENTS\" value=\"true\"/> <property key=\"LOG_SESSION_EVENTS\" value=\"true\"/> <property key=\"LOG_CONSUMER_EVENTS\" value=\"true\"/> <property key=\"LOG_DELIVERING_EVENTS\" value=\"true\"/> <property key=\"LOG_SENDING_EVENTS\" value=\"true\"/> <property key=\"LOG_INTERNAL_EVENTS\" value=\"true\"/> </broker-plugin> </broker-plugins> </configuration>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/logging |
Getting started with the Red Hat Hybrid Cloud Console | Getting started with the Red Hat Hybrid Cloud Console Red Hat Hybrid Cloud Console 1-latest How to navigate the features and services of the Red Hat Hybrid Cloud Console Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html-single/getting_started_with_the_red_hat_hybrid_cloud_console/index |
Chapter 2. Working with pods | Chapter 2. Working with pods 2.1. Using pods A pod is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. 2.1.1. Understanding pods Pods are the rough equivalent of a machine instance (physical or virtual) to a Container. Each pod is allocated its own internal IP address, therefore owning its entire port space, and containers within pods can share their local storage and networking. Pods have a lifecycle; they are defined, then they are assigned to run on a node, then they run until their container(s) exit or they are removed for some other reason. Pods, depending on policy and exit code, might be removed after exiting, or can be retained to enable access to the logs of their containers. OpenShift Container Platform treats pods as largely immutable; changes cannot be made to a pod definition while it is running. OpenShift Container Platform implements changes by terminating an existing pod and recreating it with modified configuration, base image(s), or both. Pods are also treated as expendable, and do not maintain state when recreated. Therefore, pods should usually be managed by higher-level controllers, rather than directly by users. Note For the maximum number of pods per OpenShift Container Platform node host, see the Cluster Limits. Warning Bare pods that are not managed by a replication controller will not be rescheduled upon node disruption. 2.1.2. Example pod configurations OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. The following is an example definition of a pod from a Rails application.
It demonstrates many features of pods, most of which are discussed in other topics and thus only briefly mentioned here: Pod object definition (YAML) kind: Pod apiVersion: v1 metadata: name: example namespace: default selfLink: /api/v1/namespaces/default/pods/example uid: 5cc30063-0265780783bc resourceVersion: '165032' creationTimestamp: '2019-02-13T20:31:37Z' labels: app: hello-openshift 1 annotations: openshift.io/scc: anyuid spec: restartPolicy: Always 2 serviceAccountName: default imagePullSecrets: - name: default-dockercfg-5zrhb priority: 0 schedulerName: default-scheduler terminationGracePeriodSeconds: 30 nodeName: ip-10-0-140-16.us-east-2.compute.internal securityContext: 3 seLinuxOptions: level: 's0:c11,c10' containers: 4 - resources: {} terminationMessagePath: /dev/termination-log name: hello-openshift securityContext: capabilities: drop: - MKNOD procMount: Default ports: - containerPort: 8080 protocol: TCP imagePullPolicy: Always volumeMounts: 5 - name: default-token-wbqsl readOnly: true mountPath: /var/run/secrets/kubernetes.io/serviceaccount 6 terminationMessagePolicy: File image: registry.redhat.io/openshift4/ose-logging-eventrouter:v4.3 7 serviceAccount: default 8 volumes: 9 - name: default-token-wbqsl secret: secretName: default-token-wbqsl defaultMode: 420 dnsPolicy: ClusterFirst status: phase: Pending conditions: - type: Initialized status: 'True' lastProbeTime: null lastTransitionTime: '2019-02-13T20:31:37Z' - type: Ready status: 'False' lastProbeTime: null lastTransitionTime: '2019-02-13T20:31:37Z' reason: ContainersNotReady message: 'containers with unready status: [hello-openshift]' - type: ContainersReady status: 'False' lastProbeTime: null lastTransitionTime: '2019-02-13T20:31:37Z' reason: ContainersNotReady message: 'containers with unready status: [hello-openshift]' - type: PodScheduled status: 'True' lastProbeTime: null lastTransitionTime: '2019-02-13T20:31:37Z' hostIP: 10.0.140.16 startTime: '2019-02-13T20:31:37Z' containerStatuses: - name: hello-openshift state: waiting: reason: ContainerCreating lastState: {} ready: false restartCount: 0 image: openshift/hello-openshift imageID: '' qosClass: BestEffort 1 Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key/value format in the metadata hash. 2 The pod restart policy with possible values Always , OnFailure , and Never . The default value is Always . 3 OpenShift Container Platform defines a security context for containers which specifies whether they are allowed to run as privileged containers, run as a user of their choice, and more. The default context is very restrictive but administrators can modify this as needed. 4 containers specifies an array of one or more container definitions. 5 The container specifies where external storage volumes are mounted within the container. In this case, there is a volume for storing the access credentials that the registry needs for making requests against the OpenShift Container Platform API. 6 Specify the volumes to provide for the pod. Volumes mount at the specified path. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 7 Each container in the pod is instantiated from its own container image.
8 Pods making requests against the OpenShift Container Platform API is a common enough pattern that there is a serviceAccount field for specifying which service account user the pod should authenticate as when making the requests. This enables fine-grained access control for custom infrastructure components. 9 The pod defines storage volumes that are available to its container(s) to use. In this case, it provides an ephemeral volume for a secret volume containing the default service account tokens. If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . Note This pod definition does not include attributes that are filled by OpenShift Container Platform automatically after the pod is created and its lifecycle begins. The Kubernetes pod documentation has details about the functionality and purpose of pods. 2.1.3. Additional resources For more information on pods and storage see Understanding persistent storage and Understanding ephemeral storage . 2.2. Viewing pods As an administrator, you can view the pods in your cluster and determine the health of those pods and the cluster as a whole. 2.2.1. About pods OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance (physical or virtual) to a container. You can view a list of pods associated with a specific project or view usage statistics about pods. 2.2.2. Viewing pods in a project You can view a list of pods associated with the current project, including the number of replicas, the current status, the number of restarts, and the age of the pod. Procedure To view the pods in a project: Change to the project: USD oc project <project-name> Run the following command: USD oc get pods For example: USD oc get pods Example output NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m Add the -o wide flag to view the pod IP address and the node where the pod is located. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none> 2.2.3. Viewing pod usage statistics You can display usage statistics about pods, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: Run the following command: USD oc adm top pods For example: USD oc adm top pods -n openshift-console Example output NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi Run the following command to view the usage statistics for pods with labels: USD oc adm top pod --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != .
For example: USD oc adm top pod --selector='name=my-pod' 2.2.4. Viewing resource logs You can view the log for various resources in the OpenShift CLI ( oc ) and web console. Logs read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI ( oc ). Procedure (UI) In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . Procedure (CLI) View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. For example: USD oc logs ruby-58cd97df55-mww7r USD oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: USD oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: USD oc logs deployment/ruby The contents of log files are printed out. 2.3. Configuring an OpenShift Container Platform cluster for pods As an administrator, you can create and maintain an efficient cluster for pods. By keeping your cluster efficient, you can provide a better environment for your developers by controlling what a pod does when it exits, ensuring that the required number of pods is always running, determining when to restart pods designed to run only once, limiting the bandwidth available to pods, and keeping pods running during disruptions. 2.3.1. Configuring how pods behave after restart A pod restart policy determines how OpenShift Container Platform responds when Containers in that pod exit. The policy applies to all Containers in that pod. The possible values are: Always - Tries restarting a successfully exited Container on the pod continuously, with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. The default is Always . OnFailure - Tries restarting a failed Container on the pod with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. Never - Does not try to restart exited or failed Containers on the pod. Pods immediately fail and exit. After the pod is bound to a node, the pod will never be bound to another node. This means that a controller is necessary in order for a pod to survive node failure: Condition Controller Type Restart Policy Pods that are expected to terminate (such as batch computations) Job OnFailure or Never Pods that are expected to not terminate (such as web servers) Replication controller Always . Pods that must run one-per-machine Daemon set Any If a Container on a pod fails and the restart policy is set to OnFailure , the pod stays on the node and the Container is restarted. If you do not want the Container to restart, use a restart policy of Never . If an entire pod fails, OpenShift Container Platform starts a new pod. Developers must address the possibility that applications might be restarted in a new pod. In particular, applications must handle temporary files, locks, incomplete output, and so forth caused by previous runs. Note Kubernetes architecture expects reliable endpoints from cloud providers.
When a cloud provider is down, the kubelet prevents OpenShift Container Platform from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster using cloud provider integration. Install the cluster as if it was in a no-cloud environment. It is not recommended to toggle cloud provider integration on or off in an installed cluster. For details on how OpenShift Container Platform uses restart policy with failed Containers, see the Example States in the Kubernetes documentation. 2.3.2. Limiting the bandwidth available to pods You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data. The limits you place on a pod do not affect the bandwidth of other pods. Procedure To limit the bandwidth on a pod: Write an object definition JSON file, and specify the data traffic speed using kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations. For example, to limit both pod egress and ingress bandwidth to 10M/s: Limited Pod object definition { "kind": "Pod", "spec": { "containers": [ { "image": "openshift/hello-openshift", "name": "hello-openshift" } ] }, "apiVersion": "v1", "metadata": { "name": "iperf-slow", "annotations": { "kubernetes.io/ingress-bandwidth": "10M", "kubernetes.io/egress-bandwidth": "10M" } } } Create the pod using the object definition: USD oc create -f <file_or_dir_path> 2.3.3. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods that must always be available, even during a disruption. maxUnavailable is the number of pods that can be unavailable during a disruption. Note Available refers to the number of pods that have the condition Ready=True . Ready=True refers to a pod that is able to serve requests and should be added to the load balancing pools of all matching services. A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool.
You can check for pod disruption budgets across all projects with the following: USD oc get poddisruptionbudget --all-namespaces Example output NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #... The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 2.3.3.1. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with an object definition similar to the following: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Or: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Run the following command to add the object to the project: USD oc create -f </path/to/file> -n <project_name> 2.3.3.2. Specifying the eviction policy for unhealthy pods When you use pod disruption budgets (PDBs) to specify how many pods must be available simultaneously, you can also define the criteria for how unhealthy pods are considered for eviction. You can choose one of the following policies: IfHealthyBudget Running pods that are not yet healthy can be evicted only if the guarded application is not disrupted. AlwaysAllow Running pods that are not yet healthy can be evicted regardless of whether the criteria in the pod disruption budget are met. This policy can help evict malfunctioning applications, such as ones with pods stuck in the CrashLoopBackOff state or failing to report the Ready status. Important Specifying the unhealthy pod eviction policy for pod disruption budgets is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To use this Technology Preview feature, you must have enabled the TechPreviewNoUpgrade feature set. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Procedure Create a YAML file that defines a PodDisruptionBudget object and specify the unhealthy pod eviction policy: Example pod-disruption-budget.yaml file apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1 1 Choose either IfHealthyBudget or AlwaysAllow as the unhealthy pod eviction policy. The default is IfHealthyBudget when the unhealthyPodEvictionPolicy field is empty. Create the PodDisruptionBudget object by running the following command: USD oc create -f pod-disruption-budget.yaml With a PDB that has the AlwaysAllow unhealthy pod eviction policy set, you can now drain nodes and evict the pods for a malfunctioning application guarded by this PDB. Additional resources Enabling features using feature gates Unhealthy Pod Eviction Policy in the Kubernetes documentation 2.3.4. Preventing pod removal using critical pods There are a number of core components that are critical to a fully functional cluster, but run on a regular cluster node rather than the master. A cluster might stop working properly if a critical add-on is evicted. Pods marked as critical are not allowed to be evicted. Procedure To make a pod critical: Create a Pod spec or edit existing pods to include the system-cluster-critical priority class: apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1 1 Default priority class for pods that should never be evicted from a node. Alternatively, you can specify system-node-critical for pods that are important to the cluster but can be removed if necessary. Create the pod: USD oc create -f <file-name>.yaml 2.3.5. Reducing pod timeouts when using persistent volumes with high file counts If a storage volume contains many files (~1,000,000 or greater), you might experience pod timeouts. This can occur because, when volumes are mounted, OpenShift Container Platform recursively changes the ownership and permissions of the contents of each volume in order to match the fsGroup specified in a pod's securityContext . For large volumes, checking and changing the ownership and permissions can be time consuming, resulting in a very slow pod startup. You can reduce this delay by applying one of the following workarounds: Use a security context constraint (SCC) to skip the SELinux relabeling for a volume. Use the fsGroupChangePolicy field inside an SCC to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume. Use the Cluster Resource Override Operator to automatically apply an SCC to skip the SELinux relabeling. Use a runtime class to skip the SELinux relabeling for a volume. For information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 2.4.
Automatically scaling pods with the horizontal pod autoscaler As a developer, you can use a horizontal pod autoscaler (HPA) to specify how OpenShift Container Platform should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration. You can create an HPA for any deployment, deployment config, replica set, replication controller, or stateful set. For information on scaling pods based on custom metrics, see Automatically scaling pods based on custom metrics . Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. For more information on these objects, see Understanding Deployment and DeploymentConfig objects . 2.4.1. Understanding horizontal pod autoscalers You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target. After you create a horizontal pod autoscaler, OpenShift Container Platform begins to query the CPU and/or memory resource metrics on the pods. When these metrics are available, the horizontal pod autoscaler computes the ratio of the current metric utilization with the desired metric utilization, and scales up or down accordingly. The query and scaling occurs at a regular interval, but can take one to two minutes before metrics become available. For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployment configurations, scaling corresponds directly to the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase. OpenShift Container Platform automatically accounts for resources and prevents unnecessary autoscaling during resource spikes, such as during start up. Pods in the unready state have 0 CPU usage when scaling up and the autoscaler ignores the pods when scaling down. Pods without known metrics have 0% CPU usage when scaling up and 100% CPU when scaling down. This allows for more stability during the HPA decision. To use this feature, you must configure readiness checks to determine if a new pod is ready for use. To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. 2.4.1.1. Supported metrics The following metrics are supported by horizontal pod autoscalers: Table 2.1. Metrics Metric Description API version CPU utilization Number of CPU cores used. Can be used to calculate a percentage of the pod's requested CPU. autoscaling/v1 , autoscaling/v2 Memory utilization Amount of memory used. Can be used to calculate a percentage of the pod's requested memory. autoscaling/v2 Important For memory-based autoscaling, memory usage must increase and decrease proportionally to the replica count. On average: An increase in replica count must lead to an overall decrease in memory (working set) usage per-pod. A decrease in replica count must lead to an overall increase in per-pod memory usage. Use the OpenShift Container Platform web console to check the memory behavior of your application and ensure that your application meets these requirements before using memory-based autoscaling. The following example shows autoscaling for the image-registry Deployment object. The initial deployment requires 3 pods. 
The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods increase to 7: USD oc autoscale deployment/image-registry --min=5 --max=7 --cpu-percent=75 Example output horizontalpodautoscaler.autoscaling/image-registry autoscaled Sample HPA for the image-registry Deployment object with minReplicas set to 3 apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: image-registry namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: image-registry targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0 View the new state of the deployment: USD oc get deployment image-registry There are now 5 pods in the deployment: Example output NAME REVISION DESIRED CURRENT TRIGGERED BY image-registry 1 5 5 config 2.4.2. How does the HPA work? The horizontal pod autoscaler (HPA) extends the concept of pod auto-scaling. The HPA lets you create and manage a group of load-balanced pods. The HPA automatically increases or decreases the number of pods when a given CPU or memory threshold is crossed. Figure 2.1. High level workflow of the HPA The HPA is an API resource in the Kubernetes autoscaling API group. The autoscaler works as a control loop with a default of 15 seconds for the sync period. During this period, the controller manager queries the CPU, memory utilization, or both, against what is defined in the YAML file for the HPA. The controller manager obtains the utilization metrics from the resource metrics API for per-pod resource metrics like CPU or memory, for each pod that is targeted by the HPA. If a utilization value target is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each pod. The controller then takes the average of utilization across all targeted pods and produces a ratio that is used to scale the number of desired replicas. The HPA is configured to fetch metrics from metrics.k8s.io , which is provided by the metrics server. Because of the dynamic nature of metrics evaluation, the number of replicas can fluctuate during scaling for a group of replicas. Note To implement the HPA, all targeted pods must have a resource request set on their containers. 2.4.3. About requests and limits The scheduler uses the resource request that you specify for containers in a pod to decide which node to place the pod on. The kubelet enforces the resource limit that you specify for a container to ensure that the container is not allowed to use more than the specified limit. The kubelet also reserves the request amount of that system resource specifically for that container to use. How to use resource metrics? In the pod specifications, you must specify the resource requests, such as CPU and memory. The HPA uses this specification to determine the resource utilization and then scales the target up or down. For example, the HPA object uses the following metric source: type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60 In this example, the HPA keeps the average utilization of the pods in the scaling target at 60%. Utilization is the ratio of the current resource usage to the requested resource of the pod.
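Because utilization is computed against the resource requests on each container, every container in the target workload needs a requests stanza. The following is a minimal sketch of a Deployment that sets the CPU and memory requests the HPA relies on; the names, image, and values are illustrative only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: quay.io/example/app:latest # illustrative image
        resources:
          requests:
            cpu: 500m # utilization percentages are calculated against this value
            memory: 100Mi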
2.4.4. Best practices All pods must have resource requests configured The HPA makes a scaling decision based on the observed CPU or memory utilization values of pods in an OpenShift Container Platform cluster. Utilization values are calculated as a percentage of the resource requests of each pod. Missing resource request values can affect the optimal performance of the HPA. Configure the cool down period During horizontal pod autoscaling, scaling events might occur in rapid succession, without a time gap between them. Configure the cool down period to prevent frequent replica fluctuations. You can specify a cool down period by configuring the stabilizationWindowSeconds field. The stabilization window is used to restrict the fluctuation of the replica count when the metrics used for scaling keep fluctuating. The autoscaling algorithm uses this window to infer a desired state and avoid unwanted changes to workload scale. For example, a stabilization window is specified for the scaleDown field: behavior: scaleDown: stabilizationWindowSeconds: 300 In the above example, all desired states for the past 5 minutes are considered. This approximates a rolling maximum, and avoids having the scaling algorithm frequently remove pods only to trigger recreating an equivalent pod just moments later. 2.4.4.1. Scaling policies The autoscaling/v2 API allows you to add scaling policies to a horizontal pod autoscaler. A scaling policy controls how the OpenShift Container Platform horizontal pod autoscaler (HPA) scales pods. Scaling policies allow you to restrict the rate at which HPAs scale pods up or down by setting a specific number or specific percentage to scale in a specified period of time. You can also define a stabilization window , which uses previously computed desired states to control scaling if the metrics are fluctuating. You can create multiple policies for the same scaling direction, and determine which policy is used, based on the amount of change. You can also restrict the scaling by timed iterations. The HPA scales pods during an iteration, then performs further scaling, as needed, in subsequent iterations. Sample HPA object with a scaling policy apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0 ... 1 Specifies the direction for the scaling policy, either scaleDown or scaleUp . This example creates a policy for scaling down. 2 Defines the scaling policy. 3 Determines if the policy scales by a specific number of pods or a percentage of pods during each iteration. The default value is pods . 4 Limits the amount of scaling, either the number of pods or percentage of pods, during each iteration. There is no default value for scaling down by number of pods. 5 Determines the length of a scaling iteration. The default value is 15 seconds. 6 The default value for scaling down by percentage is 100%. 7 Determines which policy to use first, if multiple policies are defined. Specify Max to use the policy that allows the highest amount of change, Min to use the policy that allows the lowest amount of change, or Disabled to prevent the HPA from scaling in that policy direction. The default value is Max . 8 Determines the time period the HPA should look back at desired states. The default value is 0 . 9 This example creates a policy for scaling up. 10 Limits the amount of scaling up by the number of pods.
The default value for scaling up by number of pods is 4. 11 Limits the amount of scaling up by the percentage of pods. The default value for scaling up by percentage is 100%. Example policy for scaling down apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: ... minReplicas: 20 ... behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled In this example, when the number of pods is greater than 40, the percent-based policy is used for scaling down, as that policy results in a larger change, as required by the selectPolicy . If there are 80 pod replicas, in the first iteration the HPA reduces the pods by 8, which is 10% of the 80 pods (based on the type: Percent and value: 10 parameters), over one minute ( periodSeconds: 60 ). For the next iteration, the number of pods is 72. The HPA calculates that 10% of the remaining pods is 7.2, which it rounds up to 8 and scales down 8 pods. On each subsequent iteration, the number of pods to be scaled is re-calculated based on the number of remaining pods. When the number of pods falls below 40, the pods-based policy is applied, because the pod-based number is greater than the percent-based number. The HPA reduces 4 pods at a time ( type: Pods and value: 4 ), over 30 seconds ( periodSeconds: 30 ), until there are 20 replicas remaining ( minReplicas ). The selectPolicy: Disabled parameter prevents the HPA from scaling up the pods. You can manually scale up by adjusting the number of replicas in the replica set or deployment, if needed. If set, you can view the scaling policy by using the oc edit command: USD oc edit hpa hpa-resource-metrics-memory Example output apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior:\ '{"ScaleUp":{"StabilizationWindowSeconds":0,"SelectPolicy":"Max","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":15},{"Type":"Percent","Value":100,"PeriodSeconds":15}]},\ "ScaleDown":{"StabilizationWindowSeconds":300,"SelectPolicy":"Min","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":60},{"Type":"Percent","Value":10,"PeriodSeconds":60}]}}' ... 2.4.5. Creating a horizontal pod autoscaler by using the web console From the web console, you can create a horizontal pod autoscaler (HPA) that specifies the minimum and maximum number of pods you want to run on a Deployment or DeploymentConfig object. You can also define the amount of CPU or memory usage that your pods should target. Note An HPA cannot be added to deployments that are part of an Operator-backed service, Knative service, or Helm chart. Procedure To create an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Add HorizontalPodAutoscaler to open the Add HorizontalPodAutoscaler form. Figure 2.2. Add HorizontalPodAutoscaler From the Add HorizontalPodAutoscaler form, define the name, minimum and maximum pod limits, the CPU and memory usage, and click Save . Note If any of the values for CPU and memory usage are missing, a warning is displayed. To edit an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Edit HorizontalPodAutoscaler to open the Edit Horizontal Pod Autoscaler form.
From the Edit Horizontal Pod Autoscaler form, edit the minimum and maximum pod limits and the CPU and memory usage, and click Save . Note While creating or editing the horizontal pod autoscaler in the web console, you can switch from Form view to YAML view . To remove an HPA in the web console: In the Topology view, click the node to reveal the side panel. From the Actions drop-down list, select Remove HorizontalPodAutoscaler . In the confirmation pop-up window, click Remove to remove the HPA. 2.4.6. Creating a horizontal pod autoscaler for CPU utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the CPU usage you specify. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. When autoscaling for CPU utilization, you can use the oc autoscale command and specify the minimum and maximum number of pods you want to run at any given time and the average CPU utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. To autoscale for a specific CPU value, create a HorizontalPodAutoscaler object with the target CPU and pod limits. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To create a horizontal pod autoscaler for CPU utilization: Perform one of the following: To scale based on the percent of CPU utilization, create a HorizontalPodAutoscaler object for an existing object: USD oc autoscale <object_type>/<name> \ 1 --min <number> \ 2 --max <number> \ 3 --cpu-percent=<percent> 4 1 Specify the type and name of the object to autoscale. The object must exist and be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 2 Optionally, specify the minimum number of replicas when scaling down. 3 Specify the maximum number of replicas when scaling up. 4 Specify the target average CPU utilization over all the pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used. For example, the following command shows autoscaling for the image-registry Deployment object. The initial deployment requires 3 pods. 
The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods increase to 7: USD oc autoscale deployment/image-registry --min=5 --max=7 --cpu-percent=75 To scale for a specific CPU value, create a YAML file similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , or StatefulSet object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for CPU utilization. 9 Specify cpu for CPU utilization. 10 Set to AverageValue . 11 Set to averageValue with the targeted CPU value. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml Verify that the horizontal pod autoscaler was created: USD oc get hpa cpu-autoscale Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m 2.4.7. Creating a horizontal pod autoscaler object for memory utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the average memory utilization you specify, either a direct value or a percentage of requested memory. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. For memory utilization, you can specify the minimum and maximum number of pods and the average memory utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage .
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler Example output Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none> Procedure To create a horizontal pod autoscaler for memory utilization: Create a YAML file for one of the following: To scale for a specific memory value, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , or Statefulset object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set the type to AverageValue . 11 Specify averageValue and a specific memory value. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. To scale for a percentage, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a ReplicationController, use v1 . For a DeploymentConfig, use apps.openshift.io/v1 . For a Deployment, ReplicaSet, Statefulset object, use apps/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 
8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set to Utilization . 11 Specify averageUtilization and a target average memory utilization over all the pods, represented as a percent of requested memory. The target pods must have memory requests configured. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml For example: USD oc create -f hpa.yaml Example output horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created Verify that the horizontal pod autoscaler was created: USD oc get hpa hpa-resource-metrics-memory Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m USD oc describe hpa hpa-resource-metrics-memory Example output Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target 2.4.8. Understanding horizontal pod autoscaler status conditions by using the CLI You can use the status conditions set to determine whether or not the horizontal pod autoscaler (HPA) is able to scale and whether or not it is currently restricted in any way. The HPA status conditions are available with the v2 version of the autoscaling API. The HPA responds with the following status conditions: The AbleToScale condition indicates whether HPA is able to fetch and update metrics, as well as whether any backoff-related conditions could prevent scaling. A True condition indicates scaling is allowed. A False condition indicates scaling is not allowed for the reason specified. The ScalingActive condition indicates whether the HPA is enabled (for example, the replica count of the target is not zero) and is able to calculate desired metrics. A True condition indicates metrics is working properly. A False condition generally indicates a problem with fetching metrics. The ScalingLimited condition indicates that the desired scale was capped by the maximum or minimum of the horizontal pod autoscaler. A True condition indicates that you need to raise or lower the minimum or maximum replica count in order to scale. A False condition indicates that the requested scaling is allowed. 
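To inspect the raw condition objects directly, you can query the status field of the HPA. This is a minimal sketch that assumes the cm-test HPA shown in the following example already exists:

USD oc get hpa cm-test -o jsonpath='{.status.conditions}'

The oc describe command presents the same conditions in a more readable table, as in the following example: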
USD oc describe hpa cm-test Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events: 1 The horizontal pod autoscaler status messages. The following is an example of a pod that is unable to scale: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind "ReplicationController" in group "apps" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind "ReplicationController" in group "apps" The following is an example of a pod that could not obtain the needed metrics for scaling: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API The following is an example of a pod where the requested autoscaling was less than the required minimums: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.8.1. Viewing horizontal pod autoscaler status conditions by using the CLI You can view the status conditions set on a pod by the horizontal pod autoscaler (HPA). Note The horizontal pod autoscaler status conditions are available with the v2 version of the autoscaling API. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . 
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To view the status conditions on a pod, use the following command with the name of the pod: USD oc describe hpa <pod-name> For example: USD oc describe hpa cm-test The conditions appear in the Conditions field in the output. Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.9. Additional resources For more information on replication controllers and deployment controllers, see Understanding deployments and deployment configs . For an example on the usage of HPA, see Horizontal Pod Autoscaling of Quarkus Application Based on Memory Utilization . 2.5. Automatically adjust pod resource levels with the vertical pod autoscaler The OpenShift Container Platform Vertical Pod Autoscaler Operator (VPA) automatically reviews the historic and current CPU and memory resources for containers in pods and can update the resource limits and requests based on the usage values it learns. The VPA uses individual custom resources (CR) to update all of the pods associated with a workload object, such as a Deployment , DeploymentConfig , StatefulSet , Job , DaemonSet , ReplicaSet , or ReplicationController , in a project. The VPA helps you to understand the optimal CPU and memory usage for your pods and can automatically maintain pod resources through the pod lifecycle. 2.5.1. About the Vertical Pod Autoscaler Operator The Vertical Pod Autoscaler Operator (VPA) is implemented as an API resource and a custom resource (CR). The CR determines the actions that the VPA Operator should take with the pods associated with a specific workload object, such as a daemon set, replication controller, and so forth, in a project. The VPA Operator consists of three components, each of which has its own pod in the VPA namespace: Recommender The VPA recommender monitors the current and past resource consumption and, based on this data, determines the optimal CPU and memory resources for the pods in the associated workload object. Updater The VPA updater checks if the pods in the associated workload object have the correct resources. If the resources are correct, the updater takes no action. If the resources are not correct, the updater kills the pod so that they can be recreated by their controllers with the updated requests. 
Admission controller The VPA admission controller sets the correct resource requests on each new pod in the associated workload object, whether the pod is new or was recreated by its controller due to the VPA updater actions. You can use the default recommender or use your own alternative recommender to autoscale based on your own algorithms. The default recommender automatically computes historic and current CPU and memory usage for the containers in those pods and uses this data to determine optimized resource limits and requests to ensure that these pods are operating efficiently at all times. For example, the default recommender suggests reduced resources for pods that are requesting more resources than they are using and increased resources for pods that are not requesting enough. The VPA then automatically deletes any pods that are out of alignment with these recommendations one at a time, so that your applications can continue to serve requests with no downtime. The workload objects then re-deploy the pods with the original resource limits and requests. The VPA uses a mutating admission webhook to update the pods with optimized resource limits and requests before the pods are admitted to a node. If you do not want the VPA to delete pods, you can view the VPA resource limits and requests and manually update the pods as needed. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . For example, if you have a pod that uses 50% of the CPU but only requests 10%, the VPA determines that the pod is consuming more CPU than requested and deletes the pod. The workload object, such as replica set, restarts the pods and the VPA updates the new pod with its recommended resources. For developers, you can use the VPA to help ensure your pods stay up during periods of high demand by scheduling pods onto nodes that have appropriate resources for each pod. Administrators can use the VPA to better utilize cluster resources, such as preventing pods from reserving more CPU resources than needed. The VPA monitors the resources that workloads are actually using and adjusts the resource requirements so capacity is available to other workloads. The VPA also maintains the ratios between limits and requests that are specified in initial container configuration. Note If you stop running the VPA or delete a specific VPA CR in your cluster, the resource requests for the pods already modified by the VPA do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the VPA. 2.5.2. Installing the Vertical Pod Autoscaler Operator You can use the OpenShift Container Platform web console to install the Vertical Pod Autoscaler Operator (VPA). Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose VerticalPodAutoscaler from the list of available Operators, and click Install . On the Install Operator page, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-vertical-pod-autoscaler namespace, which is automatically created if it does not exist. 
Click Install . Verification Verify the installation by listing the VPA Operator components: Navigate to Workloads Pods . Select the openshift-vertical-pod-autoscaler project from the drop-down menu and verify that there are four pods running. Navigate to Workloads Deployments to verify that there are four deployments running. Optional: Verify the installation in the OpenShift Container Platform CLI using the following command: USD oc get all -n openshift-vertical-pod-autoscaler The output shows four pods and four deployments: Example output NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s 2.5.3. About Using the Vertical Pod Autoscaler Operator To use the Vertical Pod Autoscaler Operator (VPA), you create a VPA custom resource (CR) for a workload object in your cluster. The VPA learns and applies the optimal CPU and memory resources for the pods associated with that workload object. You can use a VPA with a deployment, stateful set, job, daemon set, replica set, or replication controller workload object. The VPA CR must be in the same project as the pods you want to monitor. You use the VPA CR to associate a workload object and specify which mode the VPA operates in: The Auto and Recreate modes automatically apply the VPA CPU and memory recommendations throughout the pod lifetime. The VPA deletes any pods in the project that are out of alignment with its recommendations. When redeployed by the workload object, the VPA updates the new pods with its recommendations. The Initial mode automatically applies VPA recommendations only at pod creation. The Off mode only provides recommended resource limits and requests, allowing you to manually apply the recommendations. The Off mode does not update pods. You can also use the CR to opt out certain containers from VPA evaluation and updates. For example, a pod has the following limits and requests: resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi After creating a VPA that is set to auto , the VPA learns the resource usage and deletes the pod. When redeployed, the pod uses the new resource limits and requests: resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k You can view the VPA recommendations using the following command: USD oc get vpa <vpa-name> --output yaml After a few minutes, the output shows the recommendations for CPU and memory requests, similar to the following: Example output ... status: ...
recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ... The output shows the recommended resources, target , the minimum recommended resources, lowerBound , the highest recommended resources, upperBound , and the most recent resource recommendations, uncappedTarget . The VPA uses the lowerBound and upperBound values to determine if a pod needs to be updated. If a pod has resource requests below the lowerBound values or above the upperBound values, the VPA terminates and recreates the pod with the target values. 2.5.3.1. Changing the VPA minimum value By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete and update their pods. As a result, workload objects that specify fewer than two replicas are not automatically acted upon by the VPA. The VPA does update new pods from these workload objects if the pods are restarted by some process external to the VPA. You can change this cluster-wide minimum value by modifying the minReplicas parameter in the VerticalPodAutoscalerController custom resource (CR). For example, if you set minReplicas to 3 , the VPA does not delete and update pods for workload objects that specify fewer than three replicas. Note If you set minReplicas to 1 , the VPA can delete the only pod for a workload object that specifies only one replica. You should use this setting with one-replica objects only if your workload can tolerate downtime whenever the VPA deletes a pod to adjust its resources. To avoid unwanted downtime with one-replica objects, configure the VPA CRs with the podUpdatePolicy set to Initial , which automatically updates the pod only when it is restarted by some process external to the VPA, or Off , which allows you to update the pod manually at an appropriate time for your application. Example VerticalPodAutoscalerController object apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: "2021-04-21T19:29:49Z" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: "142172" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15 1 1 Specify the minimum number of replicas in a workload object for the VPA to act on. Any objects with replicas fewer than the minimum are not automatically deleted by the VPA. 2.5.3.2. Automatically applying VPA recommendations To use the VPA to automatically update pods, create a VPA CR for a specific workload object with updateMode set to Auto or Recreate . When the pods are created for the workload object, the VPA constantly monitors the containers to analyze their CPU and memory needs. The VPA deletes any pods that do not meet the VPA recommendations for CPU and memory. When redeployed, the pods use the new resource limits and requests based on the VPA recommendations, honoring any pod disruption budget set for your applications. The recommendations are added to the status field of the VPA CR for reference. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. 
Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . Example VPA CR for the Auto mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto or Recreate : Auto . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. Recreate . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. This mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. Note Before a VPA can determine recommendations for resources and apply the recommended resources to new pods, operating pods must exist and be running in the project. If a workload's resource usage, such as CPU and memory, is consistent, the VPA can determine recommendations for resources in a few minutes. If a workload's resource usage is inconsistent, the VPA must collect metrics at various resource usage intervals for the VPA to make an accurate recommendation. 2.5.3.3. Automatically applying VPA recommendations on pod creation To use the VPA to apply the recommended resources only when a pod is first deployed, create a VPA CR for a specific workload object with updateMode set to Initial . Then, manually delete any pods associated with the workload object that you want to use the VPA recommendations. In the Initial mode, the VPA does not delete pods and does not update the pods as it learns new resource recommendations. Example VPA CR for the Initial mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Initial" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Initial . The VPA assigns resources when pods are created and does not change the resources during the lifetime of the pod. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.3.4. Manually applying VPA recommendations To use the VPA to only determine the recommended CPU and memory values, create a VPA CR for a specific workload object with updateMode set to off . When the pods are created for that workload object, the VPA analyzes the CPU and memory needs of the containers and records those recommendations in the status field of the VPA CR. The VPA does not update the pods as it determines new resource recommendations. 
Example VPA CR for the Off mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Off" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Off . You can view the recommendations using the following command. USD oc get vpa <vpa-name> --output yaml With the recommendations, you can edit the workload object to add CPU and memory requests, then delete and redeploy the pods using the recommended resources. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.3.5. Exempting containers from applying VPA recommendations If your workload object has multiple containers and you do not want the VPA to evaluate and act on all of the containers, create a VPA CR for a specific workload object and add a resourcePolicy to opt-out specific containers. When the VPA updates the pods with recommended resources, any containers with a resourcePolicy are not updated and the VPA does not present recommendations for those containers in the pod. apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto , Recreate , or Off . The Recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. 4 Specify the containers you want to opt-out and set mode to Off . For example, a pod has two containers, the same resource requests and limits: # ... spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi # ... After launching a VPA CR with the backend container set to opt-out, the VPA terminates and recreates the pod with the recommended resources applied only to the frontend container: ... spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k ... name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi ... 2.5.3.6. Using an alternative recommender You can use your own recommender to autoscale based on your own algorithms. If you do not specify an alternative recommender, OpenShift Container Platform uses the default recommender, which suggests CPU and memory requests based on historical usage. Because there is no universal recommendation policy that applies to all types of workloads, you might want to create and deploy different recommenders for specific workloads. For example, the default recommender might not accurately predict future resource usage when containers exhibit certain resource behaviors, such as cyclical patterns that alternate between usage spikes and idling as used by monitoring applications, or recurring and repeating patterns used with deep learning applications. 
Using the default recommender with these usage behaviors might result in significant over-provisioning and Out of Memory (OOM) kills for your applications. Note Instructions for how to create a recommender are beyond the scope of this documentation, Procedure To use an alternative recommender for your pods: Create a service account for the alternative recommender and bind that service account to the required cluster role: apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> 1 Creates a service account for the recommender in the namespace where the recommender is deployed. 2 Binds the recommender service account to the metrics-reader role. Specify the namespace where the recommender is to be deployed. 3 Binds the recommender service account to the vpa-actor role. Specify the namespace where the recommender is to be deployed. 4 Binds the recommender service account to the vpa-target-reader role. Specify the namespace where the recommender is to be deployed. To add the alternative recommender to the cluster, create a Deployment object similar to the following: apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true 1 Creates a container for your alternative recommender. 2 Specifies your recommender image. 3 Associates the service account that you created for the recommender. A new pod is created for the alternative recommender in the same namespace. USD oc get pods Example output NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s Configure a VPA CR that includes the name of the alternative recommender Deployment object. 
Example VPA CR to include the alternative recommender apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: "apps/v1" kind: Deployment 2 name: frontend 1 Specifies the name of the alternative recommender deployment. 2 Specifies the name of an existing workload object you want this VPA to manage. 2.5.4. Using the Vertical Pod Autoscaler Operator You can use the Vertical Pod Autoscaler Operator (VPA) by creating a VPA custom resource (CR). The CR indicates which pods it should analyze and determines the actions the VPA should take with those pods. Prerequisites The workload object that you want to autoscale must exist. If you want to use an alternative recommender, a deployment including that recommender must exist. Procedure To create a VPA CR for a specific workload object: Change to the project where the workload object you want to scale is located. Create a VPA CR YAML file: apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" recommenders: 5 - name: my-recommender 1 Specify the type of workload object you want this VPA to manage: Deployment , StatefulSet , Job , DaemonSet , ReplicaSet , or ReplicationController . 2 Specify the name of an existing workload object you want this VPA to manage. 3 Specify the VPA mode: auto to automatically apply the recommended resources on pods associated with the controller. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. recreate to automatically apply the recommended resources on pods associated with the workload object. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. The recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. initial to automatically apply the recommended resources when pods associated with the workload object are created. The VPA does not update the pods as it learns new resource recommendations. off to only generate resource recommendations for the pods associated with the workload object. The VPA does not update the pods as it learns new resource recommendations and does not apply the recommendations to new pods. 4 Optional. Specify the containers you want to opt-out and set the mode to Off . 5 Optional. Specify an alternative recommender. Create the VPA CR: USD oc create -f <file-name>.yaml After a few moments, the VPA learns the resource usage of the containers in the pods associated with the workload object. You can view the VPA recommendations using the following command: USD oc get vpa <vpa-name> --output yaml The output shows the recommendations for CPU and memory requests, similar to the following: Example output ... status: ... recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ... 
1 lowerBound is the minimum recommended resource levels. 2 target is the recommended resource levels. 3 uncappedTarget is the most recent resource recommendations. 4 upperBound is the highest recommended resource levels. 2.5.5. Uninstalling the Vertical Pod Autoscaler Operator You can remove the Vertical Pod Autoscaler Operator (VPA) from your OpenShift Container Platform cluster. After uninstalling, the resource requests for the pods already modified by an existing VPA CR do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the Vertical Pod Autoscaler Operator. Note You can remove a specific VPA CR by using the oc delete vpa <vpa-name> command. Removing a VPA CR has the same effect on resource requests as uninstalling the Vertical Pod Autoscaler Operator. After removing the VPA Operator, it is recommended that you remove the other components associated with the Operator to avoid potential issues. Prerequisites The Vertical Pod Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-vertical-pod-autoscaler project. For the VerticalPodAutoscaler Operator, click the Options menu and select Uninstall Operator . Optional: To remove all operands associated with the Operator, in the dialog box, select the Delete all operand instances for this operator checkbox. Click Uninstall . Optional: Use the OpenShift CLI to remove the VPA components: Delete the VPA namespace: USD oc delete namespace openshift-vertical-pod-autoscaler Delete the VPA custom resource definition (CRD) objects: USD oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io USD oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io USD oc delete crd verticalpodautoscalers.autoscaling.k8s.io Deleting the CRDs removes the associated roles, cluster roles, and role bindings. Note This action removes from the cluster all user-created VPA CRs. If you re-install the VPA, you must create these objects again. Delete the MutatingWebhookConfiguration object by running the following command: USD oc delete MutatingWebhookConfiguration vpa-webhook-config Delete the VPA Operator: USD oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler 2.6. Providing sensitive data to pods Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an administrator, you can use Secret objects to provide this information without exposing that information in clear text. 2.6.1. Understanding secrets The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. YAML Secret object definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values.
2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary . 3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod's service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume). 2.6.1.1. Types of secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/basic-auth : Use with Basic authentication kubernetes.io/dockercfg : Use as an image pull secret kubernetes.io/dockerconfigjson : Use as an image pull secret kubernetes.io/service-account-token : Use to obtain a legacy service account API token kubernetes.io/ssh-auth : Use with SSH key authentication kubernetes.io/tls : Use with TLS certificate authorities Specify type: Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. For examples of creating different types of secrets, see Understanding how to create secrets . 2.6.1.2. Secret data keys Secret keys must be in a DNS subdomain. 2.6.1.3. About automatically generated service account token secrets When a service account is created, a service account token secret is automatically generated for it. This service account token secret, along with an automatically generated docker configuration secret, is used to authenticate to the internal OpenShift Container Platform registry. Do not rely on these automatically generated secrets for your own use; they might be removed in a future OpenShift Container Platform release. Note Prior to OpenShift Container Platform 4.11, a second service account token secret was generated when a service account was created. This service account token secret was used to access the Kubernetes API. Starting with OpenShift Container Platform 4.11, this second service account token secret is no longer created. This is because the LegacyServiceAccountTokenNoAutoGeneration upstream Kubernetes feature gate was enabled, which stops the automatic generation of secret-based service account tokens to access the Kubernetes API. After upgrading to 4.13, any existing service account token secrets are not deleted and continue to function. Workloads are automatically injected with a projected volume to obtain a bound service account token.
If your workload needs an additional service account token, add an additional projected volume in your workload manifest. Bound service account tokens are more secure than service account token secrets for the following reasons: Bound service account tokens have a bounded lifetime. Bound service account tokens contain audiences. Bound service account tokens can be bound to pods or secrets and the bound tokens are invalidated when the bound object is removed. For more information, see Configuring bound service account tokens using volume projection . You can also manually create a service account token secret to obtain a token, if the security exposure of a non-expiring token in a readable API object is acceptable to you. For more information, see Creating a service account token secret . Additional resources For information about requesting bound service account tokens, see Using bound service account tokens For information about creating a service account token secret, see Creating a service account token secret . 2.6.2. Understanding how to create secrets As an administrator you must create a secret before developers can create the pods that depend on that secret. When creating secrets: Create a secret object that contains the data you want to keep secret. The specific data required for each secret type is described in the following sections. Example YAML object that creates an opaque secret apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB 1 Specifies the type of secret. 2 Specifies encoded string and data. 3 Specifies decoded string and data. Use either the data or stringData fields, not both. Update the pod's service account to reference the secret: YAML of a service account that uses a secret apiVersion: v1 kind: ServiceAccount ... secrets: - name: test-secret Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume): YAML of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never 1 Add a volumeMounts field to each container that needs the secret. 2 Specifies an unused directory name where you would like the secret to appear. Each key in the secret data map becomes the filename under mountPath . 3 Set to true . If true, this instructs the driver to provide a read-only volume. 4 Specifies the name of the secret. YAML of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username restartPolicy: Never 1 Specifies the environment variable that consumes the secret key.
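For reference, the test-secret consumed by the preceding examples can also be created directly from the command line rather than from a YAML file. The following is a minimal sketch; the key names match the examples above and the values are placeholders: USD oc create secret generic test-secret --from-literal=username=<username> --from-literal=password=<password> The values supplied with --from-literal are base64 encoded automatically when the secret is stored.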
YAML of a build config populating environment variables with secret data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest' 1 Specifies the environment variable that consumes the secret key. 2.6.2.1. Secret creation restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers. As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. Image pull secrets use service accounts for the automatic injection of the secret into all pods in a namespace. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to a Secret object. Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that could exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. 2.6.2.2. Creating an opaque secret As an administrator, you can create an opaque secret, which allows you to store unstructured key:value pairs that can contain arbitrary values. Procedure Create a Secret object in a YAML file on a control plane node. For example: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. Use the following command to create a Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources For more information on using secrets in pods, see Understanding how to create secrets . 2.6.2.3. Creating a service account token secret As an administrator, you can create a service account token secret, which allows you to distribute a service account token to applications that must authenticate to the API. Note It is recommended to obtain bound service account tokens using the TokenRequest API instead of using service account token secrets. The tokens obtained from the TokenRequest API are more secure than the tokens stored in secrets, because they have a bounded lifetime and are not readable by other API clients. You should create a service account token secret only if you cannot use the TokenRequest API and if the security exposure of a non-expiring token in a readable API object is acceptable to you. See the Additional resources section that follows for information on creating bound service account tokens. 
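As a quick illustration of the recommended approach, a bound token can be requested from the TokenRequest API directly through the CLI. The following is a sketch only; <service_account_name> and the duration are placeholders that you replace with your own values: USD oc create token <service_account_name> --duration=1h The command prints a short-lived bound token to standard output instead of storing a non-expiring token in a secret.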
Procedure Create a Secret object in a YAML file on a control plane node: Example secret object: apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: "sa-name" 1 type: kubernetes.io/service-account-token 2 1 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 2 Specifies a service account token secret. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources For more information on using secrets in pods, see Understanding how to create secrets . For information on requesting bound service account tokens, see Using bound service account tokens For information on creating service accounts, see Understanding and creating service accounts . 2.6.2.4. Creating a basic authentication secret As an administrator, you can create a basic authentication secret, which allows you to store the credentials needed for basic authentication. When using this secret type, the data parameter of the Secret object must contain the following keys encoded in the base64 format: username : the user name for authentication password : the password or token for authentication Note You can use the stringData parameter to use clear text content. Procedure Create a Secret object in a YAML file on a control plane node: Example secret object apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 stringData: 2 username: admin password: <password> 1 Specifies a basic authentication secret. 2 Specifies the basic authentication values to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources For more information on using secrets in pods, see Understanding how to create secrets . 2.6.2.5. Creating an SSH authentication secret As an administrator, you can create an SSH authentication secret, which allows you to store data used for SSH authentication. When using this secret type, the data parameter of the Secret object must contain the SSH credential to use. Procedure Create a Secret object in a YAML file on a control plane node: Example secret object: apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y ... 1 Specifies an SSH authentication secret. 2 Specifies the SSH key/value pair as the SSH credentials to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section.
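The same secret can also be created from an existing key file on disk rather than by pasting base64 content into YAML. A minimal sketch, where <path_to_private_key> is a placeholder for the location of your private key file: USD oc create secret generic secret-ssh-auth --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=<path_to_private_key> The CLI base64 encodes the file content and sets the ssh-privatekey key automatically.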
Additional resources Understanding how to create secrets . 2.6.2.6. Creating a Docker configuration secret As an administrator, you can create a Docker configuration secret, which allows you to store the credentials for accessing a container image registry. kubernetes.io/dockercfg . Use this secret type to store your local Docker configuration file. The data parameter of the secret object must contain the contents of a .dockercfg file encoded in the base64 format. kubernetes.io/dockerconfigjson . Use this secret type to store your local Docker configuration JSON file. The data parameter of the secret object must contain the contents of a .docker/config.json file encoded in the base64 format. Procedure Create a Secret object in a YAML file on a control plane node. Example Docker configuration secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockercfg 1 data: .dockercfg: bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration file. 2 The output of a base64-encoded Docker configuration file Example Docker configuration JSON secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson: bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration JSON file. 2 The output of a base64-encoded Docker configuration JSON file Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources For more information on using secrets in pods, see Understanding how to create secrets . 2.6.2.7. Creating a secret using the web console You can create secrets using the web console. Procedure Navigate to Workloads Secrets . Click Create From YAML . Edit the YAML manually to your specifications, or drag and drop a file into the YAML editor. For example: apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com 1 This example specifies an opaque secret; however, you may see other secret types such as service account token secret, basic authentication secret, SSH authentication secret, or a secret that uses Docker configuration. 2 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. Click Create . Click Add Secret to workload . From the drop-down menu, select the workload to add. Click Save . 2.6.3. Understanding how to update secrets When you modify the value of a secret, the value (used by an already running pod) will not dynamically change. To change a secret, you must delete the original pod and create a new pod (perhaps with an identical PodSpec). Updating a secret follows the same workflow as deploying a new Container image. You can use the oc rollout restart command to redeploy the pods. The resourceVersion value in a secret is not specified when it is referenced.
Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods will report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 2.6.4. Creating and using secrets As an administrator, you can create a service account token secret. This allows you to distribute a service account token to applications that must authenticate to the API. Procedure Create a service account in your namespace by running the following command: USD oc create sa <service_account_name> -n <your_namespace> Save the following YAML example to a file named service-account-token-secret.yaml . The example includes a Secret object configuration that you can use to generate a service account token: apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: "sa-name" 2 type: kubernetes.io/service-account-token 3 1 Replace <secret_name> with the name of your service token secret. 2 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 3 Specifies a service account token secret type. Generate the service account token by applying the file: USD oc apply -f service-account-token-secret.yaml Get the service account token from the secret by running the following command: USD oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1 Example output ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA 1 Replace <sa_token_secret> with the name of your service token secret. Use your service account token to authenticate with the API of your cluster: USD curl -X GET <openshift_cluster_api> --header "Authorization: Bearer <token>" 1 2 1 Replace <openshift_cluster_api> with the OpenShift cluster API. 2 Replace <token> with the service account token that is output in the preceding command. 2.6.5. About using signed certificates with secrets To secure communication to your service, you can configure OpenShift Container Platform to generate a signed serving certificate/key pair that you can add into a secret in a project. A service serving certificate secret is intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. 
Example Service object configured for a service serving certificate secret apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1 # ... 1 Specify the name for the certificate secret. Other pods can trust cluster-created certificates (which are only signed for internal DNS names) by using the CA bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 2.6.5.1. Generating signed certificates for use with secrets To use a signed serving certificate/key pair with a pod, create or edit the service to add the service.beta.openshift.io/serving-cert-secret-name annotation, then add the secret to the pod. Procedure To create a service serving certificate secret : Edit the Service object for your service. Add the service.beta.openshift.io/serving-cert-secret-name annotation with the name you want to use for your secret. kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376 The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. Create the service: USD oc create -f <file-name>.yaml View the secret to make sure it was created: View a list of all secrets: USD oc get secrets Example output NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m View details on your secret: USD oc describe secret my-cert Example output Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes Edit your Pod spec with that secret. apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: containers: - name: mypod image: redis volumeMounts: - name: my-volume mountPath: "/etc/my-path" volumes: - name: my-volume secret: secretName: my-cert items: - key: tls.crt path: my-group/tls.crt mode: 511 When it is available, your pod will run. The certificate will be good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. 2.6.6. Troubleshooting secrets If a service certificate generation fails, the service's service.beta.openshift.io/serving-cert-generation-error annotation contains a message similar to the following: secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 The service that generated the certificate no longer exists, or has a different serviceUID .
You must force certificate regeneration by removing the old secret and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num . Delete the secret: USD oc delete secret <secret_name> Clear the annotations: USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command that removes an annotation has a - after the annotation name to be removed. 2.7. Creating and using config maps The following sections define config maps and how to create and use them. 2.7.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. 2.7.2. Creating a config map in the OpenShift Container Platform web console You can create a config map in the OpenShift Container Platform web console.
Procedure To create a config map as a cluster administrator: In the Administrator perspective, select Workloads Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . To create a config map as a developer: In the Developer perspective, select Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . 2.7.3. Creating a config map by using the CLI You can use the following command to create a config map from directories, specific files, or literal values. Procedure Create a config map: USD oc create configmap <configmap_name> [options] 2.7.3.1. Creating a config map from a directory You can create a config map from a directory by using the --from-file flag. This method allows you to use multiple files within a directory to create a config map. Each file in the directory is used to populate a key in the config map, where the name of the key is the file name, and the value of the key is the content of the file. For example, the following command creates a config map with the contents of the example-files directory: USD oc create configmap game-config --from-file=example-files/ View the keys in the config map: USD oc describe configmaps game-config Example output Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes You can see that the two keys in the map are created from the file names in the directory specified in the command. The content of those keys might be large, so the output of oc describe only shows the names of the keys and their sizes. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map holding the content of each file in this directory by entering the following command: USD oc create configmap game-config \ --from-file=example-files/ Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps game-config -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: "407" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985 2.7.3.2. Creating a config map from a file You can create a config map from a file by using the --from-file flag. You can pass the --from-file option multiple times to the CLI. You can also specify the key to set in a config map for content imported from a file by passing a key=value expression to the --from-file option. 
For example: USD oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties Note If you create a config map from a file, you can include files containing non-UTF8 data that are placed in this field without corrupting the non-UTF8 data. OpenShift Container Platform detects binary files and transparently encodes the file as MIME . On the server, the MIME payload is decoded and stored without corrupting the data. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map by specifying a specific file: USD oc create configmap game-config-2 \ --from-file=example-files/game.properties \ --from-file=example-files/ui.properties Create a config map by specifying a key-value pair: USD oc create configmap game-config-3 \ --from-file=game-special-key=example-files/game.properties Verification Enter the oc get command for the object with the -o option to see the values of the keys from the file: USD oc get configmaps game-config-2 -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: "516" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985 Enter the oc get command for the object with the -o option to see the values of the keys from the key-value pair: USD oc get configmaps game-config-3 -o yaml Example output apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: "530" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985 1 This is the key that you set in the preceding step. 2.7.3.3. Creating a config map from literal values You can supply literal values for a config map. The --from-literal option takes a key=value syntax, which allows literal values to be supplied directly on the command line. Procedure Create a config map by specifying a literal value: USD oc create configmap special-config \ --from-literal=special.how=very \ --from-literal=special.type=charm Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps special-config -o yaml Example output apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: "651" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985 2.7.4. 
Use cases: Consuming config maps in pods The following sections describe some use cases when consuming ConfigMap objects in pods. 2.7.4.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: SPECIAL_LEVEL_KEY=very log_level=INFO Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 2.7.4.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax USD(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax.
Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables. When this pod is run, the output from the echo command run in the test-container container is as follows: very charm 2.7.4.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple of different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: very You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat /etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: very 2.8. Using device plugins to access external resources with pods Device plugins allow you to use a particular device type (GPU, InfiniBand, or other similar computing resources that require vendor-specific initialization and setup) in your OpenShift Container Platform pod without needing to write custom code. 2.8.1. Understanding device plugins The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. A device plugin is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources.
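Before looking at the plugin API itself, it can help to see the consumer side: a pod requests a device that a plugin advertises through the standard resource limits mechanism. The following is a sketch only; nvidia.com/gpu is an assumed resource name advertised by one of the example plugins listed below, and the image name is a placeholder: apiVersion: v1 kind: Pod metadata: name: gpu-example-pod spec: containers: - name: cuda-workload image: <gpu_workload_image> resources: limits: nvidia.com/gpu: 1 The kubelet passes this request to the Device Manager, which invokes the owning plugin's Allocate RPC, as described in the following sections.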
Any device plugin must support the following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state changes or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartContainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} } Example device plugins Nvidia GPU device plugin for COS-based operating system Nvidia official GPU device plugin Solarflare device plugin KubeVirt device plugins: vfio and kvm Kubernetes device plugin for IBM Crypto Express (CEX) cards Note For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 2.8.1.1. Methods for deploying a device plugin Daemon sets are the recommended approach for device plugin deployments. Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugins/ on the node to serve RPCs from Device Manager. Because device plugins must manage hardware resources, access the host file system, and create sockets, they must run in a privileged security context. More specific details regarding deployment steps can be found with each device plugin implementation. 2.8.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource . Upon start, the device plugin registers itself with Device Manager by invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes the ListAndWatch remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager will keep watching the stream for new updates from the plugin. On the plugin side, the plugin will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection. While handling a new pod admission request, Kubelet passes requested Extended Resources to the Device Manager for device allocation.
Device Manager checks its database to verify whether a corresponding plugin exists. If the plugin exists and, according to its local cache, there are free allocatable devices, the Allocate RPC is invoked on that particular device plugin. Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 2.8.3. Enabling Device Manager Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by viewing the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to the CR. 2 Enter the label from the Machine Config Pool. 3 Set DevicePlugins to true . Create the Device Manager: USD oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled. 2.9. Including pod priority in pod scheduling decisions You can enable pod priority and preemption in your cluster. Pod priority indicates the importance of a pod relative to other pods and queues the pods based on that priority. Pod preemption allows the cluster to evict, or preempt, lower-priority pods so that higher-priority pods can be scheduled if there is no available space on a suitable node. Pod priority also affects the scheduling order of pods and out-of-resource eviction ordering on the node. To use priority and preemption, you create priority classes that define the relative weight of your pods. Then, reference a priority class in the pod specification to apply that weight for scheduling. 2.9.1. Understanding pod priority When you use the Pod Priority and Preemption feature, the scheduler orders pending pods by their priority, and a pending pod is placed ahead of other pending pods with lower priority in the scheduling queue. As a result, the higher priority pod might be scheduled sooner than pods with lower priority if its scheduling requirements are met. If a pod cannot be scheduled, the scheduler continues to schedule other, lower-priority pods. 2.9.1.1. Pod priority classes You can assign pods a priority class, which is a non-namespaced object that defines a mapping from a name to the integer value of the priority. The higher the value, the higher the priority.
A priority class object can take any 32-bit integer value smaller than or equal to 1000000000 (one billion). Reserve numbers larger than or equal to one billion for critical pods that must not be preempted or evicted. By default, OpenShift Container Platform has two reserved priority classes for critical system pods to have guaranteed scheduling. USD oc get priorityclasses Example output NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s system-node-critical - This priority class has a value of 2000001000 and is used for all pods that should never be evicted from a node. Examples of pods that have this priority class are sdn-ovs , sdn , and so forth. A number of critical components include the system-node-critical priority class by default, for example: master-api master-controller master-etcd sdn sdn-ovs sync system-cluster-critical - This priority class has a value of 2000000000 (two billion) and is used with pods that are important for the cluster. Pods with this priority class can be evicted from a node in certain circumstances. For example, pods configured with the system-node-critical priority class can take priority. However, this priority class does ensure guaranteed scheduling. Examples of pods that can have this priority class are fluentd, add-on components like descheduler, and so forth. A number of critical components include the system-cluster-critical priority class by default, for example: fluentd metrics-server descheduler openshift-user-critical - You can use the priorityClassName field with important pods that cannot bind their resource consumption and do not have predictable resource consumption behavior. Prometheus pods under the openshift-monitoring and openshift-user-workload-monitoring namespaces use the openshift-user-critical priorityClassName . Monitoring workloads use system-critical as their first priorityClass , but this causes problems when monitoring uses excessive memory and the nodes cannot evict them. As a result, monitoring drops priority to give the scheduler flexibility, moving heavy workloads around to keep critical nodes operating. cluster-logging - This priority is used by Fluentd to make sure Fluentd pods are scheduled to nodes over other apps. 2.9.1.2. Pod priority names After you have one or more priority classes, you can create pods that specify a priority class name in a Pod spec. The priority admission controller uses the priority class name field to populate the integer value of the priority. If the named priority class is not found, the pod is rejected. 2.9.2. Understanding pod preemption When a developer creates a pod, the pod goes into a queue. If the developer configured the pod for pod priority or preemption, the scheduler picks a pod from the queue and tries to schedule the pod on a node. If the scheduler cannot find space on an appropriate node that satisfies all the specified requirements of the pod, preemption logic is triggered for the pending pod. When the scheduler preempts one or more pods on a node, the nominatedNodeName field of the higher-priority Pod spec is set to the name of the node, along with the nodeName field. The scheduler uses the nominatedNodeName field to keep track of the resources reserved for pods and also provides information to the user about preemptions in the cluster.
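To observe a preemption in progress, you can read the nominated node directly from the pending pod's status. A quick sketch, where <pod_name> is a placeholder for your higher-priority pending pod: USD oc get pod <pod_name> -o jsonpath='{.status.nominatedNodeName}' An empty result means that no node has been nominated for the pod.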
After the scheduler preempts a lower-priority pod, the scheduler honors the graceful termination period of the pod. If another node becomes available while scheduler is waiting for the lower-priority pod to terminate, the scheduler can schedule the higher-priority pod on that node. As a result, the nominatedNodeName field and nodeName field of the Pod spec might be different. Also, if the scheduler preempts pods on a node and is waiting for termination, and a pod with a higher-priority pod than the pending pod needs to be scheduled, the scheduler can schedule the higher-priority pod instead. In such a case, the scheduler clears the nominatedNodeName of the pending pod, making the pod eligible for another node. Preemption does not necessarily remove all lower-priority pods from a node. The scheduler can schedule a pending pod by removing a portion of the lower-priority pods. The scheduler considers a node for pod preemption only if the pending pod can be scheduled on the node. 2.9.2.1. Non-preempting priority classes Pods with the preemption policy set to Never are placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled stays in the scheduling queue until sufficient resources are free and it can be scheduled. Non-preempting pods, like other pods, are subject to scheduler back-off. This means that if the scheduler tries unsuccessfully to schedule these pods, they are retried with lower frequency, allowing other pods with lower priority to be scheduled before them. Non-preempting pods can still be preempted by other, high-priority pods. 2.9.2.2. Pod preemption and other scheduler settings If you enable pod priority and preemption, consider your other scheduler settings: Pod priority and pod disruption budget A pod disruption budget specifies the minimum number or percentage of replicas that must be up at a time. If you specify pod disruption budgets, OpenShift Container Platform respects them when preempting pods at a best effort level. The scheduler attempts to preempt pods without violating the pod disruption budget. If no such pods are found, lower-priority pods might be preempted despite their pod disruption budget requirements. Pod priority and pod affinity Pod affinity requires a new pod to be scheduled on the same node as other pods with the same label. If a pending pod has inter-pod affinity with one or more of the lower-priority pods on a node, the scheduler cannot preempt the lower-priority pods without violating the affinity requirements. In this case, the scheduler looks for another node to schedule the pending pod. However, there is no guarantee that the scheduler can find an appropriate node and pending pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. 2.9.2.3. Graceful termination of preempted pods When preempting a pod, the scheduler waits for the pod graceful termination period to expire, allowing the pod to finish working and exit. If the pod does not exit after the period, the scheduler kills the pod. This graceful termination period creates a time gap between the point that the scheduler preempts the pod and the time when the pending pod can be scheduled on the node. To minimize this gap, configure a small graceful termination period for lower-priority pods. 2.9.3. 
2.9.3. Configuring priority and preemption

You apply pod priority and preemption by creating a priority class object and associating pods with the priority class by using the priorityClassName in your pod specs.

Note: You cannot add a priority class directly to an existing scheduled pod.

Procedure

To configure your cluster to use priority and preemption:

Create one or more priority classes:

Create a YAML file similar to the following:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority 1
value: 1000000 2
preemptionPolicy: PreemptLowerPriority 3
globalDefault: false 4
description: "This priority class should be used for XYZ service pods only." 5

1 The name of the priority class object.
2 The priority value of the object.
3 Optional. Specifies whether this priority class is preempting or non-preempting. The preemption policy defaults to PreemptLowerPriority, which allows pods of that priority class to preempt lower-priority pods. If the preemption policy is set to Never, pods in that priority class are non-preempting.
4 Optional. Specifies whether this priority class should be used for pods without a priority class name specified. This field is false by default. Only one priority class with globalDefault set to true can exist in the cluster. If there is no priority class with globalDefault:true, the priority of pods with no priority class name is zero. Adding a priority class with globalDefault:true affects only pods created after the priority class is added and does not change the priorities of existing pods.
5 Optional. Describes which pods developers should use with this priority class. Enter an arbitrary text string.

Create the priority class:

$ oc create -f <file-name>.yaml

Create a pod spec to include the name of a priority class:

Create a YAML file similar to the following:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  priorityClassName: high-priority 1

1 Specify the priority class to use with this pod.

Create the pod:

$ oc create -f <file-name>.yaml

You can add the priority name directly to the pod configuration or to a pod template.

2.10. Placing pods on specific nodes using node selectors

A node selector specifies a map of key-value pairs. The rules are defined using custom labels on nodes and selectors specified in pods. For a pod to be eligible to run on a node, the node must carry each key-value pair that the pod specifies in its node selector. If you are using node affinity and node selectors in the same pod configuration, see the important considerations below.

2.10.1. Using node selectors to control pod placement

You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels.

You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.

To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. Any existing pods under that controlling object are recreated on a node with a matching label.
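As a sketch of the controlling-object approach (the Deployment name and labels here are illustrative), you can add a node selector with a strategic merge patch instead of editing the object by hand:

$ oc patch deployment/my-app --patch '{"spec":{"template":{"spec":{"nodeSelector":{"type":"user-node","region":"east"}}}}}'

The rollout triggered by this patch recreates the pods, and the scheduler places the new pods only on nodes that carry both labels.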
If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod.

Note: You cannot add a node selector directly to an existing scheduled pod.

Prerequisites

To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-7pwkc pod is controlled by the router-default-66d5cf9464 replica set:

$ oc describe pod router-default-66d5cf9464-7pwkc

Example output

kind: Pod
apiVersion: v1
metadata:
# ...
Name: router-default-66d5cf9464-7pwkc
Namespace: openshift-ingress
# ...
Controlled By: ReplicaSet/router-default-66d5cf9464
# ...

The web console lists the controlling object under ownerReferences in the pod YAML:

apiVersion: v1
kind: Pod
metadata:
  name: router-default-66d5cf9464-7pwkc
# ...
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: router-default-66d5cf9464
    uid: d81dd094-da26-11e9-a48a-128e7edf0312
    controller: true
    blockOwnerDeletion: true
# ...

Procedure

Add labels to a node by using a compute machine set or editing the node directly:

Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created:

Run the following command to add labels to a MachineSet object:

$ oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>":"<value>","<key>":"<value>"}}]' -n openshift-machine-api

For example:

$ oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api

Tip: You can alternatively apply the following YAML to add labels to a compute machine set:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: xf2bd-infra-us-east-2a
  namespace: openshift-machine-api
spec:
  template:
    spec:
      metadata:
        labels:
          region: "east"
          type: "user-node"
# ...

Verify that the labels are added to the MachineSet object by using the oc edit command:

For example:

$ oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api

Example MachineSet object

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
# ...
spec:
# ...
  template:
    metadata:
# ...
    spec:
      metadata:
        labels:
          region: east
          type: user-node
# ...

Add labels directly to a node:

Run the following command to add labels to the node:

$ oc label nodes <name> <key>=<value>

For example, to label a node:

$ oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east

Tip: You can alternatively apply the following YAML to add labels to a node:

kind: Node
apiVersion: v1
metadata:
  name: hello-node-6fbccf8d9
  labels:
    type: "user-node"
    region: "east"
# ...

Verify that the labels are added to the node:

$ oc get nodes -l type=user-node,region=east

Example output

NAME                          STATUS   ROLES    AGE   VERSION
ip-10-0-142-25.ec2.internal   Ready    worker   17m   v1.26.0

Add the matching node selector to a pod:

To add a node selector to existing and future pods, add a node selector to the controlling object for the pods:

Example ReplicaSet object with labels

kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: hello-node-6fbccf8d9
# ...
spec:
# ...
  template:
    metadata:
      creationTimestamp: null
      labels:
        ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
        pod-template-hash: 66d5cf9464
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/worker: ''
        type: user-node 1
# ...

1 Add the node selector.
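After the pods are recreated, you can confirm placement (a sketch, reusing the example labels above) by listing the pods together with the nodes they landed on and comparing the NODE column against the labeled nodes; the same check applies to the per-pod approach shown next:

$ oc get pods -o wide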
To add a node selector to a specific, new pod, add the selector to the Pod object directly:

Example Pod object with a node selector

apiVersion: v1
kind: Pod
metadata:
  name: hello-node-6fbccf8d9
# ...
spec:
  nodeSelector:
    region: east
    type: user-node
# ...

Note: You cannot add a node selector directly to an existing scheduled pod.

2.11. Run Once Duration Override Operator

2.11.1. Run Once Duration Override Operator overview

You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for.

2.11.1.1. About the Run Once Duration Override Operator

OpenShift Container Platform relies on run-once pods to perform tasks such as deploying a pod or performing a build. Run-once pods are pods that have a RestartPolicy of Never or OnFailure.

Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that those run-once pods can be active. After the time limit expires, the cluster will try to actively terminate those pods. The main reason to have such a limit is to prevent tasks such as builds from running for an excessive amount of time.

To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace.

If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used.

2.11.2. Run Once Duration Override Operator release notes

Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that run-once pods can be active. After the time limit expires, the cluster tries to terminate the run-once pods. The main reason to have such a limit is to prevent tasks such as builds from running for an excessive amount of time. To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace.

These release notes track the development of the Run Once Duration Override Operator for OpenShift Container Platform.

For an overview of the Run Once Duration Override Operator, see About the Run Once Duration Override Operator.

2.11.2.1. Run Once Duration Override Operator 1.0.2

Issued: 26 November 2024

The following advisory is available for the Run Once Duration Override Operator 1.0.2: RHEA-2024:9999

2.11.2.1.1. Bug fixes

This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs).

2.11.2.2. Run Once Duration Override Operator 1.0.1

Issued: 26 October 2023

The following advisory is available for the Run Once Duration Override Operator 1.0.1: RHSA-2023:5947

2.11.2.2.1. Bug fixes

This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs).

2.11.2.3. Run Once Duration Override Operator 1.0.0

Issued: 18 May 2023

The following advisory is available for the Run Once Duration Override Operator 1.0.0: RHEA-2023:2035

2.11.2.3.1. New features and enhancements

This is the initial, generally available release of the Run Once Duration Override Operator. For installation information, see Installing the Run Once Duration Override Operator.

2.11.3. Overriding the active deadline for run-once pods

You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for.
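The override works through the standard activeDeadlineSeconds pod field. As a sketch of what the Operator effectively enforces, here is a hand-set value on a single run-once pod, without the Operator; the pod name is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo          # illustrative name
spec:
  restartPolicy: Never         # makes this a run-once pod
  activeDeadlineSeconds: 600   # the pod is terminated after 10 minutes of activity
  containers:
  - name: busybox
    image: busybox:1.25
    command: ["sleep", "3600"]

After the deadline passes, the kubelet terminates the pod and its status reports DeadlineExceeded; the Operator automates setting this field for every run-once pod in an enabled namespace.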
By enabling the run-once duration override on a namespace, all future run-once pods created or updated in that namespace have their activeDeadlineSeconds field set to the value specified by the Run Once Duration Override Operator.

Note: If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used.

2.11.3.1. Installing the Run Once Duration Override Operator

You can use the web console to install the Run Once Duration Override Operator.

Prerequisites

You have access to the cluster with cluster-admin privileges.
You have access to the OpenShift Container Platform web console.

Procedure

Log in to the OpenShift Container Platform web console.

Create the required namespace for the Run Once Duration Override Operator.

Navigate to Administration → Namespaces and click Create Namespace.
Enter openshift-run-once-duration-override-operator in the Name field and click Create.

Install the Run Once Duration Override Operator.

Navigate to Operators → OperatorHub.
Enter Run Once Duration Override Operator into the filter box.
Select the Run Once Duration Override Operator and click Install.
On the Install Operator page:
The Update channel is set to stable, which installs the latest stable release of the Run Once Duration Override Operator.
Select A specific namespace on the cluster.
Choose openshift-run-once-duration-override-operator from the dropdown menu under Installed namespace.
Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update.
Click Install.

Create a RunOnceDurationOverride instance.

From the Operators → Installed Operators page, click Run Once Duration Override Operator.
Select the Run Once Duration Override tab and click Create RunOnceDurationOverride.
Edit the settings as necessary. Under the runOnceDurationOverride section, you can update the spec.activeDeadlineSeconds value, if required. The predefined value is 3600 seconds, or 1 hour.
Click Create.

Verification

Log in to the OpenShift CLI.

Verify all pods are created and running properly.

$ oc get pods -n openshift-run-once-duration-override-operator

Example output

NAME                                                   READY   STATUS    RESTARTS   AGE
run-once-duration-override-operator-7b88c676f6-lcxgc   1/1     Running   0          7m46s
runoncedurationoverride-62blp                          1/1     Running   0          41s
runoncedurationoverride-h8h8b                          1/1     Running   0          41s
runoncedurationoverride-tdsqk                          1/1     Running   0          41s

2.11.3.2. Enabling the run-once duration override on a namespace

To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace.

Prerequisites

The Run Once Duration Override Operator is installed.

Procedure

Log in to the OpenShift CLI.

Add the label to enable the run-once duration override to your namespace:

$ oc label namespace <namespace> \ 1
    runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true

1 Specify the namespace to enable the run-once duration override on.

After you enable the run-once duration override on this namespace, future run-once pods that are created in this namespace will have their activeDeadlineSeconds field set to the override value from the Run Once Duration Override Operator. Existing pods in this namespace will also have their activeDeadlineSeconds value set when they are updated.
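A quick way to list every namespace that currently has the override enabled (a sketch using the same label):

$ oc get namespaces -l runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true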
Verification

Create a test run-once pod in the namespace that you enabled the run-once duration override on:

apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: <namespace> 1
spec:
  restartPolicy: Never 2
  containers:
  - name: busybox
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      runAsNonRoot: true
      seccompProfile:
        type: "RuntimeDefault"
    image: busybox:1.25
    command:
    - /bin/sh
    - -ec
    - |
      while sleep 5; do date; done

1 Replace <namespace> with the name of your namespace.
2 The restartPolicy must be Never or OnFailure to be a run-once pod.

Verify that the pod has its activeDeadlineSeconds field set:

$ oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds

Example output

activeDeadlineSeconds: 3600

2.11.3.3. Updating the run-once active deadline override value

You can customize the override value that the Run Once Duration Override Operator applies to run-once pods. The predefined value is 3600 seconds, or 1 hour.

Prerequisites

You have access to the cluster with cluster-admin privileges.
You have installed the Run Once Duration Override Operator.

Procedure

Log in to the OpenShift CLI.

Edit the RunOnceDurationOverride resource:

$ oc edit runoncedurationoverride cluster

Update the activeDeadlineSeconds field:

apiVersion: operator.openshift.io/v1
kind: RunOnceDurationOverride
metadata:
# ...
spec:
  runOnceDurationOverride:
    spec:
      activeDeadlineSeconds: 1800 1
# ...

1 Set the activeDeadlineSeconds field to the desired value, in seconds.

Save the file to apply the changes. Any future run-once pods created in namespaces where the run-once duration override is enabled will have their activeDeadlineSeconds field set to this new value. Existing run-once pods in these namespaces will receive this new value when they are updated.

2.11.4. Uninstalling the Run Once Duration Override Operator

You can remove the Run Once Duration Override Operator from OpenShift Container Platform by uninstalling the Operator and removing its related resources.

2.11.4.1. Uninstalling the Run Once Duration Override Operator

You can use the web console to uninstall the Run Once Duration Override Operator. Uninstalling the Run Once Duration Override Operator does not unset the activeDeadlineSeconds field for run-once pods, but it will no longer apply the override value to future run-once pods.

Prerequisites

You have access to the cluster with cluster-admin privileges.
You have access to the OpenShift Container Platform web console.
You have installed the Run Once Duration Override Operator.

Procedure

Log in to the OpenShift Container Platform web console.

Navigate to Operators → Installed Operators.

Select openshift-run-once-duration-override-operator from the Project dropdown list.

Delete the RunOnceDurationOverride instance.

Click Run Once Duration Override Operator and select the Run Once Duration Override tab.
Click the Options menu next to the cluster entry and select Delete RunOnceDurationOverride.
In the confirmation dialog, click Delete.

Uninstall the Run Once Duration Override Operator.

Navigate to Operators → Installed Operators.
Click the Options menu next to the Run Once Duration Override Operator entry and click Uninstall Operator.
In the confirmation dialog, click Uninstall.

2.11.4.2. Uninstalling Run Once Duration Override Operator resources

Optionally, after uninstalling the Run Once Duration Override Operator, you can remove its related resources from your cluster.

Prerequisites

You have access to the cluster with cluster-admin privileges.
You have access to the OpenShift Container Platform web console.
You have uninstalled the Run Once Duration Override Operator.

Procedure

Log in to the OpenShift Container Platform web console.

Remove CRDs that were created when the Run Once Duration Override Operator was installed:

Navigate to Administration → CustomResourceDefinitions.
Enter RunOnceDurationOverride in the Name field to filter the CRDs.
Click the Options menu next to the RunOnceDurationOverride CRD and select Delete CustomResourceDefinition.
In the confirmation dialog, click Delete.

Delete the openshift-run-once-duration-override-operator namespace.

Navigate to Administration → Namespaces.
Enter openshift-run-once-duration-override-operator into the filter box.
Click the Options menu next to the openshift-run-once-duration-override-operator entry and select Delete Namespace.
In the confirmation dialog, enter openshift-run-once-duration-override-operator and click Delete.

Remove the run-once duration override label from the namespaces that it was enabled on.

Navigate to Administration → Namespaces.
Select your namespace.
Click Edit next to the Labels field.
Remove the runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true label and click Save. | [
"kind: Pod apiVersion: v1 metadata: name: example namespace: default selfLink: /api/v1/namespaces/default/pods/example uid: 5cc30063-0265780783bc resourceVersion: '165032' creationTimestamp: '2019-02-13T20:31:37Z' labels: app: hello-openshift 1 annotations: openshift.io/scc: anyuid spec: restartPolicy: Always 2 serviceAccountName: default imagePullSecrets: - name: default-dockercfg-5zrhb priority: 0 schedulerName: default-scheduler terminationGracePeriodSeconds: 30 nodeName: ip-10-0-140-16.us-east-2.compute.internal securityContext: 3 seLinuxOptions: level: 's0:c11,c10' containers: 4 - resources: {} terminationMessagePath: /dev/termination-log name: hello-openshift securityContext: capabilities: drop: - MKNOD procMount: Default ports: - containerPort: 8080 protocol: TCP imagePullPolicy: Always volumeMounts: 5 - name: default-token-wbqsl readOnly: true mountPath: /var/run/secrets/kubernetes.io/serviceaccount 6 terminationMessagePolicy: File image: registry.redhat.io/openshift4/ose-ogging-eventrouter:v4.3 7 serviceAccount: default 8 volumes: 9 - name: default-token-wbqsl secret: secretName: default-token-wbqsl defaultMode: 420 dnsPolicy: ClusterFirst status: phase: Pending conditions: - type: Initialized status: 'True' lastProbeTime: null lastTransitionTime: '2019-02-13T20:31:37Z' - type: Ready status: 'False' lastProbeTime: null lastTransitionTime: '2019-02-13T20:31:37Z' reason: ContainersNotReady message: 'containers with unready status: [hello-openshift]' - type: ContainersReady status: 'False' lastProbeTime: null lastTransitionTime: '2019-02-13T20:31:37Z' reason: ContainersNotReady message: 'containers with unready status: [hello-openshift]' - type: PodScheduled status: 'True' lastProbeTime: null lastTransitionTime: '2019-02-13T20:31:37Z' hostIP: 10.0.140.16 startTime: '2019-02-13T20:31:37Z' containerStatuses: - name: hello-openshift state: waiting: reason: ContainerCreating lastState: {} ready: false restartCount: 0 image: openshift/hello-openshift imageID: '' qosClass: BestEffort",
"oc project <project-name>",
"oc get pods",
"oc get pods",
"NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>",
"oc adm top pods",
"oc adm top pods -n openshift-console",
"NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi",
"oc adm top pod --selector=''",
"oc adm top pod --selector='name=my-pod'",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }",
"oc create -f <file_or_dir_path>",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod",
"oc create -f </path/to/file> -n <project_name>",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1",
"oc create -f pod-disruption-budget.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1",
"oc create -f <file-name>.yaml",
"oc autoscale deployment/image-registry --min=5 --max=7 --cpu-percent=75",
"horizontalpodautoscaler.autoscaling/image-registry autoscaled",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: image-registry namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: image-registry targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0",
"oc get deployment image-registry",
"NAME REVISION DESIRED CURRENT TRIGGERED BY image-registry 1 5 5 config",
"type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60",
"behavior: scaleDown: stabilizationWindowSeconds: 300",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: minReplicas: 20 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled",
"oc edit hpa hpa-resource-metrics-memory",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior: '{\"ScaleUp\":{\"StabilizationWindowSeconds\":0,\"SelectPolicy\":\"Max\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":15},{\"Type\":\"Percent\",\"Value\":100,\"PeriodSeconds\":15}]}, \"ScaleDown\":{\"StabilizationWindowSeconds\":300,\"SelectPolicy\":\"Min\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":60},{\"Type\":\"Percent\",\"Value\":10,\"PeriodSeconds\":60}]}}'",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc autoscale <object_type>/<name> \\ 1 --min <number> \\ 2 --max <number> \\ 3 --cpu-percent=<percent> 4",
"oc autoscale deployment/image-registry --min=5 --max=7 --cpu-percent=75",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11",
"oc create -f <file-name>.yaml",
"oc get hpa cpu-autoscale",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler",
"Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none>",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max",
"oc create -f <file-name>.yaml",
"oc create -f hpa.yaml",
"horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created",
"oc get hpa hpa-resource-metrics-memory",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m",
"oc describe hpa hpa-resource-metrics-memory",
"Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events:",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind \"ReplicationController\" in group \"apps\" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind \"ReplicationController\" in group \"apps\"",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc describe hpa <pod-name>",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc get all -n openshift-vertical-pod-autoscaler",
"NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s",
"resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi",
"resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: \"2021-04-21T19:29:49Z\" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: \"142172\" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Initial\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Off\" 3",
"oc get vpa <vpa-name> --output yaml",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\"",
"spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name>",
"apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true",
"oc get pods",
"NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: \"apps/v1\" kind: Deployment 2 name: frontend",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\" recommenders: 5 - name: my-recommender",
"oc create -f <file-name>.yaml",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"oc delete namespace openshift-vertical-pod-autoscaler",
"oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io",
"oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io",
"oc delete crd verticalpodautoscalers.autoscaling.k8s.io",
"oc delete MutatingWebhookConfiguration vpa-webhook-config",
"oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB",
"apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com",
"oc create sa <service_account_name> -n <your_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3",
"oc apply -f service-account-token-secret.yaml",
"oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1",
"ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA",
"curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2",
"apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1",
"kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f <file-name>.yaml",
"oc get secrets",
"NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m",
"oc describe secret my-cert",
"Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes",
"apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"oc create configmap <configmap_name> [options]",
"oc create configmap game-config --from-file=example-files/",
"oc describe configmaps game-config",
"Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config --from-file=example-files/",
"oc get configmaps game-config -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"oc get configmaps game-config-2 -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985",
"oc get configmaps game-config-3 -o yaml",
"apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985",
"oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm",
"oc get configmaps special-config -o yaml",
"apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"oc get priorityclasses",
"NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: \"This priority class should be used for XYZ service pods only.\" 5",
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent priorityClassName: high-priority 1",
"oc create -f <file-name>.yaml",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.26.0",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc get pods -n openshift-run-once-duration-override-operator",
"NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s",
"oc label namespace <namespace> \\ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true",
"apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done",
"oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds",
"activeDeadlineSeconds: 3600",
"oc edit runoncedurationoverride cluster",
"apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/nodes/working-with-pods |
Chapter 6. Red Hat Virtualization 4.3 Batch Update 4 (ovirt-4.3.7) | Chapter 6. Red Hat Virtualization 4.3 Batch Update 4 (ovirt-4.3.7) 6.1. Red Hat Virtualization Host 4.3.7 Image The following table outlines the packages included in the redhat-virtualization-host-4.3.7 image. Table 6.1. Red Hat Virtualization Host 4.3.7 Image Name Version Advisory PyYAML 3.10-11.el7 RHEA-2018:2086 Red_Hat_Enterprise_Linux-Release_Notes-7-en-US 7-2.el7 RHEA-2015:2461 acl 2.2.51-14.el7 RHBA-2018:0772 aide 0.15.1-13.el7 RHBA-2017:2050 alsa-firmware 1.0.28-2.el7 RHBA-2015:0461 alsa-tools 1.1.0-1.el7 RHEA-2016:2437 attr 2.4.46-13.el7 RHBA-2018:0768 authconfig 6.2.8-30.el7 RHSA-2017:2285 avahi 0.6.31-19.el7 RHBA-2018:1001 boost 1.53.0-27.el7 RHBA-2017:3149 btrfs-progs 4.9.1-1.el7 RHBA-2017:2268 bzip2 1.0.6-13.el7 RHBA-2015:2156 cdrkit 1.1.11-25.el7 RHBA-2018:3109 ceph-common 10.2.5-4.el7 RHBA-2018:3189 checkpolicy 2.5-8.el7 RHBA-2018:3099 chkconfig 1.7.4-1.el7 RHBA-2017:2164 clevis 7-8.el7 RHBA-2018:3298 coolkey 1.1.0-40.el7 RHBA-2018:3263 cpio 2.11-27.el7 RHBA-2018:0693 cracklib 2.9.0-11.el7 RHEA-2017:1091 cyrus-sasl 2.1.26-23.el7 RHBA-2018:0777 ding-libs 0.6.1-32.el7 RHBA-2018:3160 dmraid 1.0.0.rc16-28.el7 RHBA-2016:2552 dosfstools 3.0.20-10.el7 RHBA-2018:3069 ebtables 2.0.10-16.el7 RHBA-2018:0941 efibootmgr 17-2.el7 RHEA-2018:3171 emacs 24.3-22.el7 RHBA-2018:3166 expat 2.1.0-10.el7_3 RHSA-2016:2824 file 5.11-35.el7 RHBA-2018:3079 filesystem 3.2-25.el7 RHEA-2018:0838 findutils 4.5.11-6.el7 RHBA-2018:3076 fipscheck 1.4.1-6.el7 RHBA-2017:1971 fuse 2.9.2-11.el7 RHSA-2018:3324 gawk 4.0.2-4.el7_3.1 RHBA-2017:1618 gettext 0.19.8.1-2.el7 RHBA-2017:2118 glib-networking 2.56.1-1.el7 RHSA-2018:3140 gluster-ansible-cluster 1.0-1.el7rhgs RHEA-2018:3494 gluster-ansible-maintenance 1.0.1-1.el7rhgs RHEA-2018:3494 gmp 6.0.0-15.el7 RHBA-2017:2069 gnupg2 2.0.22-5.el7_5 RHSA-2018:2181 gobject-introspection 1.56.1-1.el7 RHSA-2018:3140 gperftools 2.6.1-1.el7 RHBA-2018:0870 grep 2.20-3.el7 RHBA-2017:2200 gsettings-desktop-schemas 3.28.0-2.el7 RHSA-2018:3140 gzip 1.5-10.el7 RHBA-2018:0719 hivex 1.3.10-6.9.el7 RHBA-2018:0787 iotop 0.6-4.el7 RHBA-2018:3301 iperf3 3.1.7-2.el7 RHEA-2017:2065 ipmitool 1.8.18-7.el7 RHBA-2018:0832 iputils 20160308-10.el7 RHBA-2017:1987 jansson 2.10-1.el7 RHBA-2017:2195 jose 10-1.el7 RHBA-2018:0819 json-c 0.11-4.el7_0 RHSA-2014:0703 json-glib 1.4.2-2.el7 RHSA-2018:3140 kbd 1.15.5-15.el7 RHBA-2018:3219 less 458-9.el7 RHBA-2015:1521 libXext 1.3.3-3.el7 RHBA-2015:2082 libXfixes 5.0.3-1.el7 RHSA-2017:1865 libXxf86vm 1.1.4-1.el7 RHSA-2017:1865 libaio 0.3.109-13.el7 RHBA-2015:2162 libbytesize 1.2-1.el7 RHBA-2018:0868 libcacard 2.5.2-2.el7 RHEA-2016:2190 libcap-ng 0.7.5-4.el7 RHBA-2015:2161 libcroco 0.6.12-4.el7 RHSA-2018:3140 libepoxy 1.5.2-1.el7 RHSA-2018:3059 libfastjson 0.99.4-3.el7 RHEA-2018:3135 libffi 3.0.13-18.el7 RHBA-2016:2385 libgcrypt 1.5.3-14.el7 RHBA-2017:2006 libglvnd 1.0.1-0.8.git5baa1e5.el7 RHSA-2018:3059 libidn 1.28-4.el7 RHBA-2015:2100 libiscsi 1.9.0-7.el7 RHBA-2016:2416 liblognorm 2.0.2-3.el7 RHEA-2018:3135 libnetfilter_conntrack 1.0.6-1.el7_3 RHBA-2017:1301 libnfsidmap 0.25-19.el7 RHBA-2018:1016 libnl3 3.2.28-4.el7 RHSA-2017:2299 libpcap 1.5.3-11.el7 RHEA-2018:0694 libpciaccess 0.14-1.el7 RHBA-2018:0736 libpng 1.5.13-7.el7_2 RHSA-2015:2596 libproxy 0.4.11-11.el7 RHBA-2018:0746 libpwquality 1.2.3-5.el7 RHBA-2018:1014 libseccomp 2.3.1-3.el7 RHEA-2017:2165 libselinux 2.5-14.1.el7 RHBA-2018:3084 libsemanage 2.5-14.el7 RHBA-2018:3088 libsepol 2.5-10.el7 RHBA-2018:3077 libssh 0.7.1-7.el7 
RHBA-2018:3712 libtar 1.2.11-29.el7 RHBA-2015:1014 libtasn1 4.10-1.el7 RHSA-2017:1860 libusbx 1.0.21-1.el7 RHBA-2018:0762 libuser 0.60-9.el7 RHBA-2018:1029 libvirt-python 4.5.0-1.el7 RHEA-2018:3204 libxcb 1.13-1.el7 RHSA-2018:3059 libxml2 2.9.1-6.el7_2.3 RHSA-2016:1292 libxshmfence 1.2-1.el7 RHBA-2015:2082 libxslt 1.1.28-5.el7 RHEA-2019:0045 libyaml 0.1.4-11.el7_0 RHSA-2015:0100 logrotate 3.8.6-17.el7 RHBA-2018:3202 lsof 4.87-6.el7 RHBA-2018:3046 lsscsi 0.27-6.el7 RHBA-2017:2001 lua 5.1.4-15.el7 RHBA-2016:2568 luksmeta 8-2.el7 RHEA-2018:3325 lzo 2.06-8.el7 RHBA-2015:2112 m2crypto 0.21.1-17.el7 RHBA-2015:2165 mailx 12.5-19.el7 RHBA-2018:0779 man-db 2.6.3-11.el7 RHBA-2018:3060 memtest86+ 5.01-2.el7 RHBA-2016:2256 mom 0.5.12-1.el7ev RHEA-2018:2620 mozjs17 17.0.0-20.el7 RHBA-2018:0745 mpfr 3.1.1-4.el7 RHEA-2017:1115 ncurses 5.9-14.20130511.el7_4 RHBA-2017:2586 netcf 0.2.8-4.el7 RHBA-2017:2220 nettle 2.7.1-8.el7 RHSA-2016:2582 numad 0.5-18.20150602git.el7 RHBA-2018:0996 oddjob 0.31.5-4.el7 RHBA-2015:0446 openldap 2.4.44-21.el7_6 RHBA-2019:0191 os-prober 1.58-9.el7 RHBA-2016:2351 osinfo-db-tools 1.1.0-1.el7 RHBA-2017:2113 p11-kit 0.23.5-3.el7 RHEA-2017:1981 pam 1.1.8-22.el7 RHBA-2018:0718 pam_pkcs11 0.6.2-30.el7 RHBA-2018:3258 pciutils 3.5.1-3.el7 RHBA-2018:0950 pcre 8.32-17.el7 RHBA-2017:1909 pcsc-lite 1.8.8-8.el7 RHBA-2018:3257 perl 5.16.3-294.el7_6 RHSA-2019:0109 perl-Getopt-Long 2.40-3.el7 RHBA-2018:0752 perl-Socket 2.010-4.el7 RHBA-2016:2196 pinentry 0.8.1-17.el7 RHBA-2016:2226 pixman 0.34.0-1.el7 RHBA-2016:2293 postfix 2.10.1-7.el7 RHBA-2018:3085 pth 2.0.7-23.el7 RHBA-2015:2085 pyOpenSSL 17.3.0-4.el7ost RHSA-2019:0212 pyparted 3.9-15.el7 RHBA-2018:0923 python-asn1crypto 0.23.0-2.el7ost RHBA-2018:3633 python-augeas 0.5.0-2.el7 RHBA-2015:2133 python-backports 1.0-8.el7 RHBA-2015:0576 python-backports-ssl_match_hostname 3.5.0.1-1.el7 RHBA-2018:0930 python-cffi 1.11.2-1.el7ost RHBA-2018:3633 python-cryptography 2.1.4-3.el7ost RHBA-2018:3633 python-daemon 1.6-5.el7 RHEA-2018:2620 python-dmidecode 3.12.2-3.el7 RHBA-2018:3314 python-dns 1.12.0-4.20150617git465785f.el7 RHBA-2017:1945 python-enum34 1.0.4-1.el7 RHEA-2015:2299 python-futures 3.1.1-5.el7 RHEA-2018:3162 python-gssapi 1.2.0-3.el7 RHBA-2017:2269 python-idna 2.5-1.el7ost RHBA-2018:3633 python-ipaddress 1.0.16-2.el7 RHBA-2016:2290 python-jmespath 0.9.0-4.el7ae RHEA-2018:2936 python-jwcrypto 0.4.2-1.el7 RHEA-2018:0723 python-ldap 2.4.15-2.el7 RHBA-2015:0531 python-lockfile 0.9.1-5.el7 RHEA-2019:0045 python-netaddr 0.7.19-5.el7ost RHEA-2019:0045 python-netifaces 0.10.4-3.el7 RHBA-2016:2267 python-nss 0.16.0-3.el7 RHBA-2015:2357 python-paramiko 2.1.1-9.el7 RHSA-2018:3347 python-passlib 1.6.5-1.1.el7 RHEA-2018:2936 python-pexpect 4.6-1.el7at RHSA-2019:0212 python-ply 3.4-11.el7 RHBA-2017:2304 python-prettytable 0.7.2-3.el7 RHEA-2019:0045 python-pthreading 0.1.3-3.el7ev RHEA-2018:2620 python-ptyprocess 0.5.2-3.el7at RHSA-2019:0212 python-pyasn1 0.1.9-7.el7 RHEA-2017:1245 python-pycparser 2.14-1.el7 RHEA-2015:2331 python-pyudev 0.15-9.el7 RHBA-2017:2188 python-qrcode 5.0.1-1.el7 RHEA-2015:0488 python-schedutils 0.4-6.el7 RHEA-2016:2393 python-setuptools 0.9.8-7.el7 RHBA-2017:1900 python-six 1.10.0-9.el7ost RHEA-2019:0045 python-slip 0.4.0-4.el7 RHBA-2018:0728 python-urlgrabber 3.10-9.el7 RHEA-2018:3130 python-webob 1.2.3-7.el7 RHEA-2018:1530 python-yubico 1.2.3-1.el7 RHBA-2015:2304 pyusb 1.0.0-0.11.b1.el7 RHEA-2015:0500 radvd 2.17-3.el7 RHBA-2018:3027 redhat-support-lib-python 0.9.7-6.el7 RHBA-2016:2516 rhn-client-tools 2.0.2-24.el7 
RHBA-2018:3328 rhnlib 2.5.65-8.el7 RHBA-2018:3318 rhnsd 5.0.13-10.el7 RHBA-2018:0759 safelease 1.0-7.el7ev RHEA-2018:2620 satyr 0.13-15.el7 RHBA-2018:3285 screen 4.1.0-0.25.20120314git3c2946.el7 RHBA-2018:0834 scrub 2.5.2-7.el7 RHBA-2017:2216 seabios 1.11.0-2.el7 RHBA-2018:0814 setools 3.3.8-4.el7 RHBA-2018:3091 setup 2.8.71-10.el7 RHSA-2018:3249 shared-mime-info 1.8-4.el7 RHBA-2018:0769 socat 1.7.3.2-2.el7 RHBA-2017:2049 sqlite 3.7.17-8.el7 RHBA-2015:2150 squashfs-tools 4.3-0.21.gitaae0aff4.el7 RHBA-2015:0582 sshpass 1.06-2.el7 RHBA-2018:0489 supermin 5.1.19-1.el7 RHEA-2018:0792 syslinux 4.05-15.el7 RHEA-2018:3336 tar 1.26-35.el7 RHBA-2018:3300 telnet 0.17-64.el7 RHEA-2017:1881 texinfo 5.1-5.el7 RHBA-2018:0823 trousers 0.3.14-2.el7 RHBA-2017:2252 unbound 1.6.6-1.el7 RHBA-2018:0743 usbredir 0.7.1-3.el7 RHBA-2018:0672 userspace-rcu 0.7.9-2.el7rhgs RHBA-2016:1755 vhostmd 0.5-13.el7 RHBA-2018:3296 virt-what 1.18-4.el7 RHBA-2018:0896 wayland 1.15.0-1.el7 RHSA-2018:3140 wpa_supplicant 2.6-12.el7 RHSA-2018:3107 xdg-utils 1.1.0-0.17.20120809git.el7 RHBA-2016:2246 xz 5.2.2-1.el7 RHEA-2016:2198 yum-rhn-plugin 2.0.1-10.el7 RHBA-2018:0759 zip 3.0-11.el7 RHBA-2016:2294 zlib 1.2.7-18.el7 RHBA-2018:3299 6.2. Red Hat Virtualization Manager v4.3 (RHEL 7 Server) (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4.3-manager-rpms repository. Table 6.2. Red Hat Virtualization Manager v4.3 (RHEL 7 Server) (RPMs) Name Version Advisory otopi-common 1.8.4-1.el7ev RHBA-2019:4233 otopi-debug-plugins 1.8.4-1.el7ev RHBA-2019:4233 otopi-java 1.8.4-1.el7ev RHBA-2019:4233 otopi-javadoc 1.8.4-1.el7ev RHBA-2019:4233 ovirt-ansible-cluster-upgrade 1.1.14-1.el7ev RHBA-2019:4229 ovirt-ansible-infra 1.1.13-1.el7ev RHBA-2019:4229 ovirt-ansible-vm-infra 1.1.22-1.el7ev RHBA-2019:4229 ovirt-engine 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-backend 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-dbscripts 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-extensions-api-impl 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-extensions-api-impl-javadoc 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-health-check-bundler 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-metrics 1.3.5.1-1.el7ev RHBA-2019:4229 ovirt-engine-restapi 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-setup 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-setup-base 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-setup-plugin-cinderlib 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-setup-plugin-ovirt-engine 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-setup-plugin-ovirt-engine-common 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-setup-plugin-vmconsole-proxy-helper 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-setup-plugin-websocket-proxy 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-tools 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-tools-backup 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-vmconsole-proxy-helper 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-webadmin-portal 4.3.7.2-1 RHBA-2019:4229 ovirt-engine-websocket-proxy 4.3.7.2-1 RHBA-2019:4229 ovirt-host-deploy-common 1.8.4-1.el7ev RHBA-2019:4233 ovirt-host-deploy-java 1.8.4-1.el7ev RHBA-2019:4233 ovirt-provider-ovn 1.2.27-1.el7ev RHBA-2019:4233 python2-otopi 1.8.4-1.el7ev RHBA-2019:4233 python2-otopi-devtools 1.8.4-1.el7ev RHBA-2019:4233 python2-ovirt-engine-lib 4.3.7.2-1 RHBA-2019:4229 python2-ovirt-host-deploy 1.8.4-1.el7ev RHBA-2019:4233 rhv-log-collector-analyzer 0.2.14-0.el7ev RHBA-2019:4229 rhvm 4.3.7.2-1 RHBA-2019:4229 rhvm-setup-plugins 4.3.5-1.el7ev RHBA-2019:4229 6.3. 
Red Hat Virtualization 4 Management Agents (for RHEL 7 Server for IBM POWER9) RPMs The following table outlines the packages included in the rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms repository. Table 6.3. Red Hat Virtualization 4 Management Agents (for RHEL 7 Server for IBM POWER9) RPMs Name Version Advisory otopi-common 1.8.4-1.el7ev RHBA-2019:4233 otopi-debug-plugins 1.8.4-1.el7ev RHBA-2019:4233 otopi-java 1.8.4-1.el7ev RHBA-2019:4233 otopi-javadoc 1.8.4-1.el7ev RHBA-2019:4233 ovirt-host 4.3.5-1.el7ev RHBA-2019:4230 ovirt-host-dependencies 4.3.5-1.el7ev RHBA-2019:4230 ovirt-host-deploy-common 1.8.4-1.el7ev RHBA-2019:4233 ovirt-host-deploy-java 1.8.4-1.el7ev RHBA-2019:4233 ovirt-host-deploy-javadoc 1.8.4-1.el7ev RHBA-2019:4233 ovirt-provider-ovn-driver 1.2.27-1.el7ev RHBA-2019:4233 python2-otopi 1.8.4-1.el7ev RHBA-2019:4233 python2-otopi-devtools 1.8.4-1.el7ev RHBA-2019:4233 python2-ovirt-host-deploy 1.8.4-1.el7ev RHBA-2019:4233 vdsm 4.30.38-1.el7ev RHBA-2019:4230 vdsm-api 4.30.38-1.el7ev RHBA-2019:4230 vdsm-client 4.30.38-1.el7ev RHBA-2019:4230 vdsm-common 4.30.38-1.el7ev RHBA-2019:4230 vdsm-gluster 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-checkips 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-cpuflags 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-ethtool-options 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-extra-ipv4-addrs 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-fcoe 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-localdisk 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-macspoof 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-nestedvt 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-openstacknet 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-vhostmd 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-vmfex-dev 4.30.38-1.el7ev RHBA-2019:4230 vdsm-http 4.30.38-1.el7ev RHBA-2019:4230 vdsm-jsonrpc 4.30.38-1.el7ev RHBA-2019:4230 vdsm-network 4.30.38-1.el7ev RHBA-2019:4230 vdsm-python 4.30.38-1.el7ev RHBA-2019:4230 vdsm-yajsonrpc 4.30.38-1.el7ev RHBA-2019:4230 6.4. Red Hat Virtualization 4 Management Agents RHEL 7 for IBM Power (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4-mgmt-agent-for-power-le-rpms repository. Table 6.4. 
Red Hat Virtualization 4 Management Agents RHEL 7 for IBM Power (RPMs) Name Version Advisory otopi-common 1.8.4-1.el7ev RHBA-2019:4233 otopi-debug-plugins 1.8.4-1.el7ev RHBA-2019:4233 otopi-java 1.8.4-1.el7ev RHBA-2019:4233 otopi-javadoc 1.8.4-1.el7ev RHBA-2019:4233 ovirt-host 4.3.5-1.el7ev RHBA-2019:4230 ovirt-host-dependencies 4.3.5-1.el7ev RHBA-2019:4230 ovirt-host-deploy-common 1.8.4-1.el7ev RHBA-2019:4233 ovirt-host-deploy-java 1.8.4-1.el7ev RHBA-2019:4233 ovirt-host-deploy-javadoc 1.8.4-1.el7ev RHBA-2019:4233 ovirt-provider-ovn-driver 1.2.27-1.el7ev RHBA-2019:4233 python2-otopi 1.8.4-1.el7ev RHBA-2019:4233 python2-otopi-devtools 1.8.4-1.el7ev RHBA-2019:4233 python2-ovirt-host-deploy 1.8.4-1.el7ev RHBA-2019:4233 vdsm 4.30.38-1.el7ev RHBA-2019:4230 vdsm-api 4.30.38-1.el7ev RHBA-2019:4230 vdsm-client 4.30.38-1.el7ev RHBA-2019:4230 vdsm-common 4.30.38-1.el7ev RHBA-2019:4230 vdsm-gluster 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-checkips 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-cpuflags 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-ethtool-options 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-extra-ipv4-addrs 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-fcoe 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-localdisk 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-macspoof 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-nestedvt 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-openstacknet 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-vhostmd 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-vmfex-dev 4.30.38-1.el7ev RHBA-2019:4230 vdsm-http 4.30.38-1.el7ev RHBA-2019:4230 vdsm-jsonrpc 4.30.38-1.el7ev RHBA-2019:4230 vdsm-network 4.30.38-1.el7ev RHBA-2019:4230 vdsm-python 4.30.38-1.el7ev RHBA-2019:4230 vdsm-yajsonrpc 4.30.38-1.el7ev RHBA-2019:4230 6.5. Red Hat Virtualization 4 Management Agents for RHEL 7 (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4-mgmt-agent-rpms repository. Table 6.5. 
Red Hat Virtualization 4 Management Agents for RHEL 7 (RPMs) Name Version Advisory otopi-common 1.8.4-1.el7ev RHBA-2019:4233 otopi-debug-plugins 1.8.4-1.el7ev RHBA-2019:4233 otopi-java 1.8.4-1.el7ev RHBA-2019:4233 otopi-javadoc 1.8.4-1.el7ev RHBA-2019:4233 ovirt-ansible-hosted-engine-setup 1.0.32-1.el7ev RHBA-2019:4233 ovirt-host 4.3.5-1.el7ev RHBA-2019:4230 ovirt-host-dependencies 4.3.5-1.el7ev RHBA-2019:4230 ovirt-host-deploy-common 1.8.4-1.el7ev RHBA-2019:4233 ovirt-host-deploy-java 1.8.4-1.el7ev RHBA-2019:4233 ovirt-host-deploy-javadoc 1.8.4-1.el7ev RHBA-2019:4233 ovirt-hosted-engine-ha 2.3.6-1.el7ev RHBA-2019:4230 ovirt-provider-ovn-driver 1.2.27-1.el7ev RHBA-2019:4233 python2-otopi 1.8.4-1.el7ev RHBA-2019:4233 python2-otopi-devtools 1.8.4-1.el7ev RHBA-2019:4233 python2-ovirt-host-deploy 1.8.4-1.el7ev RHBA-2019:4233 rhvm-appliance 4.3-20191127.0.el7 RHBA-2019:4232 vdsm 4.30.38-1.el7ev RHBA-2019:4230 vdsm-api 4.30.38-1.el7ev RHBA-2019:4230 vdsm-client 4.30.38-1.el7ev RHBA-2019:4230 vdsm-common 4.30.38-1.el7ev RHBA-2019:4230 vdsm-gluster 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-checkips 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-cpuflags 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-ethtool-options 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-extra-ipv4-addrs 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-fcoe 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-localdisk 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-macspoof 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-nestedvt 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-openstacknet 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-vhostmd 4.30.38-1.el7ev RHBA-2019:4230 vdsm-hook-vmfex-dev 4.30.38-1.el7ev RHBA-2019:4230 vdsm-http 4.30.38-1.el7ev RHBA-2019:4230 vdsm-jsonrpc 4.30.38-1.el7ev RHBA-2019:4230 vdsm-network 4.30.38-1.el7ev RHBA-2019:4230 vdsm-python 4.30.38-1.el7ev RHBA-2019:4230 vdsm-yajsonrpc 4.30.38-1.el7ev RHBA-2019:4230 6.6. Red Hat Virtualization Host 7 Build (RPMs) The following table outlines the packages included in the rhel-7-server-rhvh-4-build-rpms repository. Table 6.6. Red Hat Virtualization Host 7 Build (RPMs) Name Version Advisory imgbased 1.1.13-0.1.el7ev RHBA-2019:4231 ovirt-node-ng-nodectl 4.3.7-0.20191031.0.el7ev RHBA-2019:4231 python-imgbased 1.1.13-0.1.el7ev RHBA-2019:4231 python2-ovirt-node-ng-nodectl 4.3.7-0.20191031.0.el7ev RHBA-2019:4231 redhat-release-virtualization-host 4.3.7-0.el7ev RHBA-2019:4231 redhat-virtualization-host-image-update 4.3.7-20191128.0.el7_7 RHBA-2019:4231 redhat-virtualization-host-image-update-placeholder 4.3.7-0.el7ev RHBA-2019:4231 6.7. Red Hat Virtualization Host 7 (RPMs) The following table outlines the packages included in the rhel-7-server-rhvh-4-rpms repository. Table 6.7. Red Hat Virtualization Host 7 (RPMs) Name Version Advisory ovirt-ansible-hosted-engine-setup 1.0.32-1.el7ev RHBA-2019:4233 redhat-virtualization-host-image-update 4.3.7-20191128.0.el7_7 RHBA-2019:4231 rhvm-appliance 4.3-20191127.0.el7 RHBA-2019:4232 6.8. Red Hat Virtualization 4 Tools for RHEL 8 Power, little endian (RPMs) The following table outlines the packages included in the rhv-4-tools-for-rhel-8-ppc64le-rpms repository. Table 6.8. Red Hat Virtualization 4 Tools for RHEL 8 Power, little endian (RPMs) Name Version Advisory ovirt-ansible-hosted-engine-setup 1.0.32-1.el8ev RHBA-2019:4233 python2-jmespath 0.9.0-11.el8ost RHBA-2019:4234 python3-jmespath 0.9.0-11.el8ost RHBA-2019:4234 python3-passlib 1.7.0-5.el8ost RHBA-2019:4234 6.9. 
Red Hat Virtualization 4 Tools for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the rhv-4-tools-for-rhel-8-x86_64-rpms repository. Table 6.9. Red Hat Virtualization 4 Tools for RHEL 8 x86_64 (RPMs) Name Version Advisory ovirt-ansible-hosted-engine-setup 1.0.32-1.el8ev RHBA-2019:4233 python2-jmespath 0.9.0-11.el8ost RHBA-2019:4234 python3-jmespath 0.9.0-11.el8ost RHBA-2019:4234 python3-passlib 1.7.0-5.el8ost RHBA-2019:4234
28.5. Managing Replication Agreements Between IdM Servers
Information is shared between the IdM servers and replicas using multi-master replication. What this means is that servers and replicas all receive updates and, therefore, are data masters. The domain information is copied between the servers and replicas using replication. As replicas are added to the domain, mutual replication agreements are automatically created between the replica and the server it is based on. Additional replication agreements can be created between other replicas and servers, or the configuration of a replication agreement can be changed, using the ipa-replica-manage command. When a replica is created, the replica install script creates two replication agreements: one going from the master server to the replica and one going from the replica to the master server.
Figure 28.1. Server and Replica Agreements
As more replicas and servers are added to the domain, there can be replicas and servers that have replication agreements to other servers and replicas but not between each other. For example, the first IdM server is Server A. Then, the admin creates Replica B, and the install script creates a Server A => Replica B replication agreement and a Replica B => Server A replication agreement. Next, the admin creates Replica C based on Server A. The install script creates a Server A => Replica C replication agreement and a Replica C => Server A replication agreement. Replica B and Replica C both have replication agreements with Server A - but they do not have agreements with each other. For data availability, consistency, failover tolerance, and performance, it can be beneficial to create a pair of replication agreements between Replica B and Replica C, even though their data will eventually be replicated over to each other through replication with Server A.
28.5.1. Listing Replication Agreements
The ipa-replica-manage command can list all of the servers and replicas in the replication topology, using the list command:

ipa-replica-manage list
srv1.example.com
srv2.example.com
srv3.example.com
srv4.example.com

After getting the server/replica list, it is possible to list the replication agreements for a given server. These are the other servers/replicas to which the specified server sends updates.

ipa-replica-manage list srv1.example.com
srv2.example.com
srv3.example.com

28.5.2. Creating and Removing Replication Agreements
Replication agreements are created by connecting one server to another server.

ipa-replica-manage connect server1 server2

If only one server is given, the replication agreements are created between the local host and the specified server. For example:

ipa-replica-manage connect srv2.example.com srv4.example.com

Replication occurs over standard LDAP; to enable SSL, include the CA certificate for the local host (or the specified server1). The CA certificate is then installed in the remote server's certificate database to enable TLS/SSL connections. For example:

ipa-replica-manage connect --cacert=/etc/ipa/ca.crt srv2.example.com srv4.example.com

To remove a replication agreement between specific servers/replicas, use the disconnect command:

ipa-replica-manage disconnect srv2.example.com srv4.example.com

Using the disconnect command removes that one replication agreement but leaves both server/replica instances in the overall replication topology. To remove a server entirely from the IdM replication topology, with all its data (functionally removing it from the IdM domain as a server), use the del command:

ipa-replica-manage del srv2.example.com

28.5.3. Forcing Replication
Replication between servers and replicas occurs on a schedule. Although replication is frequent, there can be times when it is necessary to initiate the replication operation manually. For example, if a server is being taken offline for maintenance, it is necessary to flush all of the queued replication changes out of its changelog before taking it down. To initiate a replication update manually, use the force-sync command. The server which receives the update is the local server; the server which sends the updates is specified in the --from option.

ipa-replica-manage force-sync --from srv1.example.com

28.5.4. Reinitializing IdM Servers
When a replica is first created, the database of the master server is copied, completely, over to the replica database. This process is called initialization. If a server/replica is offline for a long period of time or there is some kind of corruption in its database, then the server can be re-initialized, with a fresh and updated set of data. This is done using the re-initialize command. The target server being initialized is the local host. The server or replica from which to pull the data to initialize the local database is specified in the --from option:

ipa-replica-manage re-initialize --from srv1.example.com

28.5.5. Resolving Replication Conflicts
Changes - both for IdM domain data and for certificate and key data - are replicated between IdM servers and replicas (and, in similar paths, between IdM and Active Directory servers). Even though replication operations are run continuously, there is a chance that changes can be made on one IdM server at the same time different changes are made to the same entry on a different IdM server. When replication begins to process those entries, the changes collide - this is a replication conflict. Every single directory modify operation is assigned a server-specific change state number (CSN) to track how those modifications are propagated during replication. The CSN also contains a modify timestamp. When there is a replication conflict, the timestamp is checked and the last change wins. Simply accepting the most recent change is effective for resolving conflicts with attribute values. That method is too blunt for some types of operations, however, which affect the directory tree. Some operations, like modrdn, DN changes, or adding or removing parent and child entries, require administrator review before the conflict is resolved.
Note
Replication conflicts are resolved by editing the entries directly in the LDAP database.
When there is a replication conflict, both entries are added to the directory and are assigned a nsds5ReplConflict attribute. This makes it easy to search for entries with a conflict:

ldapsearch -x -D "cn=directory manager" -w password -b "dc=example,dc=com" "nsds5ReplConflict=*" \* nsds5ReplConflict

28.5.5.1. Solving Naming Conflicts
When two entries are added to the IdM domain with the same DN, both entries are added to the directory, but they are renamed to use the nsuniqueid attribute as a naming attribute. For example:

nsuniqueid=0a950601-435311e0-86a2f5bd-3cd26022+uid=jsmith,cn=users,cn=accounts,dc=example,dc=com

Those entries can be searched for and displayed in the IdM CLI, but they cannot be edited or deleted until the conflict is resolved and the DN is updated. To resolve the conflict:
Rename the entry using a different naming attribute, and keep the old RDN. For example:

ldapmodify -x -D "cn=directory manager" -w secret -h ipaserver.example.com -p 389
dn: nsuniqueid=66446001-1dd211b2+uid=jsmith,cn=users,cn=accounts,dc=example,dc=com
changetype: modrdn
newrdn: cn=TempValue
deleteoldrdn: 0

Remove the old RDN value of the naming attribute and the conflict marker attribute. For example:

ldapmodify -x -D "cn=directory manager" -w secret -h ipaserver.example.com -p 389
dn: cn=TempValue,cn=users,cn=accounts,dc=example,dc=com
changetype: modify
delete: uid
uid: jsmith
-
delete: nsds5ReplConflict
-

Note
The unique identifier attribute nsuniqueid cannot be deleted.
Rename the entry with the intended attribute-value pair. For example:

ldapmodify -x -D "cn=directory manager" -w secret -h ipaserver.example.com -p 389
dn: cn=TempValue,dc=example,dc=com
changetype: modrdn
newrdn: uid=jsmith
deleteoldrdn: 1

Setting the value of the deleteoldrdn attribute to 1 deletes the temporary attribute-value pair cn=TempValue. To keep this attribute, set the value of the deleteoldrdn attribute to 0.
28.5.5.2. Solving Orphan Entry Conflicts
When a delete operation is replicated and the consumer server finds that the entry to be deleted has child entries, the conflict resolution procedure creates a glue entry to avoid having orphaned entries in the directory. In the same way, when an add operation is replicated and the consumer server cannot find the parent entry, the conflict resolution procedure creates a glue entry representing the parent so that the new entry is not an orphan entry. Glue entries are temporary entries that include the object classes glue and extensibleObject. Glue entries can be created in several ways:
If the conflict resolution procedure finds a deleted entry with a matching unique identifier, the glue entry is a resurrection of that entry, with the addition of the glue object class and the nsds5ReplConflict attribute. In such cases, either modify the glue entry to remove the glue object class and the nsds5ReplConflict attribute to keep the entry as a normal entry, or delete the glue entry and its child entries.
The server creates a minimalistic entry with the glue and extensibleObject object classes. In such cases, modify the entry to turn it into a meaningful entry or delete it and all of its child entries.
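For the resurrected-glue-entry case, a minimal ldapmodify sketch in the style of the examples above (the DN here is hypothetical - take the real one from the nsds5ReplConflict search - and whether the glue value can be removed depends on the entry):

ldapmodify -x -D "cn=directory manager" -w secret -h ipaserver.example.com -p 389
dn: uid=jsmith,ou=people,dc=example,dc=com
changetype: modify
delete: objectclass
objectclass: glue
-
delete: nsds5ReplConflict
-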
A.5. Explanation of Settings in the New Template Window
The following table details the settings for the New Template window.
Note
The following tables do not include information on whether a power cycle is required because that information is not applicable to this scenario.
New Template Settings
Field - Description/Action
Name - The name of the template. This is the name by which the template is listed in the Templates tab in the Administration Portal and is accessed via the REST API. This text field has a 40-character limit and must be a unique name within the data center with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. The name can be reused in different data centers in the environment.
Description - A description of the template. This field is recommended but not mandatory.
Comment - A field for adding plain text, human-readable comments regarding the template.
Cluster - The cluster with which the template is associated. This is the same as the original virtual machines by default. You can select any cluster in the data center.
CPU Profile - The CPU profile assigned to the template. CPU profiles define the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are defined on the cluster level based on quality of service entries created for data centers.
Create as a Template Sub-Version - Specifies whether the template is created as a new version of an existing template. Select this check box to access the settings for configuring this option. Root Template: The template under which the sub-template is added. Sub-Version Name: The name of the template. This is the name by which the template is accessed when creating a new virtual machine based on the template. If the virtual machine is stateless, the list of sub-versions will contain a latest option rather than the name of the latest sub-version. This option automatically applies the latest template sub-version to the virtual machine upon reboot. Sub-versions are particularly useful when working with pools of stateless virtual machines.
Disks Allocation - Alias - An alias for the virtual disk used by the template. By default, the alias is set to the same value as that of the source virtual machine. Virtual Size - The total amount of disk space that a virtual machine based on the template can use. This value cannot be edited, and is provided for reference only. This value corresponds with the size, in GB, that was specified when the disk was created or edited. Format - The format of the virtual disk used by the template. The available options are QCOW2 and Raw. By default, the format is set to Raw. Target - The storage domain on which the virtual disk used by the template is stored. By default, the storage domain is set to the same value as that of the source virtual machine. You can select any storage domain in the cluster. Disk Profile - The disk profile to assign to the virtual disk used by the template. Disk profiles are created based on storage profiles defined in the data centers. For more information, see Creating a Disk Profile.
Allow all users to access this Template - Specifies whether a template is public or private. A public template can be accessed by all users, whereas a private template can only be accessed by users with the TemplateAdmin or SuperUser roles.
Copy VM permissions - Copies explicit permissions that have been set on the source virtual machine to the template.
Seal Template (Linux only) - Specifies whether a template is sealed. 'Sealing' is an operation that erases all machine-specific configurations from a filesystem, including SSH keys, UDEV rules, MAC addresses, system ID, and hostname. This setting prevents a virtual machine based on this template from inheriting the configuration of the source virtual machine.
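Template creation with these settings can also be performed through the REST API mentioned above; a rough sketch with curl (the Manager host name, credentials, and VM ID are placeholders, and the accepted payload can vary between RHV versions):

curl -X POST \
     -H "Content-Type: application/xml" \
     -u admin@internal:password \
     --cacert ca.crt \
     https://rhvm.example.com/ovirt-engine/api/templates \
     -d '<template><name>mytemplate</name><description>base image</description><vm id="VM_UUID"/></template>'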
Chapter 2. Configuring the core RDMA subsystem
The rdma service configuration manages the network protocols and communication standards such as InfiniBand, iWARP, and RoCE.
Procedure
Install the rdma-core package:

dnf install rdma-core

Verification
Install the libibverbs-utils and infiniband-diags packages:

dnf install libibverbs-utils infiniband-diags

List the available InfiniBand devices:

ibv_devices
    device                 node GUID
    ------              ----------------
    mlx4_0          0002c903003178f0
    mlx4_1          f4521403007bcba0

Display information about the mlx4_1 device:

ibv_devinfo -d mlx4_1
hca_id: mlx4_1
     transport:                  InfiniBand (0)
     fw_ver:                     2.30.8000
     node_guid:                  f452:1403:007b:cba0
     sys_image_guid:             f452:1403:007b:cba3
     vendor_id:                  0x02c9
     vendor_part_id:             4099
     hw_ver:                     0x0
     board_id:                   MT_1090120019
     phys_port_cnt:              2
          port:   1
               state:            PORT_ACTIVE (4)
               max_mtu:          4096 (5)
               active_mtu:       2048 (4)
               sm_lid:           2
               port_lid:         2
               port_lmc:         0x01
               link_layer:       InfiniBand
          port:   2
               state:            PORT_ACTIVE (4)
               max_mtu:          4096 (5)
               active_mtu:       4096 (5)
               sm_lid:           0
               port_lid:         0
               port_lmc:         0x00
               link_layer:       Ethernet

Display the status of the mlx4_1 device:

ibstat mlx4_1
CA 'mlx4_1'
     CA type: MT4099
     Number of ports: 2
     Firmware version: 2.30.8000
     Hardware version: 0
     Node GUID: 0xf4521403007bcba0
     System image GUID: 0xf4521403007bcba3
     Port 1:
          State: Active
          Physical state: LinkUp
          Rate: 56
          Base lid: 2
          LMC: 1
          SM lid: 2
          Capability mask: 0x0251486a
          Port GUID: 0xf4521403007bcba1
          Link layer: InfiniBand
     Port 2:
          State: Active
          Physical state: LinkUp
          Rate: 40
          Base lid: 0
          LMC: 0
          SM lid: 0
          Capability mask: 0x04010000
          Port GUID: 0xf65214fffe7bcba2
          Link layer: Ethernet

The ibping utility pings an InfiniBand address and runs as a client or server, depending on the parameters. Start server mode (-S) on port number (-P) with the InfiniBand channel adapter (CA) name (-C) on the host:

ibping -S -C mlx4_1 -P 1

Start client mode and send some packets (-c) on port number (-P) by using the InfiniBand channel adapter (CA) name (-C) with the Local Identifier (-L) on the host:

ibping -c 50 -C mlx4_0 -P 1 -L 2
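As an additional check, the rdma utility (provided by the iproute package on recent releases) summarizes link state; a typical line for an active port looks something like the following, although the exact output format varies by version:

rdma link show
link mlx4_1/1 state ACTIVE physical_state LINK_UP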
Appendix B. Using Red Hat Enterprise Linux packages
This section describes how to use software delivered as RPM packages for Red Hat Enterprise Linux. To ensure the RPM packages for this product are available, you must first register your system.
B.1. Overview
A component such as a library or server often has multiple packages associated with it. You do not have to install them all. You can install only the ones you need. The primary package typically has the simplest name, without additional qualifiers. This package provides all the required interfaces for using the component at program run time. Packages with names ending in -devel contain headers for C and C++ libraries. These are required at compile time to build programs that depend on this package. Packages with names ending in -docs contain documentation and example programs for the component. For more information about using RPM packages, see one of the following resources:
Red Hat Enterprise Linux 7 - Installing and managing software
Red Hat Enterprise Linux 8 - Managing software packages
B.2. Searching for packages
To search for packages, use the yum search command. The search results include package names, which you can use as the value for <package> in the other commands listed in this section.

$ yum search <keyword>...

B.3. Installing packages
To install packages, use the yum install command.

$ sudo yum install <package>...

B.4. Querying package information
To list the packages installed in your system, use the rpm -qa command.

$ rpm -qa

To get information about a particular package, use the rpm -qi command.

$ rpm -qi <package>

To list all the files associated with a package, use the rpm -ql command.

$ rpm -ql <package>
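A related query that is often useful is rpm -qf, which reports the installed package that owns a given file; the path below is only an illustration, and the output reflects whatever version is installed on your system.

$ rpm -qf /usr/bin/ssh
openssh-clients-7.4p1-21.el7.x86_64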
C.23. XSiteAdmin
org.infinispan.xsite.XSiteAdminOperations
The XSiteAdmin component exposes tooling for backing up data to remote sites.
Table C.36. Operations
Name - Description - Signature
bringSiteOnline - Brings the given site back online on all the cluster. - String bringSiteOnline(String p0)
amendTakeOffline - Amends the values for 'TakeOffline' functionality on all the nodes in the cluster. - String amendTakeOffline(String p0, int p1, long p2)
getTakeOfflineAfterFailures - Returns the value of the 'afterFailures' for the 'TakeOffline' functionality. - String getTakeOfflineAfterFailures(String p0)
getTakeOfflineMinTimeToWait - Returns the value of the 'minTimeToWait' for the 'TakeOffline' functionality. - String getTakeOfflineMinTimeToWait(String p0)
setTakeOfflineAfterFailures - Amends the values for 'afterFailures' for the 'TakeOffline' functionality on all the nodes in the cluster. - String setTakeOfflineAfterFailures(String p0, int p1)
setTakeOfflineMinTimeToWait - Amends the values for 'minTimeToWait' for the 'TakeOffline' functionality on all the nodes in the cluster. - String setTakeOfflineMinTimeToWait(String p0, long p1)
siteStatus - Check whether the given backup site is offline or not. - String siteStatus(String p0)
status - Returns the status (offline/online) of all the configured backup sites. - String status()
takeSiteOffline - Takes this site offline in all nodes in the cluster. - String takeSiteOffline(String p0)
pushState - Starts the cross-site state transfer to the site name specified. - String pushState(String p0)
cancelPushState - Cancels the cross-site state transfer to the site name specified. - String cancelPushState(String p0)
getSendingSiteName - Returns the site name that is pushing state to this site. - String getSendingSiteName()
cancelReceiveState - Restores the site to the normal state. It is used when the link between the sites is broken during the state transfer. - String cancelReceiveState(String p0)
getPushStateStatus - Returns the status of completed and running cross-site state transfer. - String getPushStateStatus()
clearPushStateStatus - Clears the status of completed cross-site state transfer. - String clearPushStateStatus()
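These operations are exposed through JMX, so they can also be driven from code; the following is a rough sketch using the standard JMX API (the ObjectName pattern and the site name NYC are illustrative - the actual component name depends on the cache manager and cache names in your deployment):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class XSiteStatusCheck {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical ObjectName; query your server for the real XSiteAdmin component name.
        ObjectName xsiteAdmin = new ObjectName(
                "jboss.infinispan:type=Cache,name=\"myCache(dist_sync)\",manager=\"default\",component=XSiteAdmin");
        // siteStatus reports whether the named backup site is offline or online.
        String status = (String) server.invoke(xsiteAdmin, "siteStatus",
                new Object[] { "NYC" }, new String[] { String.class.getName() });
        if ("offline".equalsIgnoreCase(status)) {
            // Bring the site back online across the cluster.
            server.invoke(xsiteAdmin, "bringSiteOnline",
                    new Object[] { "NYC" }, new String[] { String.class.getName() });
        }
    }
}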
4.2. Keepalived Direct Routing Configuration
Direct Routing configuration of Keepalived is similar to the NAT configuration. In the following example, Keepalived is configured to provide load balancing for a group of real servers running HTTP on port 80. To configure Direct Routing, change the lb_kind parameter to DR. Other configuration options are discussed in Section 4.1, "A Basic Keepalived configuration".
The following example shows the keepalived.conf file for the active server in a Keepalived configuration that uses direct routing.

global_defs {
   notification_email {
       [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 60
}

vrrp_instance RH_1 {
    state MASTER
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass passw123
    }
    virtual_ipaddress {
        172.31.0.1
    }
}

virtual_server 172.31.0.1 80 {
    delay_loop 10
    lb_algo rr
    lb_kind DR
    persistence_timeout 9600
    protocol TCP

    real_server 192.168.0.1 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            connect_port 80
        }
    }
    real_server 192.168.0.2 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            connect_port 80
        }
    }
    real_server 192.168.0.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            connect_port 80
        }
    }
}

The following example shows the keepalived.conf file for the backup server in a Keepalived configuration that uses direct routing. Note that the state and priority values differ from the keepalived.conf file on the active server.

global_defs {
   notification_email {
       [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 60
}

vrrp_instance RH_1 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass passw123
    }
    virtual_ipaddress {
        172.31.0.1
    }
}

virtual_server 172.31.0.1 80 {
    delay_loop 10
    lb_algo rr
    lb_kind DR
    persistence_timeout 9600
    protocol TCP

    real_server 192.168.0.1 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            connect_port 80
        }
    }
    real_server 192.168.0.2 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            connect_port 80
        }
    }
    real_server 192.168.0.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            connect_port 80
        }
    }
}
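With lb_kind set to DR, each real server must also accept traffic addressed to the virtual IP without answering ARP requests for it. One common way to do this is sketched below (the interface and VIP match the example above; check your distribution's preferred mechanism, such as arptables, before relying on sysctl settings alone):

# On each real server: suppress ARP replies for the VIP, then bind it to the loopback device
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
ip addr add 172.31.0.1/32 dev lo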
Chapter 1. Release notes for Red Hat build of Quarkus 3.2
Release notes provide information about new features, notable technical changes, features in technology preview, bug fixes, known issues, and related advisories for Red Hat build of Quarkus 3.2. These include the following notable changes:
Jakarta EE 10 integration
Eclipse MicroProfile 6 integration
Hibernate ORM upgraded to version 6.2
Quarkus CLI enhancements for building and pushing container images
Deprecation of Red Hat build of Quarkus support for Java 11
Information about upgrading and backward compatibility is also provided to help you make the transition from an earlier release.
1.1. About Red Hat build of Quarkus
Red Hat build of Quarkus is a Kubernetes-native Java stack optimized for containers and Red Hat OpenShift Container Platform. Quarkus is designed to work with popular Java standards, frameworks, and libraries such as Eclipse MicroProfile, Eclipse Vert.x, Apache Camel, Apache Kafka, Hibernate ORM with Jakarta Persistence, and RESTEasy Reactive (Jakarta REST). As a developer, you can choose the Java frameworks you want for your Java applications, which you can run in Java Virtual Machine (JVM) mode or compile and run in native mode. Quarkus provides a container-first approach to building Java applications. The container-first approach facilitates the containerization and efficient execution of microservices and functions. For this reason, Quarkus applications have a smaller memory footprint and faster startup times. Quarkus also optimizes the application development process with capabilities such as unified configuration, automatic provisioning of unconfigured services, live coding, and continuous testing that gives you instant feedback on your code changes.
1.2. Differences between the Red Hat build of Quarkus community version and Red Hat build of Quarkus
As an application developer, you can access two different versions of Quarkus: the Quarkus community version and the productized version, Red Hat build of Quarkus. The following table describes the differences between the Quarkus community version and Red Hat build of Quarkus. Feature Quarkus community version Red Hat build of Quarkus version Description Access to the latest community features Yes No With the Quarkus community version, you can access the latest feature developments. Red Hat does not release Red Hat build of Quarkus to correspond with every version that the community releases. The cadence of Red Hat build of Quarkus feature releases is approximately every six months. Enterprise support from Red Hat No Yes Red Hat provides enterprise support for Red Hat build of Quarkus only. To report issues about the Quarkus community version, see quarkusio/quarkus - Issues. Access to long-term support No Yes Each feature release of Red Hat build of Quarkus is fully supported for approximately one year, up until the next feature release. When a feature release is superseded by a new version, Red Hat continues to provide a further six months of maintenance support. For more information, see Support and compatibility. Common Vulnerabilities and Exposures (CVE) fixes and bug fixes backported to earlier releases No Yes With Red Hat build of Quarkus, selected CVE fixes and bug fixes are regularly backported to supported streams. In the Quarkus community version, CVEs and bug fixes are typically made available in the latest release only.
Tested and verified with Red Hat OpenShift Container Platform and Red Hat Enterprise Linux (RHEL) No Yes Red Hat build of Quarkus is built, tested, and verified with Red Hat OpenShift Container Platform and RHEL. Red Hat provides both production and development support for supported configurations and tested integrations according to your subscription agreement. For more information, see Red Hat build of Quarkus Supported configurations . Built from source using secure build systems No Yes In Red Hat build of Quarkus, the core platform and all supported extensions are provided by Red Hat using secure software delivery, which means that they are built from source, scanned for security issues, and with verified license usage. Access to support for JDK and Red Hat build of Quarkus Native builder distribution No Yes Red Hat build of Quarkus supports certified OpenJDK builds and certified native executable builders. See admonition below. For more information, see Supported configurations . Important Red Hat build of Quarkus supports the building of native Linux executables by using a Red Hat build of Quarkus Native builder image, which is based on Mandrel and distributed by Red Hat. For more information, see Compiling your Quarkus applications to native executables . Building native executables by using Oracle GraalVM Community Edition (CE), Mandrel community edition, or any other distributions of GraalVM is not supported for Red Hat build of Quarkus. 1.3. New features, enhancements, and technical changes This section provides an overview of the new features, enhancements, and technical changes introduced in Red Hat build of Quarkus 3.2. 1.3.1. Cloud 1.3.1.1. Cached section capabilities introduced in the Qute templating engine In Red Hat build of Quarkus 3.2, the Qute templating engine is enhanced to provide the ability to cache those parts of a template that rarely change, which can help increase efficiency. To use the cached sections feature, use the quarkus-cache extension, where CacheSectionHelper is registered and configured automatically. For more information, see the Cached section part of the "Qute reference" guide. 1.3.1.2. Kubernetes client upgraded to version 6.7.2 The Kubernetes client included with Red Hat build of Quarkus has been upgraded from version 5.12 to 6.7.2. This upgrade offers enhanced features and improved support for developing cloud-native applications. For more information, see the Kubernetes client - Migration from 5.x to 6.x guide. 1.3.2. Core 1.3.2.1. Build-time analytics (user telemetry) support Red Hat build of Quarkus 3.2 introduces a build-time analytics feature. This feature provides usage information about Red Hat build of Quarkus during the application's build time, but not during its run time. The usage analytics report provides anonymous information, such as which operating systems, JAVA versions, build systems, and extensions are used. Usage analytics can help Red Hat better understand how Red Hat build of Quarkus is used and how it can be improved. To opt-in, run Red Hat build of Quarkus in dev mode. The first time you do so, you are asked if you want to opt-in to contributing anonymous build-time data to the Quarkus community. This data will NOT be collected when you run a Red Hat build of Quarkus application in, for example, a production environment. For more information, see the Quarkus usage analytics guide in the Quarkus community. For more information about what data is collected, see the Telemetry data collection notice . 1.3.2.2. 
Infinispan annotation caching support The Red Hat build of Quarkus Infinispan extension now supports the declarative caching API, allowing annotation-based caching control in CDI-managed beans. 1.3.2.3. Management Network interface integration The Management Network interface is a dedicated channel for managing and monitoring your applications, including providing endpoints for various management tasks such as health checks and metrics. 1.3.2.4. Most of the quarkus-cache configurations are now runtime Most of the quarkus-cache extension configuration has been made runtime, allowing you to define the cache configuration at application startup. Certain configuration properties can be changed at runtime through API calls. 1.3.2.5. Multiple SMTP mailer support Some applications require that emails be sent through different SMTP servers. In Red Hat build of Quarkus 3.2, you can now configure several mailers and send emails by using multiple SMTP servers. For more information, see the Multiple mailer configuration section of the "Mailer reference" guide. 1.3.2.6. Revamp of the development UI Red Hat build of Quarkus 3.2 introduces significant changes and enhancements to the development UI, including a graphical interface for streamlined management and monitoring of application components during development. This aids in efficient log navigation, metrics tracking, and endpoint management. 1.3.2.7. Scheduler programmatic API With the Red Hat build of Quarkus 3.2 release, you can schedule jobs programmatically by using the new Scheduler programmatic API. To schedule a job programmatically, you inject io.quarkus.scheduler.Scheduler . You can also remove jobs that are scheduled programmatically. For more information, see the Programmatic scheduling section of the Quarkus "Scheduler reference" guide. 1.3.2.8. Update tool integration The Red Hat build of Quarkus update tool simplifies the upgrade of your applications by automatically updating project dependencies, configurations, and code to match the latest Red Hat build of Quarkus version. It streamlines the migration process, ensuring compatibility and reducing the effort required to stay up-to-date. To use the tool, run the quarkus update command in your project directory, following the interactive prompts to update your application. Important The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. For more information, see the Migrating applications to Red Hat build of Quarkus version 3.2 guide. 1.3.3. Data 1.3.3.1. Hibernate ORM extension now incorporates automated IN clause parameter padding With this 3.2 release, the Hibernate Object-Relational Mapping (ORM) extension has been changed to incorporate automatic IN clause parameter padding as a default setting. This improvement augments the caching efficiency for queries that incorporate IN clauses. To revert to the functionality and deactivate this feature, you can set the property value of quarkus.hibernate-orm.query.in-clause-parameter-padding to false . 1.3.3.2. Hibernate ORM upgraded to version 6.2 Red Hat build of Quarkus now includes and supports Hibernate ORM version 6.2, therefore significantly upgrading the main persistence layer. Hibernate ORM 6.2 brings many improvements and new features compared with version 5.6, but also some breaking changes. 
For more information, see the following resources: Changes that affect compatibility with earlier versions Quarkus Migration Guide 3.0: Hibernate ORM 5 to 6 migration guide Quarkus Using Hibernate ORM and Jakarta Persistence guide
1.3.3.3. Hibernate Search upgraded to version 6.2
In Red Hat build of Quarkus 3.2, Hibernate Search has been upgraded to version 6.2. Hibernate Search offers indexing and full-text search capabilities to your Red Hat build of Quarkus applications. Version 6.2 introduces enhancements, new features, and some notable changes to how Red Hat build of Quarkus applications handle default values for geo-point fields. For more details, see Changes that affect compatibility with earlier versions. To learn more about what is new in Hibernate Search, see the Hibernate Search release notes.
1.3.3.4. Oracle JDBC driver upgraded to version 23.2.0.0
The Oracle JDBC driver has been upgraded to version 23.2. Customers using Oracle DB should note that older versions of the Oracle JDBC driver are not necessarily compatible with the latest Oracle DB release.
1.3.3.5. Reactive datasources now support CredentialsProvider values
Reactive datasources can now modify CredentialsProvider values, enhancing security and configurability. This allows real-time credential updates for authentication, ensuring data access security while maintaining application availability and minimizing operational disruptions.
1.3.4. Native
1.3.4.1. Red Hat build of Quarkus Native builder upgraded to version 23
Besides improved performance, version 23 brings improved generation of debug information, extended support for Java Flight Recorder (JFR) events, and experimental support for JFR event streams. It also introduces experimental support for Java Management Extensions (JMX). Environment variables must now be passed to Mandrel through the new native-image option -E<env-var-key>[=<env-var-value>]. Red Hat build of Quarkus Native builder now defaults to targeting x86-64-v3, the processor-specific application binary interface (psABI) on the AMD64 architecture, and introduces support for a new -march option for compiling to a more compatible native image for older architectures. For more information, see the Work around missing CPU features article in the Red Hat build of Quarkus community "Native reference" guide.
1.3.5. Observability
1.3.5.1. OpenTelemetry SDK autoconfiguration
Red Hat build of Quarkus introduces OpenTelemetry SDK autoconfiguration, simplifying the integration of distributed tracing and observability. It automates OpenTelemetry SDK setup based on Red Hat build of Quarkus extensions, eliminating manual configuration and optimizing trace and metric collection.
1.3.6. Security
1.3.6.1. Custom claim types in test dependencies now supported
In Red Hat build of Quarkus 3.2, the quarkus-test-security-jwt and quarkus-test-security-oidc test dependencies are enhanced to support custom claim types. With this update, you can improve the test coverage of applications that use custom JWT token claims.
1.3.6.2. OpenID Connect (OIDC) Front-channel Logout now supported
The inclusion of OIDC front-channel logout support in Red Hat build of Quarkus complements the already-supported OIDC back-channel logout, enabling the logout of users across multiple services in a distributed environment.
1.3.6.3. OpenID Connect token verification customization
Red Hat build of Quarkus 3.2 adds the option to tailor the OIDC token verification process. This customization permits the preprocessing of legacy token headers, commonly issued by OIDC providers such as Microsoft Azure, prior to signature validation.
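A minimal sketch of what such a customizer can look like (the class name and the header adjustment are illustrative, and the TokenCustomizer contract shown here is an assumption to verify against the Quarkus OIDC API for your exact version):

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.json.Json;
import jakarta.json.JsonObject;
import io.quarkus.oidc.TokenCustomizer;

// Normalize a non-standard JWT header before Quarkus verifies the token signature.
@ApplicationScoped
public class LegacyHeaderCustomizer implements TokenCustomizer {
    @Override
    public JsonObject customizeHeaders(JsonObject headers) {
        // Illustrative change only: force a standard "typ" header value.
        return Json.createObjectBuilder(headers).add("typ", "JWT").build();
    }
}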
1.3.6.4. Security annotations can be used as meta-annotations
You can combine @TestSecurity and @JwtSecurity in a meta-annotation; for example:

@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD })
@TestSecurity(user = "userOidc", roles = "viewer")
@OidcSecurity(introspectionRequired = true,
    introspection = {
        @TokenIntrospection(key = "email", value = "[email protected]")
    }
)
public @interface TestSecurityMetaAnnotation {
}

This combination is useful if the same set of security settings is required in multiple test methods.
1.3.6.5. Simplified OIDC multitenancy resolution for static tenants
In an OIDC multitenancy setup where you set multiple tenant configurations in the application.properties file, you must specify how the tenant identifier gets resolved by registering a TenantResolver interface implementation. Red Hat build of Quarkus 3.2 introduces a convention-based static tenant resolution, where the last path segment of the current HTTP request URL is used as a tenant identifier. For example, if the request URL ends with /keycloak, then a static tenant configuration whose tenant ID is keycloak is selected. By using this option, you can reduce boilerplate code in simple multitenant configurations. For more information, see the Configuring the application section of the Quarkus "Using OpenID Connect (OIDC) multitenancy" guide.
1.3.6.6. SmallRye configuration properties expansion in @RolesAllowed
The Red Hat build of Quarkus @RolesAllowed annotation supports dynamic role names through configuration properties, enhancing access control. This annotation restricts access based on SecurityIdentity (user roles), offering adaptable and configurable access control without code changes.
1.3.7. Standards
1.3.7.1. Eclipse MicroProfile 6 integration
Red Hat build of Quarkus 3.2 introduces integration of Eclipse MicroProfile 6, which enhances microservice development with up-to-date specifications for improved observability, OpenAPI, and JWT.
1.3.7.2. Jakarta EE 10 integration
Red Hat build of Quarkus 3.2 introduces the integration of Jakarta EE 10, which provides developers with access to the current APIs and specifications.
1.3.8. Tooling
1.3.8.1. Apache Maven version 3.9 supported
Red Hat build of Quarkus 3.2 adds support for Maven version 3.9 so that developers can use the latest Maven features. Maven version 3.8.6 or later remains supported.
1.3.8.2. Deploy tool integration
The quarkus deploy command in Quarkus facilitates deploying applications to various cloud platforms, containers, and serverless environments. It generates optimized container images and adapts the application to the target platform, ensuring efficient and reliable deployment. To use the tool, run quarkus deploy followed by the desired deployment target and configuration options, allowing for seamless application deployment without manual configuration.
1.3.8.3. Red Hat build of Quarkus CLI enhancements for building and pushing container images
In Red Hat build of Quarkus 3.2, it is now easier to build and push container images. For more information, see the Container images section of the "Building Red Hat build of Quarkus apps with the quarkus command line interface" guide.
1.3.8.3.1. Building a container image
For example, you no longer need to adjust your pom.xml project configuration by adding or removing container image extensions in order to build a docker image. Instead, you only need to run the following command:

quarkus image build docker

1.3.8.3.2. Pushing a container image
The image push command is similar to the image build command and provides some basic options to push images to a target container registry.

quarkus image push --registry=<image registry> --registry-username=<registry username> --registry-password-stdin

Important
The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments.
For a detailed list of the Red Hat build of Quarkus CLI image commands and how to use them, see the following resources: Building Red Hat build of Quarkus apps with the Red Hat build of Quarkus CLI Blog: Dev productivity - Red Hat build of Quarkus CLI
1.3.9. Web
1.3.9.1. Federation support for SmallRye GraphQL
Quarkus' SmallRye GraphQL now supports Apollo Federation 2 subgraph exposure, enabling federated GraphQL schema creation. This empowers unified GraphQL APIs by aggregating data from independently deployed GraphQL services, simplifying complex application development.
1.3.9.2. Filtering by named queries in REST Data with the Panache extension
Filtering by named queries in Red Hat build of Quarkus' REST Data with Panache extension streamlines data retrieval by applying predefined queries to REST endpoints, enhancing performance and code maintainability for efficient database interaction through REST APIs. When listing entities, you can filter by a named query defined in your entity by the @NamedQuery annotation. An example of the named query:

@Entity
@NamedQuery(name = "Person.containsInName",
    query = "from Person where name like CONCAT('%', CONCAT(:name, '%'))")
public class Person extends PanacheEntity {
    String name;
}

Then, you can set a query parameter namedQuery when listing the entities using the generated resource. Use the name of the desired query, such as calling http://localhost:8080/people?namedQuery=Person.containsInName&name=ter, which would retrieve all persons with names containing "ter".
1.3.9.3. gRPC exception handling
The gRPC exception handling facilitates more robust error management in gRPC services, enhancing application reliability and debugging of gRPC-based applications. This feature enables passing the error message as a trailer. The gRPC client will receive a specific error message from the server in case of issues rather than a generic "server exception."
1.3.9.4. gRPC extension migration to Vert.x gRPC
The migration of the gRPC extension to Vert.x's implementation enhances alignment with the Vert.x ecosystem, offering an efficient way to develop a microservice by using gRPC communication. This implementation allows a single HTTP server configuration that removes duplication from your Red Hat build of Quarkus Security configuration.
1.3.9.5. Programmatic API to create Reactive REST clients
In previous releases, you could only create Reactive REST clients by configuring them in the application.properties file. This approach might have been problematic if you wanted to create dynamic clients. With Red Hat build of Quarkus 3.2, you can now create Reactive REST clients programmatically by using the new Quarkus-specific API, QuarkusRestClientBuilder. The QuarkusRestClientBuilder interface programmatically creates Reactive REST clients with additional configuration options.
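A minimal sketch of the programmatic style (MyRemoteService stands in for any @Path-annotated REST client interface, and the base URI is hypothetical):

import java.net.URI;
import io.quarkus.rest.client.reactive.QuarkusRestClientBuilder;

// Build a client dynamically instead of declaring it in application.properties.
MyRemoteService client = QuarkusRestClientBuilder.newBuilder()
        .baseUri(URI.create("https://stage.example.com/api"))
        .build(MyRemoteService.class);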
For more information, see the Programmatic client creation with QuarkusRestClientBuilder section of the Quarkus "Using the REST Client" guide. 1.3.9.6. RESTEasy Reactive HTTP response headers and status codes can be customized In Red Hat build of Quarkus 3.2, the RESTEasy Reactive client is enhanced to provide more flexibility when streaming responses. With this update, you can customize the HTTP response headers and status codes of streamed responses. For more information, see the Customizing headers and status section of the Quarkus "Writing REST services with RESTEasy Reactive" guide. 1.3.9.7. The @Encoded annotation on REST Client Reactive is now supported Red Hat build of Quarkus 3.2 introduces support for the @Encoded annotation on REST Client Reactive. With this update, the @Encoded annotation impacts the decoding of parameters, such as the PATH and QUERY parameters. For more information, see the following resources: Jakarta EE Platform API - Annotation Type Encoded Quarkus Using the REST Client guide 1.4. Support and compatibility You can find detailed information about the supported configurations and artifacts that are compatible with Red Hat build of Quarkus 3.2 and the high-level support lifecycle policy on the Red Hat Customer Support portal as follows: For a list of supported configurations, OpenJDK versions, and tested integrations, see Red Hat build of Quarkus Supported configurations . For a list of the supported Maven artifacts, extensions, and BOMs for Red Hat build of Quarkus, see Red Hat build of Quarkus Component details . For general availability, full support, and maintenance support dates for all Red Hat products, see Red Hat Application Services Product Update and Support Policy . 1.4.1. Product updates and support lifecycle policy In Red Hat build of Quarkus, a feature release can be either a major or a minor release that introduces new features or support. Red Hat build of Quarkus release version numbers are directly aligned with the Long-Term Support (LTS) versions of the Quarkus community project . The version numbering of a Red Hat build of Quarkus feature release matches the Quarkus community version that it is based on. For more information, see the Long-Term Support (LTS) for Quarkus blog post. Important Red Hat does not release a productized version of Quarkus for every version the community releases. The cadence of the Red Hat build of Quarkus feature releases is about every six months. Red Hat build of Quarkus provides full support for a feature release right up until the release of a subsequent version. When a feature release is superseded by a new version, Red Hat continues to provide a further six months of maintenance support for the release, as outlined in the following support lifecycle chart [Fig. 1]. Figure 1. Feature release cadence and support lifecycle of Red Hat build of Quarkus During the full support phase and maintenance support phase of a release, Red Hat also provides 'service-pack (SP)' updates and 'micro' releases to fix bugs and Common Vulnerabilities and Exposures (CVE). New features in subsequent feature releases of Red Hat build of Quarkus can introduce enhancements, innovations, and changes to dependencies in the underlying technologies or platforms. For a detailed summary of what is new or changed in a successive feature release, see New features, enhancements, and technical changes .
While most of the features of Red Hat build of Quarkus continue to work as expected after you upgrade to the latest release, there might be some specific scenarios where you need to change your existing applications or do some extra configuration to your environment or dependencies. Therefore, before upgrading Red Hat build of Quarkus to the latest release, always review the Changes that affect compatibility with earlier versions and Deprecated components and features sections of the release notes. 1.4.2. Tested and verified environments Red Hat build of Quarkus 3.2 is available on Red Hat OpenShift Container Platform versions 4.15 and 4.12, and on Red Hat Enterprise Linux 8.8. For a list of supported configurations, log in to the Red Hat Customer Portal and see the Knowledgebase solution Red Hat build of Quarkus Supported configurations . 1.4.3. Development support Red Hat provides development support for the following Red Hat build of Quarkus features, plugins, extensions, and dependencies: Features Continuous Testing Dev Services Dev UI Local development mode Remote development mode Plugins Maven Protocol Buffers Plugin 1.4.3.1. Development tools Red Hat provides development support for using Quarkus development tools, including the Quarkus CLI and the Maven and Gradle plugins, to prototype, develop, test, and deploy Red Hat build of Quarkus applications. Red Hat does not support using Quarkus development tools in production environments. For more information, see the Red Hat Knowledgebase article Development Support Scope of Coverage . 1.5. Deprecated components and features The components and features listed in this section are deprecated with Red Hat build of Quarkus 3.2. They are included and supported in this release. However, no enhancements will be made to these components and features, and they might be removed in the future. For a list of the components and features that are deprecated in this release, log in to the Red Hat Customer Portal and view the Red Hat build of Quarkus Component details page. 1.5.1. Deprecation of Red Hat build of Quarkus support for Java 11 In Red Hat build of Quarkus 3.2, support for Java 11 is deprecated and is planned to be removed in a future release. Although Red Hat build of Quarkus 3.2 still supports Java 11 as the minimal version, start using Java 17 instead. 1.6. Technology Previews This section lists features and extensions that are now available as a Technology Preview in Red Hat build of Quarkus 3.2. Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat recommends that you do not use them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about Red Hat Technology Preview features, see Technology Preview Features Scope . 1.6.1. Enhanced component testing Red Hat build of Quarkus 3.2 introduces a JUnit extension named QuarkusComponentTestExtension as a Technology Preview feature. This new extension aims to help ease testing of CDI components and mocking of their dependencies and is available in the quarkus-junit5-component dependency. For more information, see the Testing components section of the Red Hat build of Quarkus "Testing your application" guide.
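As a rough sketch of what such a test might look like, the following example assumes a GreetingService CDI bean that delegates to a NameProvider dependency; both names are illustrative:

import static org.junit.jupiter.api.Assertions.assertEquals;

import jakarta.inject.Inject;

import org.junit.jupiter.api.Test;
import org.mockito.Mockito;

import io.quarkus.test.InjectMock;
import io.quarkus.test.component.QuarkusComponentTest;

@QuarkusComponentTest
public class GreetingServiceTest {

    @Inject
    GreetingService service; // the CDI component under test

    @InjectMock
    NameProvider nameProvider; // dependency replaced by a Mockito mock

    @Test
    public void testGreet() {
        Mockito.when(nameProvider.name()).thenReturn("world");
        assertEquals("Hello world", service.greet());
    }
}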
1.6.2. Hibernate Reactive upgraded to version 2 With this 3.2 release, Red Hat build of Quarkus depends on the Hibernate Reactive 2 extension instead of Hibernate Reactive 1. This change implies several changes in behavior and database schema expectations that are incompatible with earlier versions. Most of the changes are related to Hibernate Reactive 2 depending on Hibernate ORM 6.2 instead of Hibernate ORM 5.6. 1.6.3. quarkus-opentelemetry-exporter-otlp merged into quarkus-opentelemetry The quarkus-opentelemetry-exporter-otlp extension is now part of the quarkus-opentelemetry extension. This unified extension provides OpenTelemetry Protocol (OTLP) exporter functionality without additional setup, streamlining OTLP exporter usage. 1.6.4. Support for storing transaction logs in a database With Red Hat build of Quarkus 3.2, for cloud environments where persistent storage is unavailable, such as when application containers cannot use persistent volumes, you can configure the transaction management to store transaction logs in a database by using a Java Database Connectivity (JDBC) datasource. Important This configuration is only relevant for Jakarta Transactions transactions. While there are several benefits to using a database to store transaction logs, you might notice a reduction in performance compared with using the file system to store the logs. Therefore, for cloud-native applications, choose this option only after careful evaluation. The narayana-jta extension, which manages these transactions, requires stable storage, a unique reusable node identifier, and a steady IP address to work correctly. While the JDBC object store provides stable storage, users must still plan how to meet the other two requirements. To store transaction logs by using a JDBC datasource, configure the quarkus.transaction-manager.object-store.<property> properties, where <property> can be any of the following options: type ( string ): Configure this property to jdbc to enable usage of a Red Hat build of Quarkus JDBC datasource for storing transaction logs. The default value is file-system . datasource ( string ): Specify the name of the datasource for the transaction log storage. If no value is provided for the datasource property, Red Hat build of Quarkus uses the default datasource. create-table ( boolean ): When set to true , the transaction log table gets automatically created if it does not already exist. The default value is false . drop-table ( boolean ): When set to true , the tables are dropped on startup if they already exist. The default value is false . table-prefix ( string ): Specify the prefix for a related table name. The default value is quarkus_ . Also consider the following points: You can have the transaction log table created automatically during the initial setup by setting the create-table property to true . JDBC data sources and ActiveMQ Artemis allow the enlistment and automatic registration of XAResourceRecovery instances. However, be aware that the following items are not included in Red Hat build of Quarkus's support for storing transaction logs in a database: JDBC datasources are part of the quarkus-agroal extension and require that the following application property is set as shown: quarkus.datasource.jdbc.transactions=XA . ActiveMQ Artemis (community client) is part of the quarkus-pooled-jms extension and requires that the following application property is set as shown: quarkus.pooled-jms.transaction=XA . For more information, see CEQ-4878 .
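Putting these options together, the following application.properties sketch stores transaction logs in a dedicated datasource; the TX_LOG name and the connection details are illustrative assumptions:

# Store transaction logs in a JDBC datasource instead of the file system
quarkus.transaction-manager.object-store.type=jdbc
quarkus.transaction-manager.object-store.datasource=TX_LOG
# Create the transaction log table automatically if it does not exist
quarkus.transaction-manager.object-store.create-table=true

# Illustrative definition of the TX_LOG datasource
quarkus.datasource.TX_LOG.db-kind=postgresql
quarkus.datasource.TX_LOG.jdbc.url=jdbc:postgresql://localhost:5432/txlog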
To ensure data protection in case of application crashes or failures, enable the transaction crash recovery with the quarkus.transaction-manager.enable-recovery=true configuration. Note To work around the current known issue of Agroal having a different view on running transaction checks , set the datasource transaction type for the datasource responsible for writing the transaction logs to disabled ; for example: quarkus.datasource.TX_LOG.jdbc.transactions=disabled This example uses TX_LOG as the datasource name. 1.7. Changes that affect compatibility with earlier versions This section describes changes in Red Hat build of Quarkus 3.2 that affect the compatibility of applications built with earlier product versions. Review these breaking changes and take the steps required to ensure that your applications continue functioning after you update them to Red Hat build of Quarkus 3.2. To automate many of these changes, use the quarkus update command to update your projects to the latest Red Hat build of Quarkus version . 1.7.1. Cloud 1.7.1.1. Upgrade to the Kubernetes client that is included with Red Hat build of Quarkus The Kubernetes Client has been upgraded from 5.12 to 6.7.2. For more information, see the Kubernetes Client - Migration from 5.x to 6.x guide. 1.7.1.2. Improved logic for generating TLS-based container ports Red Hat build of Quarkus 3.2 introduces changes in how the Kubernetes extension generates TLS-based container ports. Earlier versions automatically added a container port named https to generated deployment resources. This approach posed problems, especially when SSL/TLS was not configured, rendering the port non-functional. In 3.2 and later, the Kubernetes extension does not add a container port named https by default. The container port is only added if you take either of the following steps: You specify any relevant quarkus.http.ssl.* properties in your application.properties file. You set quarkus.kubernetes.ports.https.tls=true in your application.properties file. 1.7.1.3. Removal of some Kubernetes and OpenShift properties With this 3.2 release, some previously deprecated Kubernetes and OpenShift-related properties have been removed. Replace them with their new counterparts. Table 1.1. Removed properties and their new counterparts Removed property New property quarkus.kubernetes.expose quarkus.kubernetes.ingress.expose quarkus.openshift.expose quarkus.openshift.route.expose quarkus.kubernetes.host quarkus.kubernetes.ingress.host quarkus.openshift.host quarkus.openshift.route.host quarkus.kubernetes.group quarkus.kubernetes.part-of quarkus.openshift.group quarkus.openshift.part-of Additionally, with this release, properties without the quarkus. prefix are ignored. For example, before this release, if you added a kubernetes.name property, it was mapped to quarkus.kubernetes.name . To avoid exceptions like java.lang.ClassCastException when upgrading from 2.16.0.Final to 2.16.1.Final #30850 , this kind of mapping is no longer done. As you continue your work with Kubernetes and OpenShift in the context of Quarkus, use the new properties and include the quarkus. prefix where needed. 1.7.2. Core 1.7.2.1. Upgrade to Jandex 3 With this 3.2 release, Jandex becomes part of the SmallRye project, consolidating all Jandex projects into a single repository: https://github.com/smallrye/jandex/ . Consequently, a new release of the Jandex Maven plugin is delivered alongside the Jandex core. This release also changes the Maven coordinates. Replace the old coordinates with the new ones. Table 1.2.
Old coordinates and their new counterparts Old coordinates New coordinates org.jboss:jandex io.smallrye:jandex org.jboss.jandex:jandex-maven-plugin io.smallrye:jandex-maven-plugin If you use the Maven Enforcer plugin, configure it to ban any dependencies on org.jboss:jandex . An equivalent plugin is available for Gradle users. 1.7.2.2. Migration path for users of Jandex API Jandex 3 contains many interesting features and improvements. These improvements, unfortunately, required a few breaking changes. Here is the recommended migration path: Upgrade to Jandex 2.4.3.Final. This version provides replacements for some methods that have changed in Jandex 3.0.0. For instance, instead of ClassInfo.annotations() , use annotationsMap() , and replace MethodInfo.parameters() with parameterTypes() . Stop using any methods that Jandex has marked as deprecated. Ensure you do not use the return value of Indexer.index() or indexClass() . If you compile your code against Jandex 2.4.3.Final, it can run against both 2.4.3.Final and 3.0.0. However, there are exceptions to this. If you implement the IndexView interface or, in some cases, rely on the UnresolvedTypeVariable class, it is not possible to keep the project compatible with both Jandex 2.4.3 and Jandex 3. Upgrade to Jandex 3.0.0. If you implement the IndexView interface, ensure you implement the methods that have been added. And if you extensively use the Jandex Type hierarchy, verify if you need to handle TypeVariableReference , which is now used to represent recursive type variables. Alongside this release, Jandex introduces a new documentation site . While it's a work in progress, it will become more comprehensive over time. You can also refer to the improved Jandex Javadoc for further information. 1.7.2.3. Removal of io.quarkus.arc.config.ConfigProperties annotation With this 3.2 release, the previously deprecated io.quarkus.arc.config.ConfigProperties annotation has been removed. Instead, use the io.smallrye.config.ConfigMapping annotation to inject multiple related configuration properties. For more information, see the @ConfigMapping section of the "Mapping configuration to objects" guide. 1.7.2.4. Interceptor binding annotations declared on private methods now generate build failures With this 3.2 release, declaring an interceptor binding annotation on a private method is not supported and triggers a build failure; for example, declaring an interceptor binding such as @Transactional on a private method fails the build. In earlier releases, declaring an interceptor binding annotation on a private method triggered only a warning in logs but was otherwise ignored. This support change aims to prevent unintentional usage of interceptor annotations on private methods because they do not have any effect and can cause confusion. To address this change, remove such annotations from private methods. If removing these annotations is not feasible, you can set the configuration property quarkus.arc.fail-on-intercepted-private-method to false . This setting reverts the system to its previous behavior, where only a warning is logged. 1.7.2.5. Removal of the @AlternativePriority annotation This release removes the previously deprecated @AlternativePriority annotation. Replace it with both the @Alternative and @Priority annotations. Example: Removed annotation @AlternativePriority(1) Example: Replacement annotations @Alternative @Priority(1) Use jakarta.annotation.Priority with the @Priority annotation instead of io.quarkus.arc.Priority , which is deprecated and planned for removal in a future release. Both annotations perform identical functions.
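For context, a minimal sketch of a bean declaring the replacement annotations follows; the ExternalService interface and the bean name are illustrative:

import jakarta.annotation.Priority;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Alternative;

@Alternative
@Priority(1)
@ApplicationScoped
public class MockExternalService implements ExternalService {

    @Override
    public String invoke() {
        // this implementation is selected because of @Alternative and @Priority
        return "mock-response";
    }
}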
1.7.2.6. Testing changes: Fixation of the Mockito subclass mockmaker This release updates Mockito to version 5.x. Notably, Mockito switched the default mockmaker to inline in its 5.0.0 release . However, to preserve the mocking behavior Quarkus users are familiar with since Quarkus 1.x, and to prevent memory leaks for extensive test suites , Quarkus 3.0 fixes the mockmaker to subclass instead of inline until the latter is fully supported. If you want to force the inline mockmaker, follow these steps: Add the following exclusion to your pom.xml : <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5-mockito</artifactId> <exclusions> <exclusion> <groupId>org.mockito</groupId> <artifactId>mockito-subclass</artifactId> </exclusion> </exclusions> </dependency> Add mockito-core to your dependencies. Mockito 5.3 removed the mockito-inline artifact: you can remove it from your dependencies. 1.7.2.7. Update to the minimum supported Maven version Quarkus has undergone a refactoring of its Maven plugins to support Maven 3.9. As a result, the minimum Maven version supported by Quarkus has been raised from 3.6.2 to 3.8.6. Ensure your development environment is updated accordingly to benefit from the latest improvements and features. 1.7.2.8. Removal of quarkus-bootstrap-maven-plugin With this 3.2 release, the previously deprecated io.quarkus:quarkus-bootstrap-maven-plugin Maven plugin has been removed. This plugin is for Quarkus extension development only. Therefore, if you are developing custom Quarkus extensions, you must change the artifact ID from io.quarkus:quarkus-bootstrap-maven-plugin to io.quarkus:quarkus-extension-maven-plugin . Note This change relates specifically to custom extension development. For standard application development, you use the quarkus-maven-plugin plugin. 1.7.2.9. Mutiny 2 moves to Java Flow Mutiny is a reactive programming library, the versions 1.x of which were based on the org.reactivestreams interfaces, whereas version 2 is based on java.util.concurrent.Flow . These APIs are identical, but the package name has changed. Mutiny offers adapters to bridge between Mutiny 2 (Flow API) and other libraries with the legacy Reactive Streams API. 1.7.3. Data 1.7.3.1. Removal of Hibernate ORM with Panache methods With this 3.2 release, the following previously deprecated methods from Hibernate ORM with Panache and Hibernate ORM with Panache in Kotlin have been removed: io.quarkus.hibernate.orm.panache.PanacheRepositoryBase#getEntityManager(Class<?> clazz) io.quarkus.hibernate.orm.panache.kotlin.PanacheRepositoryBase#getEntityManager(clazz: KClass<Any>) Instead, use the Panache.getEntityManager(Class<?> clazz) method. 1.7.3.2. Enhancement in Hibernate ORM: Automated IN clause parameter padding With this 3.2 release, the Hibernate Object-Relational Mapping (ORM) extension has been changed to incorporate automatic IN clause parameter padding as a default setting. This improvement augments the caching efficiency for queries that incorporate IN clauses. To revert to the previous functionality and deactivate this feature, you can set the property value of quarkus.hibernate-orm.query.in-clause-parameter-padding to false . 1.7.3.3. New dependency: Hibernate Reactive 2 and Hibernate ORM 6.2 With this 3.2 release, Quarkus depends on the Hibernate Reactive 2 extension instead of Hibernate Reactive 1. This change implies several changes in behavior and database schema expectations that are incompatible with earlier versions.
Most of the changes are related to Hibernate Reactive 2 depending on Hibernate ORM 6.2 instead of Hibernate ORM 5.6. Important The Hibernate Reactive 2 extension is available as a Technology Preview in Red Hat build of Quarkus 3.2. For more information, see the following resources: Migration Guide 3.0: Hibernate Reactive Hibernate Reactive: 2.0 series Migration Guide 3.0: Hibernate ORM 5 to 6 migration 1.7.3.4. Hibernate Search changes Changes in the defaults for projectable and sortable on GeoPoint fields With this 3.2 release, Hibernate Search 6.2 changes how defaults are handled for GeoPoint fields. If your Hibernate Search mapping includes GeoPoint fields that use the default value for the projectable option, and either the default value or Sortable.NO for the sortable option, Elasticsearch schema validation fails on startup because of missing doc values on those fields. To prevent that failure, complete either of the following steps: Revert to the previous defaults by adding projectable = Projectable.NO to the mapping annotation of relevant GeoPoint fields. Recreate your Elasticsearch indexes and reindex your database. The easiest way to do so is to use the MassIndexer with dropAndCreateSchemaOnStart(true) . For more information, see the Data format and schema changes section of the "Hibernate Search 6.2.1.Final: Migration Guide from 6.1". Deprecated or renamed configuration properties With this 3.2 release, the quarkus.hibernate-search-orm.automatic-indexing.synchronization.strategy property is deprecated and is planned for removal in a future version. Use the quarkus.hibernate-search-orm.indexing.plan.synchronization.strategy property instead. Also, the quarkus.hibernate-search-orm.automatic-indexing.enable-dirty-check property is deprecated and is planned for removal in a future version. There is no alternative to replace it. After the removal, it is planned that Search will always trigger reindexing after a transaction modifies an object's field. That is, if a transaction makes the fields "dirty." For more information, see the Configuration changes section of the "Hibernate Search 6.2.1.Final: Migration Guide from 6.1". 1.7.3.5. Hibernate Validator - Validation.buildDefaultValidatorFactory() now returns a ValidatorFactory managed by Quarkus With this 3.2 release, Quarkus doesn't support the manual creation of ValidatorFactory instances. Instead, inject the Quarkus-managed ValidatorFactory through Context and Dependency Injection (CDI); the Validation.buildDefaultValidatorFactory() method also returns this Quarkus-managed instance. The main reason for this change is that a ValidatorFactory must be carefully crafted to work in native executables. Before this release, you could still manually create a ValidatorFactory instance and handle it yourself if you could make it work. This change aims to improve the compatibility with components creating their own ValidatorFactory . For more information, see the following resources: Hibernate Validator extension and CDI section of the "Validation with Hibernate Validator" guide. ValidatorFactory and native executables section of the "Validation with Hibernate Validator" guide. Obtaining a Validator instance section of the "Hibernate Validator 8.0.0.Final - Jakarta Bean Validation Reference Implementation: Reference Guide."
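A minimal sketch of the supported approach is to inject the Quarkus-managed instances through CDI; the BookValidator bean and its method are illustrative names:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.validation.Validator;
import jakarta.validation.ValidatorFactory;

@ApplicationScoped
public class BookValidator {

    @Inject
    ValidatorFactory validatorFactory; // the ValidatorFactory managed by Quarkus

    @Inject
    Validator validator; // the Validator can also be injected directly

    public boolean isValid(Object book) {
        return validator.validate(book).isEmpty();
    }
}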
1.7.3.6. Quartz jobs class name change If you are storing jobs for the Quartz extension in a database by using Java Database Connectivity (JDBC), run the following query to update the job class name in your JOB_DETAILS table: UPDATE JOB_DETAILS SET JOB_CLASS_NAME = 'io.quarkus.quartz.runtime.QuartzSchedulerImpl$InvokerJob' WHERE JOB_CLASS_NAME = 'io.quarkus.quartz.runtime.QuartzScheduler$InvokerJob'; 1.7.3.7. Deprecation of QuarkusTransaction.run and QuarkusTransaction.call methods The QuarkusTransaction.run and QuarkusTransaction.call methods have been deprecated in favor of new, more explicit methods. Update code that relies on these deprecated methods as follows: Before QuarkusTransaction.run(() -> { ... }); QuarkusTransaction.call(() -> { ... }); After QuarkusTransaction.requiringNew().run(() -> { ... }); QuarkusTransaction.requiringNew().call(() -> { ... }); Before QuarkusTransaction.run(QuarkusTransaction.runOptions() .semantic(RunOptions.Semantic.REQUIRED), () -> { ... }); QuarkusTransaction.call(QuarkusTransaction.runOptions() .semantic(RunOptions.Semantic.REQUIRED), () -> { ... }); After QuarkusTransaction.joiningExisting().run(() -> { ... }); QuarkusTransaction.joiningExisting().call(() -> { ... }); Before QuarkusTransaction.run(QuarkusTransaction.runOptions() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }), () -> { ... }); QuarkusTransaction.call(QuarkusTransaction.runOptions() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }), () -> { ... }); After QuarkusTransaction.requiringNew() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }) .run(() -> { ... }); QuarkusTransaction.requiringNew() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }) .call(() -> { ... }); For more information, see the Programmatic Approach section of the "Using transactions in Quarkus" guide. 1.7.3.8. Renamed Narayana transaction manager property With this 3.2 release, the quarkus.transaction-manager.object-store-directory configuration property is renamed to quarkus.transaction-manager.object-store.directory . Update your configuration by replacing the old property name with the new one. 1.7.4. Messaging 1.7.4.1. Removal of vertx-kafka-client dependency from SmallRye Reactive Messaging This release removes the previously deprecated vertx-kafka-client dependency for the smallrye-reactive-messaging-kafka extension. Although it wasn't used for client implementations, vertx-kafka-client provided default Kafka Serialization and Deserialization (SerDes) for io.vertx.core.buffer.Buffer , io.vertx.core.json.JsonObject , and io.vertx.core.json.JsonArray types from the io.vertx.kafka.client.serialization package. If you require this dependency, you can get SerDes for the mentioned types from the io.quarkus.kafka.client.serialization package. 1.7.5. Native 1.7.5.1. Native compilation - Native executables and .so files With this 3.2 release, changes in GraalVM/Mandrel affect the use of extensions reliant on .so files, such as the Java Abstract Window Toolkit (AWT) extension.
When using these extensions, you must add or copy the corresponding .so files to the native container; for example: COPY --chown=1001:root target/*.so /work/ COPY --chown=1001:root target/*-runner /work/application Note In this context, the AWT extension provides headless server-side image processing capabilities, not GUI capabilities. 1.7.5.2. Native Compilation - Work around missing CPU features With this 3.2 release, if you build native executables on recent machines and run them on older machines, you might encounter a failure when starting the application, with an error message stating that the current machine does not support all of the CPU features required by the image. This error message means that the native compilation used more advanced instruction sets that are unsupported by the CPU running the application. To work around that issue, add the following line to the application.properties file: quarkus.native.additional-build-args=-march=compatibility Then, rebuild your native executable. This setting forces the native compilation to use an older instruction set, increasing the chance of compatibility but decreasing optimization. To explicitly define the target architecture, run native-image -march=list to get a list of supported configurations. Then, specify a target architecture; for example: quarkus.native.additional-build-args=-march=x86-64-v4 If you are experiencing this problem with older AMD64 hosts, try -march=x86-64-v2 before using -march=compatibility . The GraalVM documentation for Native Image Build Options states that "[the -march parameter generates] instructions for a specific machine type. [This parameter] defaults to x86-64-v3 on AMD64 and armv8-a on AArch64. Use -march=compatibility for best compatibility, or -march=native for best performance if a native executable is deployed on the same machine or on a machine with the same CPU features. To list all available machine types, use -march=list ." Note The -march parameter is available only in GraalVM 23 and later. 1.7.5.3. Testing changes: Removal of some annotations With this 3.2 release, the previously deprecated @io.quarkus.test.junit.NativeImageTest and @io.quarkus.test.junit.DisabledOnNativeImageTest annotations have been removed. Replace them with their new counterparts. Table 1.3. Removed annotations and their new counterparts Removed annotations New annotations @io.quarkus.test.junit.NativeImageTest @io.quarkus.test.junit.QuarkusIntegrationTest @io.quarkus.test.junit.DisabledOnNativeImageTest @io.quarkus.test.junit.DisabledOnIntegrationTest The replacement annotations are functionally equivalent to the removed ones. 1.7.6. Observability 1.7.6.1. Deprecated OpenTracing driver is replaced by OpenTelemetry With this 3.2 release, support for the OpenTracing driver has been deprecated. Removal of the OpenTracing driver is planned for a future Quarkus release. With this 3.2 release, the SmallRye GraphQL extension has replaced its OpenTracing integration with OpenTelemetry. As a result, when using OpenTracing, the extension no longer generates spans for GraphQL operations. Also, with this release, the quarkus.smallrye-graphql.tracing.enabled configuration property is obsolete and has been removed. Instead, the SmallRye GraphQL extension automatically produces spans when the OpenTelemetry extension is present. Update your Quarkus applications to use OpenTelemetry so that they remain compatible with future Quarkus releases.
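If your build still declares the OpenTracing integration, a typical first migration step is to swap the dependency for the OpenTelemetry extension in your pom.xml; treating quarkus-smallrye-opentracing as the artifact to remove is an assumption about your current setup:

<!-- Remove the deprecated OpenTracing integration, if present:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-opentracing</artifactId>
</dependency>
-->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-opentelemetry</artifactId>
</dependency>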
1.7.6.2. Default metrics format in Micrometer now aligned with Prometheus With this 3.2 release, the Micrometer extension exports metrics in the application/openmetrics-text format by default, in line with the Prometheus standard. This change helps make your data easier to read and interpret. To get metrics in the earlier format, you can change the Accept request header to text/plain. For example, with the curl command: curl -H "Accept: text/plain" localhost:8080/q/metrics/ 1.7.6.3. Changes in the OpenTelemetry extension and removal of some sampler-related properties With this 3.2 release, the OpenTelemetry (OTel) extension has significant improvements. Before this release, the OpenTelemetry SDK (OTel SDK) was created at build time and had limited configuration options; most notably, it could not be disabled at run time. Now, it offers enhanced flexibility. It can be disabled at run time by setting quarkus.otel.sdk.disabled=true . After some preparatory steps at build time, the OTel SDK is configured at run time using the OTel auto-configuration feature. This feature supports some of the properties defined in the Java OpenTelemetry SDK. For more information, see the OpenTelemetry SDK Autoconfigure reference. The OpenTelemetry extension is compatible with earlier versions. Most properties have been deprecated but still function alongside the new ones until they are removed in a future release. You can replace the deprecated properties with new ones. Table 1.4. Deprecated properties and their new counterparts Deprecated properties New properties quarkus.opentelemetry.enabled quarkus.otel.enabled quarkus.opentelemetry.tracer.enabled quarkus.otel.traces.enabled quarkus.opentelemetry.propagators quarkus.otel.propagators quarkus.opentelemetry.tracer.suppress-non-application-uris quarkus.otel.traces.suppress-non-application-uris quarkus.opentelemetry.tracer.include-static-resources quarkus.otel.traces.include-static-resources quarkus.opentelemetry.tracer.sampler quarkus.otel.traces.sampler quarkus.opentelemetry.tracer.sampler.ratio quarkus.otel.traces.sampler.arg quarkus.opentelemetry.tracer.exporter.otlp.enabled quarkus.otel.exporter.otlp.enabled quarkus.opentelemetry.tracer.exporter.otlp.headers quarkus.otel.exporter.otlp.traces.headers quarkus.opentelemetry.tracer.exporter.otlp.endpoint quarkus.otel.exporter.otlp.traces.legacy-endpoint With this 3.2 release, some of the old quarkus.opentelemetry.tracer.sampler -related property values have been removed. If the sampler is parent based, there is no need to set the now-dropped quarkus.opentelemetry.tracer.sampler.parent-based property. Replace the following quarkus.opentelemetry.tracer.sampler values with new ones: Table 1.5. Removed sampler property values and their new counterparts Old value New value New value if parent-based on always_on parentbased_always_on off always_off parentbased_always_off ratio traceidratio parentbased_traceidratio Many new properties are now available. For more information, see the Quarkus Using OpenTelemetry guide. Quarkus allowed the Context and Dependency Injection (CDI) configuration of many classes: IdGenerator , Resource attributes, Sampler , and SpanProcessor . This is a feature not available in standard OTel, but it's still provided here for convenience. However, the CDI creation of the SpanProcessor through the LateBoundBatchSpanProcessor is now deprecated. If there's a need to override or customize it, feedback is appreciated.
The processor will continue to be used for supporting earlier versions, but soon the standard exporters bundled with the OTel SDK will be used. This means the default exporter uses the following configuration: quarkus.otel.traces.exporter=cdi As a preview, the stock OTLP exporter is now available by setting: quarkus.otel.traces.exporter=otlp Additional configurations of the OTel SDK are now available, using the standard Service Provider Interface (SPI) hooks for Sampler and SpanExporter . The remaining SPIs are also accessible, although compatibility validation through testing is still required. For more information, see the updated OpenTelemetry Guide . OpenTelemetry upgrades OpenTelemetry (OTel) 1.23.1 introduced breaking changes, including the following items: HTTP span names are now "{http.method} {http.route}" instead of just "{http.route}" . All methods in all Getter classes in instrumentation-api-semconv have been renamed to use the get() naming scheme. Semantic convention changes: Table 1.6. Deprecated attributes and their new counterparts Deprecated attributes New attributes messaging.destination_kind messaging.destination.kind messaging.destination messaging.destination.name messaging.consumer_id messaging.consumer.id messaging.kafka.consumer_group messaging.kafka.consumer.group JDBC tracing activation Before this release, to activate Java Database Connectivity (JDBC) tracing, you had to prefix the database URL with otel and declare the OpenTelemetry JDBC driver, io.opentelemetry.instrumentation.jdbc.OpenTelemetryDriver , as the datasource driver. With this 3.2 release, you can use a much simpler configuration: quarkus.datasource.jdbc.telemetry=true With this configuration, you do not need to change the database URL or declare a different driver. 1.7.7. Security 1.7.7.1. Removal of CORS filter default support for using a wildcard as an origin The default behavior of the cross-origin resource sharing (CORS) filter has significantly changed. In earlier releases, when the CORS filter was enabled, it supported all origins by default. With this 3.2 release, support for all origins is no longer enabled by default. Now, if you want to permit all origins, you must explicitly configure it to do so. After a thorough evaluation, if you determine that all origins require support, configure the system in the following manner: quarkus.http.cors=true quarkus.http.cors.origins=* Same-origin requests receive support without needing the quarkus.http.cors.origins configuration. Therefore, adjusting the quarkus.http.cors.origins becomes essential only when you allow trusted third-party origin requests. In such situations, enabling all origins might pose unnecessary risks. Warning Use this setting with caution to maintain optimal system security. 1.7.7.2. OpenAPI CORS support change With this 3.2 release, OpenAPI has changed its cross-origin resource sharing (CORS) settings and no longer enables wildcard ( * ) origin support by default. This change helps to prevent potential leakage of OpenAPI documents, enhancing the overall security of your applications. Although you can enable wildcard origin support in dev mode , it is crucial to consider the potential security implications. Avoid enabling all origins in a production environment because it exposes your applications to security threats. Ensure your CORS settings align with your production environment's recommended security best practices. 1.7.7.3. Encryption of OIDC session cookie by default With this 3.2 release, the OpenID Connect (OIDC) session cookie, created after the completion of an OIDC Authorization Code Flow, is encrypted by default. In most scenarios, you are unlikely to notice this change.
However, if the mTLS or private_key_jwt authentication methods - where the OIDC client private key signs a JSON Web Token (JWT) - are used between Quarkus and the OIDC Provider, an in-memory encryption key gets generated. This key generation can result in some pods failing to decrypt the session cookie, especially in applications dealing with many requests. This situation can arise when a pod attempting to decrypt the cookie isn't the one that encrypted it. If such issues occur, register an encryption secret of 32 characters; for example: quarkus.oidc.token-state-manager.encryption-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU An encrypted session cookie can exceed 4096 bytes, which can cause some browsers to ignore it. If this occurs, try one or more of the following steps: Set quarkus.oidc.token-state-manager.split-tokens=true to store ID, access, and refresh tokens in separate cookies. Set quarkus.oidc.token-state-manager.strategy=id-refresh-tokens if there's no need to use the access token as a source of roles, to request UserInfo, or to propagate it to downstream services. Register a custom quarkus.oidc.TokenStateManager Context and Dependency Injection (CDI) bean with the alternative priority set to 1 . If application users access the Quarkus application from within a trusted network, disable the session cookie encryption by applying the following configuration: quarkus.oidc.token-state-manager.encryption-required=false 1.7.7.4. Default SameSite attribute set to Lax for OIDC session cookie With this 3.2 release, for the Quarkus OpenID Connect (OIDC) extension, the session cookie SameSite attribute is set to Lax by default. In some earlier releases of Quarkus, the OIDC session cookie SameSite attribute was set to Strict by default. This setting introduced unpredictability in how different browsers handled the session cookie. 1.7.7.5. The OIDC ID token audience claim is verified by default With this 3.2 release, the OpenID Connect (OIDC) ID token aud (audience) claim is verified by default. This claim must equal the value of the configured quarkus.oidc.client-id property, as required by the OIDC specification. To override the expected ID token audience value, set the quarkus.oidc.token.audience configuration property. If you deal with a noncompliant OIDC provider that does not set an ID token aud claim, you can set quarkus.oidc.token.audience to any . Warning Setting quarkus.oidc.token.audience to any reduces the security of your application. 1.7.7.6. Removal of default password for the JWT key and keystore Before this release, Quarkus used password as the default password for the JSON Web Token (JWT) key and keystore. With this 3.2 release, this default value has been removed. If you are still using the default password, set a new value to replace password for the following properties in the application.properties file: quarkus.oidc-client.credentials.jwt.key-store-password=password quarkus.oidc-client.credentials.jwt.key-password=password 1.7.8. Web 1.7.8.1. Changes to RESTEasy Reactive multipart With this 3.2 release, the following changes impact multipart support in RESTEasy Reactive: Before this release, you could catch all file uploads regardless of the parameter name using the syntax: @RestForm List<FileUpload> all , but this was ambiguous and not intuitive. Now, this form only fetches parameters named all , just like for every other form element of other types, and you must use the following form to catch every parameter regardless of its name: @RestForm(FileUpload.ALL) List<FileUpload> all . Multipart form parameter support has been added to @BeanParam . The @MultipartForm annotation is now deprecated.
Use @BeanParam instead of @MultipartForm . The @BeanParam annotation is now optional and implicit for any non-annotated method parameter with fields annotated with any @Rest* or @*Param annotations. Multipart elements are no longer limited to being encapsulated inside @MultipartForm -annotated classes: they can be used as method endpoint parameters and endpoint class fields. Multipart elements now default to the @PartType(MediaType.TEXT_PLAIN) MIME type unless they are of type FileUpload , Path , File , byte[] , or InputStream . Multipart elements of the MediaType.TEXT_PLAIN MIME type are now deserialized using the regular ParamConverter infrastructure. Before this release, deserialization used MessageBodyReader . Multipart elements of the FileUpload , Path , File , byte[] , or InputStream types are special-cased and deserialized by the RESTEasy Reactive extension, not by the MessageBodyReader or ParamConverter classes. Multipart elements of other explicitly set MIME types still use the appropriate MessageBodyReader infrastructure. Multipart elements can now be wrapped in List to obtain all values of the part with the same name. Any client call that includes the @RestForm or @FormParam parameters defaults to the MediaType.APPLICATION_FORM_URLENCODED content type unless they are of the File , Path , Buffer , Multi<Byte> , or byte[] types, in which case it defaults to the MediaType.MULTIPART_FORM_DATA content type. Class org.jboss.resteasy.reactive.server.core.multipart.MultipartFormDataOutput has been moved to org.jboss.resteasy.reactive.server.multipart.MultipartFormDataOutput . Class org.jboss.resteasy.reactive.server.core.multipart.PartItem has been moved to org.jboss.resteasy.reactive.server.multipart.PartItem . Class org.jboss.resteasy.reactive.server.core.multipart.FormData.FormValue has been moved to org.jboss.resteasy.reactive.server.multipart.FormValue . The REST Client no longer uses the server-specific MessageBodyReader and MessageBodyWriter classes associated with Jackson. Before this release, the REST Client unintentionally used those classes. The result is that applications that use both quarkus-resteasy-reactive-jackson and quarkus-rest-client-reactive extensions must now include the quarkus-rest-client-reactive-jackson extension. 1.7.8.2. Enhanced JAXB extension control The JAXB extension detects classes that use JAXB annotations and registers them into the default JAXBContext instance. Before this release, any issues or conflicts between the classes and JAXB triggered a JAXB exception at runtime, providing a detailed description to help troubleshoot the problem. However, you could not preemptively tackle these conflicts during the build stage. This release adds a feature that can validate the JAXBContext instance at build time so that you can detect and fix JAXB errors early in the development cycle. For example, as shown in the following code block, binding both classes to the default JAXBContext instance would inevitably lead to a JAXB exception. This is because the classes share the identical name, Model , despite existing in different packages. This duplicate naming creates a conflict, leading to the exception.
package org.acme.one; import jakarta.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Model { private String name1; public String getName1() { return name1; } public void setName1(String name1) { this.name1 = name1; } } package org.acme.two; import jakarta.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Model { private String name2; public String getName2() { return name2; } public void setName2(String name2) { this.name2 = name2; } } To activate this feature, add the following property: quarkus.jaxb.validate-jaxb-context=true Additionally, this release adds the quarkus.jaxb.exclude-classes property. With this property, you can specify classes to exclude from binding to the JAXBContext . You can provide a comma-separated list of fully qualified class names or a list of packages. For example, to resolve the conflict in the preceding example, you can exclude one or both of the classes: quarkus.jaxb.exclude-classes=org.acme.one.Model,org.acme.two.Model Or you can exclude all the classes under a package: quarkus.jaxb.exclude-classes=org.acme.* 1.8. Bug fixes Red Hat build of Quarkus 3.2 enhances stability and resolves critical bugs, ensuring optimal performance and security. To get the latest fixes for Red Hat build of Quarkus, ensure you are using the latest available version, which is 3.2.12.SP1-redhat-00003. 1.8.1. Security fixes resolved in Red Hat build of Quarkus 3.2.12 SP1 CVE-2024-7254 com.google.protobuf/protobuf : StackOverflow vulnerability in Protocol Buffers CVE-2024-40094 com.graphql-java.graphql-java : Allocation of Resources Without Limits or Throttling in GraphQL Java CVE-2021-44549 org.eclipse.angus/angus-mail : Enabling Secure Server Identity Checks for Safer SMTPS Communication CVE-2024-47561 org.apache.avro/avro : Schema parsing may trigger Remote Code Execution (RCE) 1.8.2. Security fixes resolved in Red Hat build of Quarkus 3.2.12 CVE-2024-2700 io.quarkus/quarkus-core : Leak of local configuration properties into Quarkus applications CVE-2024-29025 io.netty/netty-codec-http : Allocation of resources without limits or throttling 1.8.3. Security fixes resolved in Red Hat build of Quarkus 3.2.11 CVE-2024-1597 org.postgresql/postgresql : pgjdbc : PostgreSQL JDBC Driver vulnerability allows SQL injection with PreferQueryMode=SIMPLE CVE-2024-1979 io.quarkus/quarkus-kubernetes-deployment : Potential information leakage via annotations CVE-2024-1726 io.quarkus.resteasy.reactive/resteasy-reactive : Delayed security checks on certain inherited endpoints in RESTEasy Reactive could lead to denial of service CVE-2024-25710 org.apache.commons/commons-compress : Infinite loop denial of service with corrupted DUMP file CVE-2024-26308 org.apache.commons/commons-compress : OutOfMemoryError caused by unpacking a malformed Pack200 file CVE-2024-1300 io.vertx:vertx-core : Memory leak in TCP servers with TLS and SNI enabled CVE-2024-1023 io.vertx:vertx-core : Memory leak from Netty FastThreadLocal data structures usage in Vert.x 1.8.4. Security fixes resolved in Red Hat build of Quarkus 3.2.10 CVE-2023-22102 mysql/mysql-connector-java : Connector/J unspecified vulnerability (CPU October 2023) CVE-2023-48795 org.apache.sshd/sshd-core : ssh : Prefix truncation attack on Binary Packet Protocol (BPP) CVE-2023-4043 org.eclipse.parsson/parsson : Denial of Service due to large number parsing 1.8.5. 
Security fixes resolved in Red Hat build of Quarkus 3.2.9.SP1 CVE-2023-5675 io.quarkus.resteasy.reactive/resteasy-reactive : quarkus : Authorization flaw in Quarkus RestEasy Reactive and Classic when "quarkus.security.jaxrs.deny-unannotated-endpoints" or "quarkus.security.jaxrs.default-roles-allowed" properties are used CVE-2023-6267 io.quarkus/quarkus-resteasy : quarkus : JSON payload getting processed prior to security checks when REST resources are used with annotations 1.8.6. Security fixes resolved in Red Hat build of Quarkus 3.2.9 CVE-2023-43642 : snappy-java : Missing upper bound check on chunk length in snappy-java can lead to Denial of Service (DoS) impact CVE-2023-39410 : avro : apache-avro : Apache Avro Java SDK: Memory when deserializing untrusted data in Avro Java SDK 1.8.7. Security fixes resolved in Red Hat build of Quarkus 3.2.6 CVE-2023-33202 : bcpkix : bc-java : Out of memory while parsing ASN.1 crafted data in org.bouncycastle.openssl.PEMParser class CVE-2023-4853 : quarkus-http : quarkus: HTTP security policy bypass CVE-2023-44487 : netty-codec-http2 : HTTP/2: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) 1.8.8. Other enhancements and bug fixes QUARKUS-4279 Manage Mime4j library core, dom, and storage jars in the BOM QUARKUS-3964 Fix tracing protocol configuration to only allow GRPC QUARKUS-3963 Handle generic types for ParamConverter in REST Client QUARKUS-3962 Never register server-specific providers in REST Client (fixed) QUARKUS-3960 Register methods of RESTeasy Reactive parameter containers for reflection QUARKUS-3959 Use an empty string in an SSE event when there is no data QUARKUS-3958 Update the Infinispan client intelligence section documentation QUARKUS-3956 Update the keycloak-admin-client extension to recognize the quarkus.tls.trust-all property QUARKUS-3955 Always run a JPA password action QUARKUS-3954 Reactive REST Client: check for ClientRequestFilter when skipping @Provider auto-discovery QUARKUS-3950 Fix various minor issues in the quarkus update command QUARKUS-3949 Fix Panache bytecode enhancement for @Embeddable records QUARKUS-3948 Save pathParamValues encoded and perform decoding when requested QUARKUS-3947 Fix != expression in @PreAuthorize check QUARKUS-3945 Support using commas to add extensions with CLI QUARKUS-3943 Fixes stork path param resolution in REST Client QUARKUS-3941 Do not expand config properties for Gradle Workers QUARKUS-3940 Verify duplicated context handling when caching a Uni QUARKUS-3939 Always set ssl and alpn for non-plain-text with Vert.x gRPC channel QUARKUS-3851 Upgrade to Hibernate ORM 6.2.18.Final QUARKUS-3841 Hibernate issue with @OneToMany mappedBy association(HHH-16593) QUARKUS-3791 Jandex indexing throws an NPE with the latest Oracle driver QUARKUS-3779 [GSS](3.2.z) RESTEASY-3380 - Source references exposed in RESTEasy error response QUARKUS-3757 Unfiltered traces from the management interface QUARKUS-3420 Duplicate artifacts brought in by extraneous io.quarkus in ER4 QUARKUS-3273 Duplicated artifacts in Ghost QUARKUS-3598 Version alignment with Red Hat Build of Apache Camel for Red Hat build of Quarkus 3.2.0 QUARKUS-3586 Automate step of creating depstobuild.txt QUARKUS-3476 quarkus-bom-deps-to-build.txt not delivered with 2.13.8.SP3.CR1 and 3.2.9.CR1 QUARKUS-3761 Hibernate Reactive doesn't work with Red Hat build of Quarkus 3.2.9.CR1 but works with upstream release QUARKUS-3759 Missing Sources for mvnpm/importmap in 3.2.9 QUARKUS-3764 Red Hat build of Quarkus 3.2.9.CR1 
contains 2 Red Hat build of Quarkus BOMs, one of them has 555 missing dependencies in Maven repo zip QUARKUS-3439 Red Hat build of Quarkus create app with gradle causing unresolved netty dependencies QUARKUS-1481 Platform source zips contain only quarkus source QUARKUS-3758 Duplicate Pom for io.github.crac:org-crac and Jboss Threads in 3.2.9 QUARKUS-3377 support quarkus-keycloak-authorization again in 3.2.z QUARKUS-3597 Productize Red Hat build of Quarkus JOSDK extensions 6.3.3 QUARKUS-3424 Increase in number of duplicate artifacts with no direct dependency lineage to platform boms/supported extensions QUARKUS-3582 Red Hat build of Quarkus 3.2: move start-stop metrics and tech empower jobs to JDK17 in performance labs QUARKUS-3570 Adding the JUL URL to the Logging guide update QUARKUS-3571 Make hibernate reactive status clear in docs QUARKUS-3546 Fix handling of HTTP/2 H2 empty frames in RestEasy Reactive QUARKUS-3564 Remove update guide from docs yml QUARKUS-3565 Enhancements to Configuration section of the Logging guide QUARKUS-3566 Applying the QE feedback for the Logging guide QUARKUS-3567 Doc link fixes & enhancements to Bearer token authentication tutorial QUARKUS-3572 Fix doc link Asciidoc change link to xref where applicable QUARKUS-3573 Config doc - Avoid processing methods if not @ConfigMapping QUARKUS-3662 Tiny grammar tweaks for the Authorization of web endpoints guide QUARKUS-3563 Fix title of upx.adoc QUARKUS-3569 Remove 'Security vulnerability detection' topic from downstream doc list QUARKUS-3568 Additional review and application of QE feedback to the Datasource guide QUARKUS-3339 Vert.x SQL client hangs when it inserts null or empty string into Oracle DB QUARKUS-3367 HTTP/1.1 upgrade to H2C cannot process fully request entity with a size greater than the initial window size QUARKUS-3669 Bump Keycloak version to 22.0.6 QUARKUS-3670 Vert.x: fix NPE in ForwardedProxyHandler QUARKUS-3668 Fix dead link in infinispan-client-reference.adoc QUARKUS-3671 Fix quarkus update regression on extensions QUARKUS-3672 Take @ConstrainedTo into account for interceptors QUARKUS-3680 Let custom OIDC token propagation filters customize the exchange status QUARKUS-3679 Update Vert.x version to 4.4.6 QUARKUS-3663 Tiny Vale tweaks for Datasource and Logging guide QUARKUS-3664 Duplicate Authorization Bearer Header Fix QUARKUS-3666 Fixing Db2 Driver typo QUARKUS-3675 Make the ZSTD Substitutions more robust QUARKUS-3677 Fix deployer detection in quarkus-maven-plugin QUARKUS-3676 Fix handling of HTTP/2 H2 empty frames in RestEasy Reactive QUARKUS-3665 More reliable test setup in integration-tests/hibernate-orm-tenancy/datasource QUARKUS-3674 QuarkusSecurityTestExtension after each call should not be made for tests without @TestSecurity QUARKUS-3673 Dev UI: Fix height in Rest Client QUARKUS-3667 Fix assertions in Hibernate ORM 5.6 compatibility tests QUARKUS-3678 ArC: fix PreDestroy callback support for decorators QUARKUS-3691 Prepare for ORM update QUARKUS-3689 Fix issue in Java migration in dev-mode 1.9. Known issues Review the following known issues for insights into Red Hat build of Quarkus 3.2 limitations and workarounds. 1.9.1. Using CDI interceptors to resolve multitenant OIDC configuration fails due to security fix in version 3.2.9.SP1 The security fix implemented in Red Hat build of Quarkus version 3.2.9.SP1 to address CVE-2023-6267 introduced a breaking change. 
This breaking change is relevant only when using multiple OIDC providers with RestEasy Classic and occurs if you use Context and Dependency Injection (CDI) interceptors to programmatically resolve OIDC tenant configuration identifiers. Before this fix, CDI interceptors ran before authentication checks. After introducing the fix, authentication occurs before CDI interceptors are triggered. Therefore, using CDI interceptors to resolve multiple OIDC provider configuration identifiers no longer works. RestEasy Reactive applications are not affected. Workaround: Use the quarkus.oidc.TenantResolver interface to resolve the current OIDC configuration tenant ID. For more information, see the Resolving tenant identifiers with annotations section of the Quarkus "Using OpenID Connect (OIDC) multitenancy" guide. 1.9.2. Podman 4.6 and later does not work with SELinux and Testcontainers library The Ryuk container, which is essential to the testcontainers library used during dev mode, cannot be started when using Podman 4.6 or later. Specifically, these issues manifest when using SELinux and prevent the Ryuk container from starting successfully. Here are the specific issues and corresponding workarounds: Connection to Docker daemon socket fails : By default, an error occurs stating, "Permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock." Workaround: Update the containers.conf file to include label=false . SELinux and containers configuration mismatch : If SELinux is enabled on the operating system but disabled in the containers.conf file, an InternalServerErrorException occurs. Workaround: Run sudo setenforce 0 to disable SELinux . Unresolved Open Container Initiative (OCI) permission error : An error message appears stating, "OCI permission denied." This issue has no workaround. Given these issues, consider the following options: Refrain from using Ryuk containers until this known issue is resolved. Use an earlier version of Podman, such as version 4.5.x, that is compatible with Red Hat Enterprise Linux 8 and later. For more information, see the following resources: QUARKUS-3451 - Podman 4.6 and newer does not work properly with SELinux and test-containers testcontainers/ryuk GitHub issue #20206: Ryuk container cannot be started on podman 4.6.2 1.9.3. Containers spawned by Testcontainers occasionally fail Containers spawned by the testcontainers library for dev mode continuous testing occasionally fail with a Broken pipe error. Workaround: To work around the issue, restart dev mode. This issue does not affect production mode. For more information, see QUARKUS-3448 - Broken pipe when creating containers with Podman . 1.9.4. HTTP/1.1 Upgrades to H2C fail under specific flow control conditions When upgrading an HTTP/1.1 connection to H2C, the server does not account for inbound HTTP messages in the H2 flow controller. This results in unprocessed messages when the window size reaches zero. A fix for this issue is planned for an upcoming release. Workaround: No workaround is available at this time. Until this issue has been fixed, refrain from upgrading HTTP/1.1 connections to H2C if the message payload size exceeds the flow control window size.
For more information, see the following resources: Quarkus GitHub issue #35180 - Server fails receiving large data over http/2 Vert.x Pull Request #4802 - HTTP/1.1 upgrade to H2C cannot process fully request entity with a size greater than the initial window size CEQ-7160 - CXF - Netty Http2Exception: Flow control window exceeded for stream: 0 when sending a ~64 KiB attachment QUARKUS-3367 - HTTP/1.1 upgrade to H2C cannot process fully request entity with a size greater than the initial window size. 1.9.5. Reactive Oracle datasource fails with specific Oracle JDBC driver versions The Reactive Oracle datasource relies on the Reactive Extensions feature of Oracle's Java Database Connectivity (JDBC) driver. A bug exists in Oracle JDBC driver versions 23.2 and 21.11 that causes the application to receive no response under the following conditions: You use Reactive extensions to run an UPDATE or INSERT query that produces an error such as a constraint violation. You enable generated keys retrieval. Note Oracle might not support using the Oracle JDBC driver v21.10.0.0 with an Oracle 23 database. Workarounds: Change the Oracle JDBC driver version in your pom.xml file or equivalent configuration to com.oracle.database.jdbc:ojdbc11:21.10.0.0. Avoid running queries that require generated key retrieval. For example, load sequence values before running INSERT queries. For more information, see QUARKUS-3339 - Vertx SQL client hangs when it inserts a null or empty string into Oracle DB. 1.9.6. Community artifacts are used for native Vert.x dependencies on specific platforms Applications that use the Vert.x extension on newly supported platforms, such as Linux on aarch64 and Windows on x86-64, inadvertently download Quarkus community versions of com.aayushatharva.brotli4j artifacts rather than the ones built and provided by Red Hat. This issue has no functional impact. A fix for this issue is planned for an upcoming release. Workaround: No workaround is available at this time. For more information, see QUARKUS-3314 - com.aayushatharva.brotli4j:native-linux-aarch64 and native-windows-x86_64 are not productized. 1.9.7. Red Hat build of Quarkus Kafka Streams is not supported on Windows due to a missing library Kafka Streams fails to load RocksDB on Windows operating systems because the librocksdbjni-win64.dll native library is not included in Red Hat build of Quarkus. Workaround: There is no workaround for running Quarkus Kafka Streams on Windows. Use non-Windows operating systems until a fix is available or it is confirmed that this extension will not be supported on Windows. Note Kafka Streams is a Technology Preview feature. For more information, see QUARKUS-3434 - Ghost: Quarkus Kafka Streams not supported on Windows due to missing librocksdbjni-win64.dll 1.9.8. Community artifacts are used for some native dependencies on specific platforms Red Hat build of Quarkus has native libraries for x86_64 architectures for the following components: io.netty.netty-transport-native-epoll io.netty.netty-transport-native-unix-common com.aayushatharva.brotli4j However, Red Hat build of Quarkus lacks native libraries for these components on the ppc64le and s390x architectures. Instead, it downloads the Quarkus community versions of the artifacts rather than the ones built and provided by Red Hat. This issue has no functional impact. Workaround: No workaround is available at this time.
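Although you cannot substitute productized binaries on ppc64le or s390x, you can confirm exactly which community artifacts your build pulls in. A minimal sketch for a Maven project, not part of the original release note; the -Dincludes filter is standard maven-dependency-plugin syntax and the group IDs are the ones listed above: USD mvn dependency:tree -Dincludes=com.aayushatharva.brotli4j USD mvn dependency:tree -Dincludes=io.netty:netty-transport-native-epoll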
For more information, see the following resources: QUARKUS-3434 - Ghost: Quarkus Kafka Streams not supported on Windows due to missing librocksdbjni-win64.dll 1.9.9. Dependency on org.apache.maven:maven:pom:3.6.3 might cause proxy issues The dependency on org.apache.maven:maven:pom:3.6.3 might be resolved when using certain Quarkus extensions. This is not specific to the Gradle plugin but impacts any project with io.smallrye:smallrye-parent:pom:37 in its parent Project Object Model (POM) hierarchy. This dependency can cause build failures for environments behind a proxy that restricts access to org.apache.maven artifacts with version 3.6.x. None of the binary packages from Maven 3.6.3 are downloaded as dependencies of the Quarkus core framework or supported Quarkus extensions. Workaround: No workaround is available at this time. For more information, see QUARKUS-1025 - Gradle plugin drags in maven core 3.6.x 1.9.10. Build failure in the starter application generated by JBang with the Red Hat extension registry Building the starter application generated by JBang with the Red Hat extension registry might result in an unspecified error when postBuild() runs: Red Hat build of Quarkus does not support this JBang scenario or development tooling. Workaround: No workaround is available at this time. For more information, see QUARKUS-3371 - Application created with jbang can not be built 1.10. Advisories related to this release Before you start using and deploying Red Hat build of Quarkus 3.2.12, review the advisories about enhancements, bug fixes, and CVE fixes for other technologies and services related to the release. 1.10.1. Red Hat build of Quarkus 3.2.12 SP1 RHSA-2024:7676 1.10.2. Red Hat build of Quarkus 3.2.12 RHSA-2024:2705 1.10.3. Red Hat build of Quarkus 3.2.11 RHSA-2024:1662 1.10.4. Red Hat build of Quarkus 3.2.10 RHSA-2024:0722 1.10.5. Red Hat build of Quarkus 3.2.9.SP1 RHSA-2024:0495 1.10.6. Red Hat build of Quarkus 3.2.9 RHEA-2023:7612 1.10.7. Red Hat build of Quarkus 3.2.6 RHEA-2023:5416 1.11. Additional resources Migrating applications to Red Hat build of Quarkus version 3.2 guide. Getting Started with Red Hat build of Quarkus Revised on 2024-10-10 15:18:38 UTC
"@Retention(RetentionPolicy.RUNTIME) @Target({ ElementType.METHOD }) @TestSecurity(user = \"userOidc\", roles = \"viewer\") @OidcSecurity(introspectionRequired = true, introspection = { @TokenIntrospection(key = \"email\", value = \"[email protected]\") } ) public @interface TestSecurityMetaAnnotation { }",
"quarkus image build docker",
"quarkus image push --registry=<image registry> --registry-username=<registry username> --registry-password-stdin",
"@Entity @NamedQuery(name = \"Person.containsInName\", query = \"from Person where name like CONCAT('%', CONCAT(:name, '%'))\") public class Person extends PanacheEntity { String name; }",
"quarkus.datasource.<TX_LOG>.jdbc.transactions=disabled",
"jakarta.enterprise.inject.spi.DeploymentException: @Transactional does not affect method com.acme.MyBean.myMethod() because the method is private. [...]",
"@AlternativePriority(1)",
"@Alternative @Priority(1)",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5-mockito</artifactId> <exclusions> <exclusion> <groupId>org.mockito</groupId> <artifactId>mockito-subclass</artifactId> </exclusion> </exclusions> <dependency>",
"UPDATE JOB_DETAILS SET JOB_CLASS_NAME = 'io.quarkus.quartz.runtime.QuartzSchedulerImplUSDInvokerJob' WHERE JOB_CLASS_NAME = 'io.quarkus.quartz.runtime.QuartzSchedulerUSDInvokerJob';",
"QuarkusTransaction.run(() -> { ... }); QuarkusTransaction.call(() -> { ... });",
"QuarkusTransaction.requiringNew().run(() -> { ... }); QuarkusTransaction.requiringNew().call(() -> { ... });",
"QuarkusTransaction.run(QuarkusTransaction.runOptions() .semantic(RunOptions.Semantic.REQUIRED), () -> { ... }); QuarkusTransaction.call(QuarkusTransaction.runOptions() .semantic(RunOptions.Semantic.REQUIRED), () -> { ... });",
"QuarkusTransaction.joiningExisting().run(() -> { ... }); QuarkusTransaction.joiningExisting().call(() -> { ... });",
"QuarkusTransaction.run(QuarkusTransaction.runOptions() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }), () -> { ... }); QuarkusTransaction.call(QuarkusTransaction.runOptions() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }), () -> { ... });",
"QuarkusTransaction.requiringNew() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }) .run(() -> { ... }); QuarkusTransaction.requiringNew() .timeout(10) .exceptionHandler((throwable) -> { if (throwable instanceof SomeException) { return RunOptions.ExceptionResult.COMMIT; } return RunOptions.ExceptionResult.ROLLBACK; }) .call(() -> { ... });",
"COPY --chown=1001:root target/*.so /work/ COPY --chown=1001:root target/*-runner /work/application",
"The current machine does not support all of the following CPU features that are required by the image: [CX8, CMOV, FXSR, MMX, SSE, SSE2, SSE3, SSSE3, SSE4_1, SSE4_2, POPCNT, LZCNT, AVX, AVX2, BMI1, BMI2, FMA]. Please rebuild the executable with an appropriate setting of the -march option.",
"quarkus.native.additional-build-args=-march=compatibility",
"quarkus.native.additional-build-args=-march=x86-64-v4",
"curl -H \"Accept: text/plain\" localhost:8080/q/metrics/",
"quarkus.otel.traces.exporter=cdi",
"quarkus.otel.traces.exporter=otlp",
"quarkus.datasource.jdbc.url=jdbc:otel:postgresql://localhost:5432/mydatabase use the 'OpenTelemetryDriver' instead of the one for your database quarkus.datasource.jdbc.driver=io.opentelemetry.instrumentation.jdbc.OpenTelemetryDriver",
"quarkus.datasource.jdbc.telemetry=true",
"quarkus.http.cors=true quarkus.http.cors.origins=/.*/",
"quarkus.oidc.token-state-manager.encryption-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU",
"quarkus.oidc.token-state-manager.encryption-required=false",
"quarkus.oidc-client.credentials.jwt.key-store-password=password quarkus.oidc-client.credentials.jwt.key-password=password",
"package org.acme.one; import jakarta.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Model { private String name1; public String getName1() { return name1; } public void setName1(String name1) { this.name1 = name1; } } package org.acme.two; import jakarta.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Model { private String name2; public String getName2() { return name2; } public void setName2(String name2) { this.name2 = name2; } }",
"quarkus.jaxb.validate-jaxb-context=true",
"quarkus.jaxb.exclude-classes=org.acme.one.Model,org.acme.two.Model",
"quarkus.jaxb.exclude-classes=org.acme.*",
"[jbang] [ERROR] Issue running postBuild() dev.jbang.cli.ExitException: Issue running postBuild()"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/release_notes_for_red_hat_build_of_quarkus_3.2/assembly_release-notes-quarkus_quarkus-release-notes |
Chapter 8. Using the vSphere Problem Detector Operator 8.1. About the vSphere Problem Detector Operator The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. The Operator runs in the openshift-cluster-storage-operator namespace and is started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. The vSphere Problem Detector Operator communicates with the vSphere vCenter Server to determine the virtual machines in the cluster, the default datastore, and other information about the vSphere vCenter Server configuration. The Operator uses the credentials from the Cloud Credential Operator to connect to vSphere. The Operator runs the checks according to the following schedule: The checks run every hour. If any check fails, the Operator runs the checks again in intervals of 1 minute, 2 minutes, 4 minutes, 8 minutes, and so on. The Operator doubles the interval up to a maximum interval of 8 hours. When all checks pass, the schedule returns to an hour interval. The Operator increases the frequency of the checks after a failure so that the Operator can report success quickly after the failure condition is remedied. You can run the Operator manually for immediate troubleshooting information. 8.2. Running the vSphere Problem Detector Operator checks You can override the schedule for running the vSphere Problem Detector Operator checks and run the checks immediately. The vSphere Problem Detector Operator automatically runs the checks every hour. However, when the Operator starts, it runs the checks immediately. The Operator is started by the Cluster Storage Operator when the Cluster Storage Operator starts and determines that the cluster is running on vSphere. To run the checks immediately, scale the vSphere Problem Detector Operator deployment to 0 and back to 1, which restarts the Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Scale the Operator to 0: USD oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator Verification Verify that the pods have restarted by running the following command: USD oc -n openshift-cluster-storage-operator get pod -l name=vsphere-problem-detector-operator -w Example output NAME READY STATUS RESTARTS AGE vsphere-problem-detector-operator-77486bd645-9ntpb 1/1 Running 0 11s The AGE field must indicate that the pod has restarted. 8.3. Viewing the events from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates events that can be viewed from the command line or from the OpenShift Container Platform web console.
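Because the Operator reruns failed checks on an accelerating schedule, it can be useful to watch new events as they arrive rather than listing them repeatedly. The following is a minimal sketch, not part of the original procedure; it uses the standard watch flag and the namespace used throughout this chapter: USD oc get event -n openshift-cluster-storage-operator -w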
Procedure To view the events by using the command line, run the following command: USD oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp} Example output 16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader To view the events by using the OpenShift Container Platform web console, navigate to Home → Events and select openshift-cluster-storage-operator from the Project menu. 8.4. Viewing the logs from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates log records that can be viewed from the command line or from the OpenShift Container Platform web console. Procedure To view the logs by using the command line, run the following command: USD oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator Example output I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed To view the Operator logs with the OpenShift Container Platform web console, perform the following steps: Navigate to Workloads → Pods. Select openshift-cluster-storage-operator from the Projects menu. Click the link for the vsphere-problem-detector-operator pod. Click the Logs tab on the Pod details page to view the logs. 8.5. Configuration checks run by the vSphere Problem Detector Operator The following tables identify the configuration checks that the vSphere Problem Detector Operator runs. Some checks verify the configuration of the cluster. Other checks verify the configuration of each node in the cluster. Table 8.1. Cluster configuration checks Name Description CheckDefaultDatastore Verifies that the default datastore name in the vSphere configuration is short enough for use with dynamic provisioning. If this check fails, you can expect the following: systemd logs errors to the journal such as Failed to set up mount unit: Invalid argument. systemd does not unmount volumes if the virtual machine is shut down or rebooted without draining all the pods from the node. If this check fails, reconfigure vSphere with a shorter name for the default datastore. CheckFolderPermissions Verifies the permission to list volumes in the default datastore. This permission is required to create volumes. The Operator verifies the permission by listing the / and /kubevols directories. The root directory must exist. It is acceptable if the /kubevols directory does not exist when the check runs. The /kubevols directory is created when the datastore is used with dynamic provisioning if the directory does not already exist. If this check fails, review the required permissions for the vCenter account that was specified during the OpenShift Container Platform installation.
CheckStorageClasses Verifies the following: The fully qualified path to each persistent volume that is provisioned by this storage class is less than 255 characters. If a storage class uses a storage policy, the storage class must use one policy only and that policy must be defined. CheckTaskPermissions Verifies the permission to list recent tasks and datastores. ClusterInfo Collects the cluster version and UUID from vSphere vCenter. Table 8.2. Node configuration checks Name Description CheckNodeDiskUUID Verifies that all the vSphere virtual machines are configured with disk.enableUUID=TRUE. If this check fails, see the How to check 'disk.EnableUUID' parameter from VM in vSphere Red Hat Knowledgebase solution. CheckNodeProviderID Verifies that all nodes are configured with the ProviderID from vSphere vCenter. This check fails when the output from the following command does not include a provider ID for each node. USD oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID If this check fails, refer to the vSphere product documentation for information about setting the provider ID for each node in the cluster. CollectNodeESXiVersion Reports the version of the ESXi hosts that run nodes. CollectNodeHWVersion Reports the virtual machine hardware version for a node. 8.6. About the storage class configuration check The names for persistent volumes that use vSphere storage are related to the datastore name and cluster ID. When a persistent volume is created, systemd creates a mount unit for the persistent volume. The systemd process has a 255 character limit for the length of the fully qualified path to the VMDK file that is used for the persistent volume. The fully qualified path is based on the naming conventions for systemd and vSphere. The naming conventions use the following pattern: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk The naming conventions require 205 characters of the 255 character limit. The datastore name and the cluster ID are determined from the deployment. The datastore name and cluster ID are substituted into the preceding pattern. Then the path is processed with the systemd-escape command to escape special characters. For example, a hyphen character uses four characters after it is escaped. The escaped value is \x2d. After processing with systemd-escape to ensure that systemd can access the fully qualified path to the VMDK file, the length of the path must be less than 255 characters.
This count includes both successes and failures. vsphere_node_check_errors Number of failed node-level checks that the vSphere Problem Detector Operator performed. For example, a value of 1 indicates that one node-level check failed. vsphere_node_hw_version_total Number of vSphere nodes with a specific hardware version. vsphere_vcenter_info Information about the vSphere vCenter Server. 8.8. Additional resources About OpenShift Container Platform monitoring | [
"oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator",
"oc -n openshift-cluster-storage-operator get pod -l name=vsphere-problem-detector-operator -w",
"NAME READY STATUS RESTARTS AGE vsphere-problem-detector-operator-77486bd645-9ntpb 1/1 Running 0 11s",
"oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}",
"16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader",
"oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator",
"I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed",
"oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID",
"/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_vsphere/using-vsphere-problem-detector-operator |
probe::socket.send | probe::socket.send Name probe::socket.send - Message sent on a socket. Synopsis Values success Was send successful? (1 = yes, 0 = no) protocol Protocol value flags Socket flags value name Name of this probe state Socket state value size Size of message sent (in bytes) or error code if success = 0 type Socket type value family Protocol family value Context The message sender | [
"socket.send"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-socket-send |
14.12.5. Retrieving Storage Volume Information | 14.12.5. Retrieving Storage Volume Information The vol-pool --uuid vol-key-or-path command returns the pool name or UUID for a given volume. By default, the pool name is returned. If the --uuid option is given, the pool UUID is returned instead. The command requires the vol-key-or-path which is the key or path of the volume for which to return the requested information. The vol-path --pool pool-or-uuid vol-name-or-key command returns the path for a given volume. The command requires --pool pool-or-uuid , which is the name or UUID of the storage pool the volume is in. It also requires vol-name-or-key which is the name or key of the volume for which the path has been requested. The vol-name vol-key-or-path command returns the name for a given volume, where vol-key-or-path is the key or path of the volume to return the name for. The vol-key --pool pool-or-uuid vol-name-or-path command returns the volume key for a given volume where --pool pool-or-uuid is the name or UUID of the storage pool the volume is in and vol-name-or-path is the name or path of the volume to return the volume key for. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-storage_volume_commands-retrieving_storage_volume_information |
Chapter 29. Load balancing on RHOSP | Chapter 29. Load balancing on RHOSP 29.1. Using the Octavia OVN load balancer provider driver with Kuryr SDN If your OpenShift Container Platform cluster uses Kuryr and was installed on a Red Hat OpenStack Platform (RHOSP) 13 cloud that was later upgraded to RHOSP 16, you can configure it to use the Octavia OVN provider driver. Important Kuryr replaces existing load balancers after you change provider drivers. This process results in some downtime. Prerequisites Install the RHOSP CLI, openstack . Install the OpenShift Container Platform CLI, oc . Verify that the Octavia OVN driver on RHOSP is enabled. Tip To view a list of available Octavia drivers, on a command line, enter openstack loadbalancer provider list . The ovn driver is displayed in the command's output. Procedure To change from the Octavia Amphora provider driver to Octavia OVN: Open the kuryr-config ConfigMap. On a command line, enter: USD oc -n openshift-kuryr edit cm kuryr-config In the ConfigMap, delete the line that contains kuryr-octavia-provider: default . For example: ... kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1 ... 1 Delete this line. The cluster will regenerate it with ovn as the value. Wait for the Cluster Network Operator to detect the modification and to redeploy the kuryr-controller and kuryr-cni pods. This process might take several minutes. Verify that the kuryr-config ConfigMap annotation is present with ovn as its value. On a command line, enter: USD oc -n openshift-kuryr edit cm kuryr-config The ovn provider value is displayed in the output: ... kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn ... Verify that RHOSP recreated its load balancers. On a command line, enter: USD openstack loadbalancer list | grep amphora A single Amphora load balancer is displayed. For example: a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora Search for ovn load balancers by entering: USD openstack loadbalancer list | grep ovn The remaining load balancers of the ovn type are displayed. For example: 2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn 29.2. Scaling clusters for application traffic by using Octavia OpenShift Container Platform clusters that run on Red Hat OpenStack Platform (RHOSP) can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create. If your cluster uses Kuryr, the Cluster Network Operator created an internal Octavia load balancer at deployment. You can use this load balancer for application network scaling. If your cluster does not use Kuryr, you must create your own Octavia load balancer to use it for application network scaling. 29.2.1. Scaling clusters by using Octavia If you want to use multiple API load balancers, or if your cluster does not use Kuryr, create an Octavia load balancer and then configure your cluster to use it. 
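You can check this prerequisite before you begin. The following is a minimal sketch, not part of the original procedure, assuming the standard OpenStack client: the first command is the provider listing mentioned in the previous section, and the second confirms that a load-balancer endpoint is registered in the service catalog: USD openstack loadbalancer provider list USD openstack endpoint list --service load-balancer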
Prerequisites Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment. Procedure From a command line, create an Octavia load balancer that uses the Amphora driver: USD openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet> You can use a name of your choice instead of API_OCP_CLUSTER. After the load balancer becomes active, create listeners: USD openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS --protocol-port 6443 API_OCP_CLUSTER Note To view the status of the load balancer, enter openstack loadbalancer list. Create a pool that uses the round robin algorithm and has session persistence enabled: USD openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS To ensure that control plane machines are available, create a health monitor: USD openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443 Add the control plane machines as members of the load balancer pool: USD for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done Optional: To reuse the cluster API floating IP address, unset it: USD openstack floating ip unset USDAPI_FIP Add either the unset API_FIP or a new address to the created load balancer VIP: USD openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP Your cluster now uses Octavia for load balancing. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 29.2.2. Scaling clusters that use Kuryr by using Octavia If your cluster uses Kuryr, associate the API floating IP address of your cluster with the pre-existing Octavia load balancer. Prerequisites Your OpenShift Container Platform cluster uses Kuryr. Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment. Procedure Optional: From a command line, to reuse the cluster API floating IP address, unset it: USD openstack floating ip unset USDAPI_FIP Add either the unset API_FIP or a new address to the created load balancer VIP: USD openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP Your cluster now uses Octavia for load balancing. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 29.3. Scaling for ingress traffic by using RHOSP Octavia You can use Octavia load balancers to scale Ingress controllers on clusters that use Kuryr. Prerequisites Your OpenShift Container Platform cluster uses Kuryr. Octavia is available on your RHOSP deployment. Procedure To copy the current internal router service, on a command line, enter: USD oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml In the file external_router.yaml, change the value of metadata.name to a descriptive name, such as router-external-default, and change the value of spec.type to LoadBalancer.
Example router file apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2 1 Ensure that this value is descriptive, like router-external-default. 2 Ensure that this value is LoadBalancer. Note You can delete timestamps and other information that is irrelevant to load balancing. From a command line, create a service from the external_router.yaml file: USD oc apply -f external_router.yaml Verify that the external IP address of the service is the same as the one that is associated with the load balancer: On a command line, retrieve the external IP address of the service: USD oc -n openshift-ingress get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h Retrieve the IP address of the load balancer: USD openstack loadbalancer list | grep router-external Example output | 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia | Verify that the addresses you retrieved in the previous steps are associated with each other in the floating IP list: USD openstack floating ip list | grep 172.30.235.33 Example output | e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c | You can now use the value of EXTERNAL-IP as the new Ingress address. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 29.4. Configuring an external load balancer You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 29.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 29.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 29.3.
Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment Considerations For a front-end IP address, you can use the same IP address for the Ingress Controller's load balancer and the API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP server every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide Ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 must be reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #...
listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address>
https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy:
strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private | [
"oc -n openshift-kuryr edit cm kuryr-config",
"kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1",
"oc -n openshift-kuryr edit cm kuryr-config",
"kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn",
"openstack loadbalancer list | grep amphora",
"a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora",
"openstack loadbalancer list | grep ovn",
"2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn",
"openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>",
"openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER",
"openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS",
"openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443",
"for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done",
"openstack floating ip unset USDAPI_FIP",
"openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP",
"openstack floating ip unset USDAPI_FIP",
"openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP",
"oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml",
"apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2",
"oc apply -f external_router.yaml",
"oc -n openshift-ingress get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h",
"openstack loadbalancer list | grep router-external",
"| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |",
"openstack floating ip list | grep 172.30.235.33",
"| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.1000.0.0.0:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.0168.21.2101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.1020.2.3:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.1030.2.1:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/load-balancing-openstack |
1.6. NOCACHE Option | 1.6. NOCACHE Option When the NOCACHE option is used, the data is retrieved by the original source data query, not from the materialized cache. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/nocache_option |
Chapter 3. Securing the Load-balancing service To secure communication between its various components, the Red Hat OpenStack Platform Load-balancing service (octavia) uses the TLS encryption protocol and public key cryptography. Section 3.1, "Two-way TLS authentication in the Load-balancing service" Section 3.2, "Certificate lifecycles for the Load-balancing service" Section 3.3, "Configuring Load-balancing service certificates and keys" 3.1. Two-way TLS authentication in the Load-balancing service The controller processes of the Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) communicate with Load-balancing service instances (amphorae) over a TLS connection. The Load-balancing service validates that both sides are trusted by using two-way TLS authentication. Note This is a simplification of the full TLS handshake process. For more information about the TLS handshake process, see TLS 1.3 RFC 8446. There are two phases involved in two-way TLS authentication. In phase one, a Controller process, such as the Load-balancing service worker process, connects to a Load-balancing service instance, and the instance presents its server certificate to the Controller. The Controller then validates the server certificate against the server Certificate Authority (CA) certificate stored on the Controller. If the presented certificate is validated against the server CA certificate, the connection proceeds to phase two. In phase two, the Controller presents its client certificate to the Load-balancing service instance. The instance then validates the certificate against the client CA certificate stored inside the instance. If this certificate is successfully validated, the rest of the TLS handshake continues to establish the secure communication channel between the Controller and the Load-balancing service instance. Additional resources Section 3.3, "Configuring Load-balancing service certificates and keys" 3.2. Certificate lifecycles for the Load-balancing service The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) controller uses the server certificate authority certificates and keys to uniquely generate a certificate for each Load-balancing service instance (amphora). The Load-balancing service housekeeping controller process automatically rotates these server certificates as they near their expiration date. The Load-balancing service controller processes use the client certificates. The human operator who manages these TLS certificates usually grants a long expiration period because the certificates are used on the cloud control plane. Additional resources Section 3.3, "Configuring Load-balancing service certificates and keys" 3.3. Configuring Load-balancing service certificates and keys You can configure Red Hat OpenStack Platform (RHOSP) director to generate certificates and keys, or you can supply your own. Configure director to automatically create the required private certificate authorities and issue the necessary certificates. These certificates are for internal Load-balancing service (octavia) communication only and are not exposed to users. Important RHOSP director generates certificates and keys and automatically renews them before they expire. If you use your own certificates, you must remember to renew them. Note Switching from manually generated certificates to automatically generated certificates is not supported by RHOSP director.
However, you can force re-creation of the certificates by deleting the existing certificates on the Controller nodes in the /var/lib/config-data/puppet-generated/octavia/etc/octavia/certs directory and updating the overcloud. If you must use your own certificates and keys, complete the following steps: Prerequisites Read and understand "Changing Load-balancing service default settings." (See the link in "Additional resources.") Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Create a YAML custom environment file. Example In the YAML environment file, add the following parameters with values appropriate for your site: OctaviaCaCert : The certificate for the CA that Octavia uses to generate certificates. OctaviaCaKey : The private CA key used to sign the generated certificates. OctaviaCaKeyPassphrase : The passphrase used with the private CA key above. OctaviaClientCert : The client certificate and unencrypted key issued by the Octavia CA for the controllers. OctaviaGenerateCerts : The Boolean that instructs director to enable (true) or disable (false) automatic certificate and key generation. Important You must set OctaviaGenerateCerts to false. Example Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Additional resources Section 4.3, "Changing Load-balancing service default settings" Environment files in the Customizing your Red Hat OpenStack Platform deployment guide Including environment files in overcloud creation in the Customizing your Red Hat OpenStack Platform deployment guide | [
"source ~/stackrc",
"vi /home/stack/templates/my-octavia-environment.yaml",
"parameter_defaults: OctaviaCaCert: | -----BEGIN CERTIFICATE----- MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV [snip] sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp -----END CERTIFICATE----- OctaviaCaKey: | -----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED [snip] -----END RSA PRIVATE KEY-----[ OctaviaClientCert: | -----BEGIN CERTIFICATE----- MIIDmjCCAoKgAwIBAgIBATANBgkqhkiG9w0BAQsFADBcMQswCQYDVQQGEwJVUzEP [snip] 270l5ILSnfejLxDH+vI= -----END CERTIFICATE----- -----BEGIN PRIVATE KEY----- MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDU771O8MTQV8RY [snip] KfrjE3UqTF+ZaaIQaz3yayXW -----END PRIVATE KEY----- OctaviaCaKeyPassphrase: b28c519a-5880-4e5e-89bf-c042fc75225d OctaviaGenerateCerts: false [rest of file snipped]",
"openstack overcloud deploy --templates -e <your_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -e /home/stack/templates/my-octavia-environment.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_load_balancing_as_a_service/secure-lb-service_rhosp-lbaas |
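Note If you supply your own Load-balancing service certificates and keys as described above, director does not renew them, so you must track their expiration yourself. The following OpenSSL commands are a minimal sketch; the file names are placeholders for your own certificate files.

# Show when the client certificate expires.
openssl x509 -enddate -noout -in client.cert.pem

# Confirm that the client certificate chains to your client CA.
openssl verify -CAfile client_ca.cert.pem client.cert.pem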
Installing on a single node | Installing on a single node OpenShift Container Platform 4.13 Installing OpenShift Container Platform on a single node Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_a_single_node/index |
4.4. Set the VDB Connection Type via Admin API | 4.4. Set the VDB Connection Type via Admin API You can change a VDB connection type using the changeVDBConnectionType method provided by the Admin interface within the Admin API package ( org.teiid.adminapi ). Javadocs for Red Hat JBoss Data Virtualization can be found on the Red Hat Customer Portal . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/set_the_vdb_connection_type_via_admin_api |
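Note The following standalone Java client is a minimal sketch, not an excerpt from the product documentation: the host, port, credentials, and VDB name are placeholder assumptions, and the AdminFactory connection signature should be verified against the Javadocs referenced above.

import org.teiid.adminapi.Admin;
import org.teiid.adminapi.AdminFactory;
import org.teiid.adminapi.VDB.ConnectionType;

public class SetVdbConnectionType {
    public static void main(String[] args) throws Exception {
        // Connect to the server's management interface (placeholder address and credentials).
        Admin admin = AdminFactory.getInstance().createAdmin(
                "localhost", 9990, "admin", "password".toCharArray());
        try {
            // Allow any client connection to version 1 of the hypothetical "SalesVDB".
            admin.changeVDBConnectionType("SalesVDB", 1, ConnectionType.ANY);
        } finally {
            admin.close();
        }
    }
}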
18.3.4. iptables Match Options | 18.3.4. iptables Match Options Different network protocols provide specialized matching options which can be configured to match a particular packet using that protocol. However, the protocol must first be specified in the iptables command. For example, -p <protocol-name> (where <protocol-name> is the target protocol) makes options for the specified protocol available. 18.3.4.1. TCP Protocol These match options are available for the TCP protocol ( -p tcp ): --dport - Sets the destination port for the packet. Use either a network service name (such as www or smtp ), a port number, or a range of port numbers to configure this option. To browse the names and aliases of network services and the port numbers they use, view the /etc/services file. The --destination-port match option is synonymous with --dport . To specify a range of port numbers, separate the two numbers with a colon ( : ), such as -p tcp --dport 3000:3200 . The largest valid range is 0:65535 . Use an exclamation point character ( ! ) after the --dport option to match all packets which do not use that network service or port. --sport - Sets the source port of the packet using the same options as --dport . The --source-port match option is synonymous with --sport . --syn - Applies to all TCP packets designed to initiate communication, commonly called SYN packets . Any packets that carry a data payload are not touched. Placing an exclamation point character ( ! ) as a flag after the --syn option causes all non-SYN packets to be matched. --tcp-flags - Allows TCP packets with specific set bits, or flags, to match a rule. The --tcp-flags match option accepts two parameters. The first parameter is the mask, which sets the flags to be examined in the packet. The second parameter refers to the flags that must be set for the packet to match. The possible flags are: ACK FIN PSH RST SYN URG ALL NONE For example, an iptables rule which contains -p tcp --tcp-flags ACK,FIN,SYN SYN only matches TCP packets that have the SYN flag set and the ACK and FIN flags unset. Using the exclamation point character ( ! ) after --tcp-flags reverses the effect of the match option. --tcp-option - Attempts to match with TCP-specific options that can be set within a particular packet. This match option can also be reversed with the exclamation point character ( ! ). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-iptables-options-match
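Note To tie these options together, here are three example rules. The chains and jump targets are illustrative choices, but each match option is used exactly as described above.

# Accept new HTTP connections (SYN packets) addressed to destination port 80.
iptables -A INPUT -p tcp --dport 80 --syn -j ACCEPT

# Drop packets whose source port falls within the range 3000 to 3200.
iptables -A INPUT -p tcp --sport 3000:3200 -j DROP

# Log TCP packets that have the SYN flag set while ACK and FIN are unset.
iptables -A INPUT -p tcp --tcp-flags ACK,FIN,SYN SYN -j LOG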
Chapter 1. Integrating Service Mesh with OpenShift Serverless | Chapter 1. Integrating Service Mesh with OpenShift Serverless The OpenShift Serverless Operator provides Kourier as the default ingress for Knative. However, you can use Service Mesh with OpenShift Serverless whether Kourier is enabled or not. Integrating with Kourier disabled allows you to configure additional networking and routing options that the Kourier ingress does not support, such as mTLS functionality. Important Integration of Red Hat OpenShift Service Mesh 3 with OpenShift Serverless is not supported. For a list of supported versions, see Red Hat OpenShift Serverless Supported Configurations . Note the following assumptions and limitations: All Knative internal components, as well as Knative Services, are part of the Service Mesh and have sidecar injection enabled. This means that strict mTLS is enforced within the whole mesh. All requests to Knative Services require an mTLS connection, with the client having to send its certificate, except calls coming from OpenShift Routing. OpenShift Serverless with Service Mesh integration can only target one service mesh. Multiple meshes can be present in the cluster, but OpenShift Serverless is only available on one of them. Changing the target ServiceMeshMemberRoll that OpenShift Serverless is part of, meaning moving OpenShift Serverless to another mesh, is not supported. The only way to change the targeted service mesh is to uninstall and reinstall OpenShift Serverless. 1.1. Prerequisites You have access to a Red Hat OpenShift Serverless account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have installed the Serverless Operator. You have installed the Red Hat OpenShift Service Mesh Operator. The examples in the following procedures use the domain example.com . The example certificate for this domain is used as a certificate authority (CA) that signs the subdomain certificate. To complete and verify these procedures in your deployment, you need either a certificate signed by a widely trusted public CA or a CA provided by your organization. Example commands must be adjusted according to your domain, subdomain, and CA. You must configure the wildcard certificate to match the domain of your OpenShift Container Platform cluster. For example, if your OpenShift Container Platform console address is https://console-openshift-console.apps.openshift.example.com , you must configure the wildcard certificate so that the domain is *.apps.openshift.example.com . For more information about configuring wildcard certificates, see the following topic about Creating a certificate to encrypt incoming external traffic . If you want to use any domain name, including those which are not subdomains of the default OpenShift Container Platform cluster domain, you must set up domain mapping for those domains. For more information, see the OpenShift Serverless documentation about Creating a custom domain mapping . Important OpenShift Serverless only supports the use of Red Hat OpenShift Service Mesh functionality that is explicitly documented in this guide, and does not support other undocumented features. Using Serverless 1.31 with Service Mesh is only supported with Service Mesh version 2.2 or later. For details and information on versions other than 1.31, see the "Red Hat OpenShift Serverless Supported Configurations" page. 1.2. Additional resources Red Hat OpenShift Serverless Supported Configurations Kourier and Istio ingresses 1.3.
Creating a certificate to encrypt incoming external traffic By default, the Service Mesh mTLS feature only secures traffic inside of the Service Mesh itself, between the ingress gateway and individual pods that have sidecars. To encrypt traffic as it flows into the OpenShift Container Platform cluster, you must generate a certificate before you enable the OpenShift Serverless and Service Mesh integration. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have installed the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads. Procedure Create a root certificate and private key that signs the certificates for your Knative services: USD openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \ -subj '/O=Example Inc./CN=example.com' \ -keyout root.key \ -out root.crt Create a wildcard certificate: USD openssl req -nodes -newkey rsa:2048 \ -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \ -keyout wildcard.key \ -out wildcard.csr Sign the wildcard certificate: USD openssl x509 -req -days 365 -set_serial 0 \ -CA root.crt \ -CAkey root.key \ -in wildcard.csr \ -out wildcard.crt Create a secret by using the wildcard certificate: USD oc create -n istio-system secret tls wildcard-certs \ --key=wildcard.key \ --cert=wildcard.crt This certificate is picked up by the gateways created when you integrate OpenShift Serverless with Service Mesh, so that the ingress gateway serves traffic with this certificate. 1.4. Integrating Service Mesh with OpenShift Serverless 1.4.1. Verifying installation prerequisites Before installing and configuring the Service Mesh integration with Serverless, verify that the prerequisites have been met. Procedure Check for conflicting gateways: Example command USD oc get gateway -A -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{" "}{@.spec.servers}{"\n"}{end}' | column -t Example output knative-serving/knative-ingress-gateway [{"hosts":["*"],"port":{"name":"https","number":443,"protocol":"HTTPS"},"tls":{"credentialName":"wildcard-certs","mode":"SIMPLE"}}] knative-serving/knative-local-gateway [{"hosts":["*"],"port":{"name":"http","number":8081,"protocol":"HTTP"}}] This command should not return a Gateway that binds port: 443 and hosts: ["*"] , except the Gateways in knative-serving and Gateways that are part of another Service Mesh instance. Note The mesh that Serverless is part of must be distinct and preferably reserved only for Serverless workloads. That is because additional configuration, such as Gateways , might interfere with the Serverless gateways knative-local-gateway and knative-ingress-gateway . Red Hat OpenShift Service Mesh only allows one Gateway to claim a wildcard host binding ( hosts: ["*"] ) on the same port ( port: 443 ). If another Gateway is already binding this configuration, a separate mesh has to be created for Serverless workloads. 
Check whether Red Hat OpenShift Service Mesh istio-ingressgateway is exposed as type NodePort or LoadBalancer : Example command USD oc get svc -A | grep istio-ingressgateway Example output istio-system istio-ingressgateway ClusterIP 172.30.46.146 <none> 15021/TCP,80/TCP,443/TCP 9m50s This command should not return a Service object of type NodePort or LoadBalancer . Note Cluster external Knative Services are expected to be called via OpenShift Ingress using OpenShift Routes. It is not supported to access Service Mesh directly, such as by exposing the istio-ingressgateway using a Service object with type NodePort or LoadBalancer . 1.4.2. Installing and configuring Service Mesh To integrate Serverless with Service Mesh, you need to install Service Mesh with a specific configuration. Procedure Create a ServiceMeshControlPlane resource in the istio-system namespace with the following configuration: Important If you have an existing ServiceMeshControlPlane object, make sure that you have the same configuration applied. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: profiles: - default security: dataPlane: mtls: true 1 techPreview: meshConfig: defaultConfig: terminationDrainDuration: 35s 2 gateways: ingress: service: metadata: labels: knative: ingressgateway 3 proxy: networking: trafficControl: inbound: excludedPorts: 4 - 8444 # metrics - 8022 # serving: wait-for-drain k8s pre-stop hook 1 Enforce strict mTLS in the mesh. Only calls using a valid client certificate are allowed. 2 Serverless has a graceful termination for Knative Services of 30 seconds. istio-proxy needs to have a longer termination duration to make sure no requests are dropped. 3 Define a specific selector for the ingress gateway to target only the Knative gateway. 4 These ports are called by Kubernetes and cluster monitoring, which are not part of the mesh and cannot be called using mTLS. Therefore, these ports are excluded from the mesh. Add the namespaces that you would like to integrate with Service Mesh to the ServiceMeshMemberRoll object as members: Example servicemesh-member-roll.yaml configuration file apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - knative-eventing - your-OpenShift-projects 1 A list of namespaces to be integrated with Service Mesh. Important This list of namespaces must include the knative-serving and knative-eventing namespaces. Apply the ServiceMeshMemberRoll resource: USD oc apply -f servicemesh-member-roll.yaml Create the necessary gateways so that Service Mesh can accept traffic.
The following example uses the knative-local-gateway object with the ISTIO_MUTUAL mode (mTLS): Example istio-knative-gateways.yaml configuration file apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - "*" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 8081 name: https protocol: HTTPS 2 tls: mode: ISTIO_MUTUAL 3 hosts: - "*" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: "true" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081 1 Name of the secret containing the wildcard certificate. 2 3 The knative-local-gateway object serves HTTPS traffic and expects all clients to send requests using mTLS. This means that only traffic coming from within Service Mesh is possible. Workloads from outside the Service Mesh must use the external domain via OpenShift Routing. Apply the Gateway resources: USD oc apply -f istio-knative-gateways.yaml 1.4.3. Installing and configuring Serverless After installing Service Mesh, you need to install Serverless with a specific configuration. Procedure Install Knative Serving with the following KnativeServing custom resource, which enables the Istio integration: Example knative-serving-config.yaml configuration file apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: autoscaler labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" config: istio: 3 gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.<your-istio-namespace>.svc.cluster.local local-gateway.knative-serving.knative-local-gateway: knative-local-gateway.<your-istio-namespace>.svc.cluster.local 1 Enable Istio integration. 2 Enable sidecar injection for Knative Serving data plane pods. 3 If your Istio control plane is not running in the istio-system namespace, you must set these two flags to the correct namespace.
Apply the KnativeServing resource: USD oc apply -f knative-serving-config.yaml Install Knative Eventing with the following KnativeEventing object, which enables the Istio integration: Example knative-eventing-config.yaml configuration file apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: features: istio: enabled 1 workloads: 2 - name: pingsource-mt-adapter labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: imc-dispatcher labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: mt-broker-ingress labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: mt-broker-filter labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" 1 Enable Eventing Istio controller to create a DestinationRule for each InMemoryChannel or KafkaChannel service. 2 Enable sidecar injection for Knative Eventing pods. Apply the KnativeEventing resource: USD oc apply -f knative-eventing-config.yaml Install Knative Kafka with the following KnativeKafka custom resource, which enables the Istio integration: Example knative-kafka-config.yaml configuration file apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true bootstrapServers: <bootstrap_servers> 1 source: enabled: true broker: enabled: true defaultConfig: bootstrapServers: <bootstrap_servers> 2 numPartitions: <num_partitions> replicationFactor: <replication_factor> sink: enabled: true workloads: 3 - name: kafka-controller labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-broker-receiver labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-broker-dispatcher labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-channel-receiver labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-channel-dispatcher labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-source-dispatcher labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-sink-receiver labels: "sidecar.istio.io/inject": "true" annotations: "sidecar.istio.io/rewriteAppHTTPProbers": "true" 1 2 The Apache Kafka cluster URL, for example my-cluster-kafka-bootstrap.kafka:9092 . 3 Enable sidecar injection for Knative Kafka pods. Apply the KnativeEventing object: USD oc apply -f knative-kafka-config.yaml Install ServiceEntry to inform Service Mesh of the communication between KnativeKafka components and an Apache Kafka cluster: Example kafka-cluster-serviceentry.yaml configuration file apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: kafka-cluster namespace: knative-eventing spec: hosts: 1 - <bootstrap_servers_without_port> exportTo: - "." 
ports: 2 - number: 9092 name: tcp-plain protocol: TCP - number: 9093 name: tcp-tls protocol: TCP - number: 9094 name: tcp-sasl-tls protocol: TCP - number: 9095 name: tcp-sasl-tls protocol: TCP - number: 9096 name: tcp-tls protocol: TCP location: MESH_EXTERNAL resolution: NONE 1 The list of Apache Kafka cluster hosts, for example my-cluster-kafka-bootstrap.kafka . 2 Apache Kafka cluster listener ports. Note The listed ports in spec.ports are example TCP ports. The actual values depend on how the Apache Kafka cluster is configured. Apply the ServiceEntry resource: USD oc apply -f kafka-cluster-serviceentry.yaml 1.4.4. Verifying the integration After installing Service Mesh and Serverless with Istio enabled, you can verify that the integration works. Procedure Create a Knative Service that has sidecar injection enabled and uses a pass-through route: Example knative-service.yaml configuration file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: "true" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 3 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: <image_url> 1 A namespace that is part of the service mesh member roll. 2 Instruct Knative Serving to generate a pass-through enabled route, so that the certificates you have generated are served through the ingress gateway directly. 3 Inject Service Mesh sidecars into the Knative service pods. Important Always add the annotation from this example to all of your Knative Services to make them work with Service Mesh. Apply the Service resource: USD oc apply -f knative-service.yaml Access your serverless application by using a secure connection that is now trusted by the CA: USD curl --cacert root.crt <service_url> For example, run: Example command USD curl --cacert root.crt https://hello-default.apps.openshift.example.com Example output Hello Openshift! 1.5. Enabling Knative Serving and Knative Eventing metrics when using Service Mesh with mTLS If Service Mesh is enabled with Mutual Transport Layer Security (mTLS), metrics for Knative Serving and Knative Eventing are disabled by default, because Service Mesh prevents Prometheus from scraping metrics. You can enable Knative Serving and Knative Eventing metrics when using Service Mesh and mTLS. Prerequisites You have one of the following permissions to access the cluster: Cluster administrator permissions on OpenShift Container Platform Cluster administrator permissions on Red Hat OpenShift Service on AWS Dedicated administrator permissions on OpenShift Dedicated You have installed the OpenShift CLI ( oc ). You have access to a project with the appropriate roles and permissions to create applications and other workloads. You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster. You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled. Procedure Specify prometheus as the metrics.backend-destination in the observability spec of the Knative Serving custom resource (CR): apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: observability: metrics.backend-destination: "prometheus" ... This step prevents metrics from being disabled by default.
Note When you configure ServiceMeshControlPlane with manageNetworkPolicy: false , you must use the serverless.openshift.io/disable-istio-net-policies-generation annotation on KnativeEventing to ensure proper event delivery, as described in "Disabling the default network policies". The same mechanism is used for Knative Eventing. To enable metrics for Knative Eventing, you need to specify prometheus as the metrics.backend-destination in the observability spec of the Knative Eventing custom resource (CR) as follows: apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: observability: metrics.backend-destination: "prometheus" ... Modify and reapply the default Service Mesh control plane in the istio-system namespace, so that it includes the following spec: ... spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444 ... 1.6. Disabling the default network policies The OpenShift Serverless Operator generates the network policies by default. To disable the default network policy generation, you can add the serverless.openshift.io/disable-istio-net-policies-generation annotation in the KnativeEventing and KnativeServing custom resources (CRs). Prerequisites You have one of the following permissions to access the cluster: Cluster administrator permissions on OpenShift Container Platform Cluster administrator permissions on Red Hat OpenShift Service on AWS Dedicated administrator permissions on OpenShift Dedicated You have installed the OpenShift CLI ( oc ). You have access to a project with the appropriate roles and permissions to create applications and other workloads. You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster. You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled. Procedure Add the serverless.openshift.io/disable-istio-net-policies-generation: "true" annotation to your Knative custom resources. Note The OpenShift Serverless Operator generates the required network policies by default. When you configure ServiceMeshControlPlane with manageNetworkPolicy: false , you must disable the default network policy generation to ensure proper event delivery. To disable the default network policy generation, you can add the serverless.openshift.io/disable-istio-net-policies-generation annotation in the KnativeEventing and KnativeServing custom resources (CRs). Annotate the KnativeEventing CR by running the following command: USD oc edit KnativeEventing -n knative-eventing Example KnativeEventing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing annotations: serverless.openshift.io/disable-istio-net-policies-generation: "true" Annotate the KnativeServing CR by running the following command: USD oc edit KnativeServing -n knative-serving Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/disable-istio-net-policies-generation: "true" 1.7. Improving net-istio memory usage by using secret filtering for Service Mesh By default, the informers implementation for the Kubernetes client-go library fetches all resources of a particular type. This can lead to a substantial overhead when many resources are available, which can cause the Knative net-istio ingress controller to fail on large clusters due to memory leaking.
However, a filtering mechanism is available for the Knative net-istio ingress controller, which enables the controller to fetch only Knative-related secrets. The secret filtering is enabled by default on the OpenShift Serverless Operator side. An environment variable, ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true , is added by default to the net-istio controller pods. Important If you enable secret filtering, you must label all of your secrets with networking.internal.knative.dev/certificate-uid: "<id>" . Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets; a labeling example follows at the end of this section. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads. Install Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh is supported only for use with Red Hat OpenShift Service Mesh version 2.0.5 or later. Install the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). You can disable the secret filtering by setting the ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID variable to false by using the workloads field in the KnativeServing custom resource (CR). Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ... workloads: - env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'false' name: net-istio-controller | [
"openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=Example Inc./CN=example.com' -keyout root.key -out root.crt",
"openssl req -nodes -newkey rsa:2048 -subj \"/CN=*.apps.openshift.example.com/O=Example Inc.\" -keyout wildcard.key -out wildcard.csr",
"openssl x509 -req -days 365 -set_serial 0 -CA root.crt -CAkey root.key -in wildcard.csr -out wildcard.crt",
"oc create -n istio-system secret tls wildcard-certs --key=wildcard.key --cert=wildcard.crt",
"oc get gateway -A -o jsonpath='{range .items[*]}{@.metadata.namespace}{\"/\"}{@.metadata.name}{\" \"}{@.spec.servers}{\"\\n\"}{end}' | column -t",
"knative-serving/knative-ingress-gateway [{\"hosts\":[\"*\"],\"port\":{\"name\":\"https\",\"number\":443,\"protocol\":\"HTTPS\"},\"tls\":{\"credentialName\":\"wildcard-certs\",\"mode\":\"SIMPLE\"}}] knative-serving/knative-local-gateway [{\"hosts\":[\"*\"],\"port\":{\"name\":\"http\",\"number\":8081,\"protocol\":\"HTTP\"}}]",
"oc get svc -A | grep istio-ingressgateway",
"istio-system istio-ingressgateway ClusterIP 172.30.46.146 none> 15021/TCP,80/TCP,443/TCP 9m50s",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: profiles: - default security: dataPlane: mtls: true 1 techPreview: meshConfig: defaultConfig: terminationDrainDuration: 35s 2 gateways: ingress: service: metadata: labels: knative: ingressgateway 3 proxy: networking: trafficControl: inbound: excludedPorts: 4 - 8444 # metrics - 8022 # serving: wait-for-drain k8s pre-stop hook",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - knative-eventing - your-OpenShift-projects",
"oc apply -f servicemesh-member-roll.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 8081 name: https protocol: HTTPS 2 tls: mode: ISTIO_MUTUAL 3 hosts: - \"*\" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: \"true\" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081",
"oc apply -f istio-knative-gateways.yaml",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: autoscaler labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" config: istio: 3 gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.<your-istio-namespace>.svc.cluster.local local-gateway.knative-serving.knative-local-gateway: knative-local-gateway.<your-istio-namespace>.svc.cluster.local",
"oc apply -f knative-serving-config.yaml",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: features: istio: enabled 1 workloads: 2 - name: pingsource-mt-adapter labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: imc-dispatcher labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: mt-broker-ingress labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: mt-broker-filter labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\"",
"oc apply -f knative-eventing-config.yaml",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true bootstrapServers: <bootstrap_servers> 1 source: enabled: true broker: enabled: true defaultConfig: bootstrapServers: <bootstrap_servers> 2 numPartitions: <num_partitions> replicationFactor: <replication_factor> sink: enabled: true workloads: 3 - name: kafka-controller labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-broker-receiver labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-broker-dispatcher labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-channel-receiver labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-channel-dispatcher labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-source-dispatcher labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-sink-receiver labels: \"sidecar.istio.io/inject\": \"true\" annotations: \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\"",
"oc apply -f knative-kafka-config.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: kafka-cluster namespace: knative-eventing spec: hosts: 1 - <bootstrap_servers_without_port> exportTo: - \".\" ports: 2 - number: 9092 name: tcp-plain protocol: TCP - number: 9093 name: tcp-tls protocol: TCP - number: 9094 name: tcp-sasl-tls protocol: TCP - number: 9095 name: tcp-sasl-tls protocol: TCP - number: 9096 name: tcp-tls protocol: TCP location: MESH_EXTERNAL resolution: NONE",
"oc apply -f kafka-cluster-serviceentry.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: \"true\" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 3 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: <image_url>",
"oc apply -f knative-service.yaml",
"curl --cacert root.crt <service_url>",
"curl --cacert root.crt https://hello-default.apps.openshift.example.com",
"Hello Openshift!",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: observability: metrics.backend-destination: \"prometheus\"",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: observability: metrics.backend-destination: \"prometheus\"",
"spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444",
"oc edit KnativeEventing -n knative-eventing",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing annotations: serverless.openshift.io/disable-istio-net-policies-generation: \"true\"",
"oc edit KnativeServing -n knative-serving",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/disable-istio-net-policies-generation: \"true\"",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: workloads: - env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'false' name: net-istio-controller"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/integrations/serverless-ossm-setup |
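Note To make the secret filtering requirement concrete: with filtering enabled, an existing secret such as the wildcard-certs secret created earlier in this chapter can be labeled with a command along the following lines, where <id> is a placeholder for the certificate UID value that applies in your cluster.

oc label secret wildcard-certs -n istio-system networking.internal.knative.dev/certificate-uid="<id>"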
27.2. libStorageMgmt Terminology | 27.2. libStorageMgmt Terminology Different array vendors and storage standards use different terminology to refer to similar functionality. This library uses the following terminology. Storage array Any storage system that provides block access (FC, FCoE, iSCSI) or file access through Network Attached Storage (NAS). Volume Storage Area Network (SAN) Storage Arrays can expose a volume to the Host Bus Adapter (HBA) over different transports, such as FC, iSCSI, or FCoE. The host OS treats it as a block device. One volume can be exposed to the host as many disks when multipathing is enabled. This is also known as the Logical Unit Number (LUN), StorageVolume with SNIA terminology, or virtual disk. Pool A group of storage spaces. File systems or volumes can be created from a pool. Pools can be created from disks, volumes, and other pools. A pool may also hold RAID settings or thin provisioning settings. This is also known as a StoragePool with SNIA Terminology. Snapshot A point in time, read only, space efficient copy of data. This is also known as a read only snapshot. Clone A point in time, read writeable, space efficient copy of data. This is also known as a read writeable snapshot. Copy A full bitwise copy of the data. It occupies the full space. Mirror A continuously updated copy (synchronous and asynchronous). Access group Collections of iSCSI, FC, and FCoE initiators which are granted access to one or more storage volumes. This ensures that storage volumes are accessible only by the specified initiators. This is also known as an initiator group. Access Grant Exposing a volume to a specified access group or initiator. The libStorageMgmt library currently does not support LUN mapping with the ability to choose a specific logical unit number. The libStorageMgmt library allows the storage array to select the available LUN for assignment. If configuring boot from SAN or masking more than 256 volumes, be sure to read the OS, Storage Array, or HBA documents. Access grant is also known as LUN Masking. System Represents a storage array or a direct-attached storage RAID. File system A Network Attached Storage (NAS) storage array can expose a file system to a host OS through an IP network, using either the NFS or CIFS protocol. The host OS treats it as a mount point or a folder containing files, depending on the client operating system. Disk The physical disk holding the data. This is normally used when creating a pool with RAID settings. This is also known as a DiskDrive using SNIA Terminology. Initiator In Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE), the initiator is the World Wide Port Name (WWPN) or World Wide Node Name (WWNN). In iSCSI, the initiator is the iSCSI Qualified Name (IQN). In NFS or CIFS, the initiator is the host name or the IP address of the host. Child dependency Some arrays have an implicit relationship between the origin (parent volume or file system) and the child (such as a snapshot or a clone). For example, it is impossible to delete the parent if it has one or more dependent children. The API provides methods to determine if any such relationship exists and a method to remove the dependency by replicating the required blocks. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-libstoragemgmt-terminology
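Note To see how these terms map onto concrete objects, the lsmcli client shipped with libStorageMgmt can list them against the built-in simulator plug-in. The flags below are a sketch based on common lsmcli usage; confirm them with lsmcli --help on your system.

# List systems, pools, and volumes through the simulator plug-in URI.
lsmcli -u sim:// list --type SYSTEMS
lsmcli -u sim:// list --type POOLS
lsmcli -u sim:// list --type VOLUMES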
Chapter 6. Scheduling NUMA-aware workloads | Chapter 6. Scheduling NUMA-aware workloads Learn about NUMA-aware scheduling and how you can use it to deploy high-performance workloads in an OpenShift Container Platform cluster. Important NUMA-aware scheduling is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The NUMA Resources Operator allows you to schedule high-performance workloads in the same NUMA zone. It deploys a node resources exporting agent that reports on available cluster node NUMA resources, and a secondary scheduler that manages the workloads. 6.1. About NUMA-aware scheduling Non-Uniform Memory Access (NUMA) is a compute platform architecture that allows different CPUs to access different regions of memory at different speeds. NUMA resource topology refers to the locations of CPUs, memory, and PCI devices relative to each other in the compute node. Co-located resources are said to be in the same NUMA zone . For high-performance applications, the cluster needs to process pod workloads in a single NUMA zone. NUMA architecture allows a CPU with multiple memory controllers to use any available memory across CPU complexes, regardless of where the memory is located. This allows for increased flexibility at the expense of performance. A CPU processing a workload using memory that is outside its NUMA zone is slower than a workload processed in a single NUMA zone. Also, for I/O-constrained workloads, the network interface on a distant NUMA zone slows down how quickly information can reach the application. High-performance workloads, such as telecommunications workloads, cannot operate to specification under these conditions. NUMA-aware scheduling aligns the requested cluster compute resources (CPUs, memory, devices) in the same NUMA zone to process latency-sensitive or high-performance workloads efficiently. NUMA-aware scheduling also improves pod density per compute node for greater resource efficiency. The scheduling logic of the default OpenShift Container Platform pod scheduler considers the available resources of the entire compute node, not individual NUMA zones. If the most restrictive resource alignment is requested in the kubelet topology manager, error conditions can occur when admitting the pod to a node. Conversely, if the most restrictive resource alignment is not requested, the pod can be admitted to the node without proper resource alignment, leading to worse or unpredictable performance. For example, runaway pod creation with Topology Affinity Error statuses can occur when the pod scheduler makes suboptimal scheduling decisions for guaranteed pod workloads because it does not know whether the pod's requested resources are available. Mismatched scheduling decisions can cause indefinite pod startup delays. Also, depending on the cluster state and resource allocation, poor pod scheduling decisions can cause extra load on the cluster because of failed startup attempts.
The NUMA Resources Operator deploys a custom NUMA resources secondary scheduler and other resources to mitigate against the shortcomings of the default OpenShift Container Platform pod scheduler. The following diagram provides a high-level overview of NUMA-aware pod scheduling. Figure 6.1. NUMA-aware scheduling overview NodeResourceTopology API The NodeResourceTopology API describes the available NUMA zone resources in each compute node. NUMA-aware scheduler The NUMA-aware secondary scheduler receives information about the available NUMA zones from the NodeResourceTopology API and schedules high-performance workloads on a node where they can be optimally processed. Node topology exporter The node topology exporter exposes the available NUMA zone resources for each compute node to the NodeResourceTopology API. The node topology exporter daemon tracks the resource allocation from the kubelet by using the PodResources API. PodResources API The PodResources API is local to each node and exposes the resource topology and available resources to the kubelet. Additional resources For more information about running secondary pod schedulers in your cluster and how to deploy pods with a secondary pod scheduler, see Scheduling pods using a secondary scheduler . 6.2. Installing the NUMA Resources Operator The NUMA Resources Operator deploys resources that allow you to schedule NUMA-aware workloads and deployments. You can install the NUMA Resources Operator using the OpenShift Container Platform CLI or the web console. 6.2.1. Installing the NUMA Resources Operator using the CLI As a cluster administrator, you can install the Operator using the CLI. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NUMA Resources Operator: Save the following YAML in the nro-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources Create the Namespace CR by running the following command: USD oc create -f nro-namespace.yaml Create the Operator group for the NUMA Resources Operator: Save the following YAML in the nro-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources Create the OperatorGroup CR by running the following command: USD oc create -f nro-operatorgroup.yaml Create the subscription for the NUMA Resources Operator: Save the following YAML in the nro-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: "4.10" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f nro-sub.yaml Verification Verify that the installation succeeded by inspecting the CSV resource in the openshift-numaresources namespace. Run the following command: USD oc get csv -n openshift-numaresources Example output NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.10.0 NUMA Resources Operator 4.10.0 Succeeded 6.2.2. Installing the NUMA Resources Operator using the web console As a cluster administrator, you can install the NUMA Resources Operator using the web console.
Procedure Install the NUMA Resources Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose NUMA Resources Operator from the list of available Operators, and then click Install . Optional: Verify that the NUMA Resources Operator installed successfully: Switch to the Operators Installed Operators page. Ensure that NUMA Resources Operator is listed in the default project with a Status of InstallSucceeded . Note During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the default project. 6.3. Creating the NUMAResourcesOperator custom resource After you have installed the NUMA Resources Operator, create the NUMAResourcesOperator custom resource (CR), which instructs the NUMA Resources Operator to install all the cluster infrastructure needed to support the NUMA-aware scheduler, including daemon sets and APIs. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator. Procedure Create the MachineConfigPool custom resource that enables custom kubelet configurations for worker nodes: Save the following YAML in the nro-machineconfig.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: labels: cnf-worker-tuning: enabled machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" name: worker spec: machineConfigSelector: matchLabels: machineconfiguration.openshift.io/role: worker nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Create the MachineConfigPool CR by running the following command: USD oc create -f nro-machineconfig.yaml Create the NUMAResourcesOperator custom resource: Save the following YAML in the nrop.yaml file: apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 1 Must match the label applied to worker nodes in the related MachineConfigPool CR. Create the NUMAResourcesOperator CR by running the following command: USD oc create -f nrop.yaml Verification Verify that the NUMA Resources Operator deployed successfully by running the following command: USD oc get numaresourcesoperators.nodetopology.openshift.io Example output NAME AGE numaresourcesoperator 10m 6.4. Deploying the NUMA-aware secondary pod scheduler After you install the NUMA Resources Operator, do the following to deploy the NUMA-aware secondary pod scheduler: Configure the pod admittance policy for the required machine profile Create the required machine config pool Deploy the NUMA-aware secondary scheduler Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator.
Procedure Create the KubeletConfig custom resource that configures the pod admittance policy for the machine profile: Save the following YAML in the nro-kubeletconfig.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cnf-worker-tuning spec: machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled kubeletConfig: cpuManagerPolicy: "static" 1 cpuManagerReconcilePeriod: "5s" reservedSystemCPUs: "0,1" memoryManagerPolicy: "Static" 2 evictionHard: memory.available: "100Mi" kubeReserved: memory: "512Mi" reservedMemory: - numaNode: 0 limits: memory: "1124Mi" systemReserved: memory: "512Mi" topologyManagerPolicy: "single-numa-node" 3 topologyManagerScope: "pod" 1 For cpuManagerPolicy , static must use a lowercase s . 2 For memoryManagerPolicy , Static must use an uppercase S . 3 topologyManagerPolicy must be set to single-numa-node . Create the KubeletConfig custom resource (CR) by running the following command: USD oc create -f nro-kubeletconfig.yaml Create the NUMAResourcesScheduler custom resource that deploys the NUMA-aware custom pod scheduler: Save the following YAML in the nro-scheduler.yaml file: apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.10" Create the NUMAResourcesScheduler CR by running the following command: USD oc create -f nro-scheduler.yaml Verification Verify that the required resources deployed successfully by running the following command: USD oc get all -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7575848485-bns4s 1/1 Running 0 13m pod/numaresourcesoperator-worker-dvj4n 2/2 Running 0 16m pod/numaresourcesoperator-worker-lcg4t 2/2 Running 0 16m pod/secondary-scheduler-56994cf6cf-7qf4q 1/1 Running 0 16m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 2 2 2 2 2 node-role.kubernetes.io/worker= 16m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 13m deployment.apps/secondary-scheduler 1/1 1 1 16m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7575848485 1 1 1 13m replicaset.apps/secondary-scheduler-56994cf6cf 1 1 1 16m 6.5. Scheduling workloads with the NUMA-aware scheduler You can schedule workloads with the NUMA-aware scheduler using Deployment CRs that specify the minimum required resources to process the workload. The following example deployment uses NUMA-aware scheduling for a sample workload. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. 
Procedure Get the name of the NUMA-aware scheduler that is deployed in the cluster by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output topo-aware-scheduler Create a Deployment CR that uses the scheduler named topo-aware-scheduler , for example: Save the following YAML in the nro-deployment.yaml file: apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: "100Mi" cpu: "10" requests: memory: "100Mi" cpu: "10" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: ["/bin/sh", "-c"] args: [ "while true; do sleep 1h; done;" ] resources: limits: memory: "100Mi" cpu: "8" requests: memory: "100Mi" cpu: "8" 1 schedulerName must match the name of the NUMA-aware scheduler that is deployed in your cluster, for example topo-aware-scheduler . Create the Deployment CR by running the following command: USD oc create -f nro-deployment.yaml Verification Verify that the deployment was successful: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numa-deployment-1-56954b7b46-pfgw8 2/2 Running 0 129m numaresources-controller-manager-7575848485-bns4s 1/1 Running 0 15h numaresourcesoperator-worker-dvj4n 2/2 Running 0 18h numaresourcesoperator-worker-lcg4t 2/2 Running 0 16h secondary-scheduler-56994cf6cf-7qf4q 1/1 Running 0 18h Verify that the topo-aware-scheduler is scheduling the deployed pod by running the following command: USD oc describe pod numa-deployment-1-56954b7b46-pfgw8 -n openshift-numaresources Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 130m topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-56954b7b46-pfgw8 to compute-0.example.com Note Deployments that request more resources than are available for scheduling will fail with a MinimumReplicasUnavailable error. The deployment succeeds when the required resources become available. Pods remain in the Pending state until the required resources are available. Verify that the expected allocated resources are listed for the node. Run the following command: USD oc describe noderesourcetopologies.topology.node.k8s.io Example output ... Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node 1 The Available capacity is reduced because of the resources that have been allocated to the guaranteed pod. Resources consumed by guaranteed pods are subtracted from the available node resources listed under noderesourcetopologies.topology.node.k8s.io . Resource allocations for pods with a Best-effort or Burstable quality of service ( qosClass ) are not reflected in the NUMA node resources under noderesourcetopologies.topology.node.k8s.io .
If a pod's consumed resources are not reflected in the node resource calculation, verify that the pod has qosClass of Guaranteed by running the following command: $ oc get pod <pod_name> -n <pod_namespace> -o jsonpath="{ .status.qosClass }" Example output Guaranteed 6.6. Troubleshooting NUMA-aware scheduling To troubleshoot common problems with NUMA-aware pod scheduling, perform the following steps. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. Procedure Verify that the noderesourcetopologies CRD is deployed in the cluster by running the following command: $ oc get crd | grep noderesourcetopologies Example output NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z Check that the NUMA-aware scheduler name matches the name specified in your NUMA-aware workloads by running the following command: $ oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output topo-aware-scheduler Verify that NUMA-aware schedulable nodes have the noderesourcetopologies CR applied to them. Run the following command: $ oc get noderesourcetopologies.topology.node.k8s.io Example output NAME AGE compute-0.example.com 17h compute-1.example.com 17h Note The number of nodes should equal the number of worker nodes that are configured by the machine config pool ( mcp ) worker definition. Verify the NUMA zone granularity for all schedulable nodes by running the following command: $ oc get noderesourcetopologies.topology.node.k8s.io -o yaml Example output apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:38Z" generation: 63760 name: worker-0 resourceVersion: "8450223" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262352048128" available: "262352048128" capacity: "270107316224" name: memory - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269231067136" available: "269231067136" capacity: "270573244416" name: memory - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:37Z" generation: 62061 name: worker-1 resourceVersion: "8450129" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262391033856" available: "262391033856" capacity: "270146301952" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269192085504" available: "269192085504" capacity: "270534262784" name: memory type: Node kind: List metadata: resourceVersion: "" selfLink: "" 1 Each stanza under zones describes the resources for a single NUMA zone. 2 resources describes the current state of the NUMA zone resources. Check that resources listed under items.zones.resources.available correspond to the exclusive NUMA zone resources allocated to each guaranteed pod. 6.6.1. Checking the NUMA-aware scheduler logs Troubleshoot problems with the NUMA-aware scheduler by reviewing the logs. If required, you can increase the scheduler log level by modifying the spec.logLevel field of the NUMAResourcesScheduler resource. Acceptable values are Normal , Debug , and Trace , with Trace being the most verbose option. Note To change the log level of the secondary scheduler, delete the running scheduler resource and re-deploy it with the changed log level. The scheduler is unavailable for scheduling new workloads during this downtime. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Delete the currently running NUMAResourcesScheduler resource: Get the active NUMAResourcesScheduler by running the following command: $ oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 90m Delete the secondary scheduler resource by running the following command: $ oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-debug.yaml .
This example changes the log level to Debug : apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.10" logLevel: Debug Create the updated Debug logging NUMAResourcesScheduler resource by running the following command: $ oc create -f nro-scheduler-debug.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification steps Check that the NUMA-aware scheduler was successfully deployed: Run the following command to check that the CRD is created successfully: $ oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: $ oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Check that the logs for the scheduler show the increased log level: Get the list of pods running in the openshift-numaresources namespace by running the following command: $ oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m Get the logs for the secondary scheduler pod by running the following command: $ oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources Example output ... I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] "Add event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" I0223 11:05:53.461016 1 eventhandlers.go:244] "Delete event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" 6.6.2. Troubleshooting the resource topology exporter Troubleshoot noderesourcetopologies objects where unexpected results are occurring by inspecting the corresponding resource-topology-exporter logs. Note It is recommended that NUMA resource topology exporter instances in the cluster are named for the nodes they refer to. For example, a worker node with the name worker should have a corresponding noderesourcetopologies object called worker . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Get the daemonsets managed by the NUMA Resources Operator. Each daemonset has a corresponding nodeGroup in the NUMAResourcesOperator CR.
Run the following command: $ oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath="{.status.daemonsets[0]}" Example output {"name":"numaresourcesoperator-worker","namespace":"openshift-numaresources"} Get the label for the daemonset of interest using the value for name from the previous step: $ oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath="{.spec.selector.matchLabels}" Example output {"name":"resource-topology"} Get the pods using the resource-topology label by running the following command: $ oc get pods -n openshift-numaresources -l name=resource-topology -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com Examine the logs of the resource-topology-exporter container running on the worker pod that corresponds to the node you are troubleshooting. Run the following command: $ oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c Example output I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: "0": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved "0-1" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online "0-103" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable "2-103" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi 6.6.3. Correcting a missing resource topology exporter config map If you install the NUMA Resources Operator in a cluster with misconfigured cluster settings, in some circumstances, the Operator is shown as active but the logs of the resource topology exporter (RTE) daemon set pods show that the configuration for the RTE is missing, for example: Info: couldn't find configuration in "/etc/resource-topology-exporter/config.yaml" This log message indicates that the kubeletconfig with the required configuration was not properly applied in the cluster, resulting in a missing RTE configmap . For example, the following cluster is missing a numaresourcesoperator-worker configmap custom resource (CR): $ oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h In a correctly configured cluster, oc get configmap also returns a numaresourcesoperator-worker configmap CR. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. Procedure Compare the values for spec.machineConfigPoolSelector.matchLabels in kubeletconfig and metadata.labels in the MachineConfigPool ( mcp ) worker CR using the following commands: Check the kubeletconfig labels by running the following command: $ oc get kubeletconfig -o yaml Example output machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled Check the mcp labels by running the following command: $ oc get mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" The cnf-worker-tuning: enabled label is not present in the MachineConfigPool object.
Edit the MachineConfigPool CR to include the missing label, for example: $ oc edit mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" cnf-worker-tuning: enabled Apply the label changes and wait for the cluster to apply the updated configuration. Verification Check that the missing numaresourcesoperator-worker configmap CR is applied: $ oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources",
"oc create -f nro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources",
"oc create -f nro-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"{product-version}\" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nro-sub.yaml",
"oc get csv -n openshift-numaresources",
"NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.10.0 NUMA Resources Operator 4.10.0 Succeeded",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: labels: cnf-worker-tuning: enabled machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" name: worker spec: machineConfigSelector: matchLabels: machineconfiguration.openshift.io/role: worker nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"oc create -f nro-machineconfig.yaml",
"apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1",
"oc create -f nrop.yaml",
"oc get numaresourcesoperators.nodetopology.openshift.io",
"NAME AGE numaresourcesoperator 10m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cnf-worker-tuning spec: machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled kubeletConfig: cpuManagerPolicy: \"static\" 1 cpuManagerReconcilePeriod: \"5s\" reservedSystemCPUs: \"0,1\" memoryManagerPolicy: \"Static\" 2 evictionHard: memory.available: \"100Mi\" kubeReserved: memory: \"512Mi\" reservedMemory: - numaNode: 0 limits: memory: \"1124Mi\" systemReserved: memory: \"512Mi\" topologyManagerPolicy: \"single-numa-node\" 3 topologyManagerScope: \"pod\"",
"oc create -f nro-kubeletconfig.yaml",
"apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.10\"",
"oc create -f nro-scheduler.yaml",
"oc get all -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7575848485-bns4s 1/1 Running 0 13m pod/numaresourcesoperator-worker-dvj4n 2/2 Running 0 16m pod/numaresourcesoperator-worker-lcg4t 2/2 Running 0 16m pod/secondary-scheduler-56994cf6cf-7qf4q 1/1 Running 0 16m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 2 2 2 2 2 node-role.kubernetes.io/worker= 16m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 13m deployment.apps/secondary-scheduler 1/1 1 1 16m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7575848485 1 1 1 13m replicaset.apps/secondary-scheduler-56994cf6cf 1 1 1 16m",
"oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'",
"topo-aware-scheduler",
"apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: \"100Mi\" cpu: \"10\" requests: memory: \"100Mi\" cpu: \"10\" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\"] args: [ \"while true; do sleep 1h; done;\" ] resources: limits: memory: \"100Mi\" cpu: \"8\" requests: memory: \"100Mi\" cpu: \"8\"",
"oc create -f nro-deployment.yaml",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numa-deployment-1-56954b7b46-pfgw8 2/2 Running 0 129m numaresources-controller-manager-7575848485-bns4s 1/1 Running 0 15h numaresourcesoperator-worker-dvj4n 2/2 Running 0 18h numaresourcesoperator-worker-lcg4t 2/2 Running 0 16h secondary-scheduler-56994cf6cf-7qf4q 1/1 Running 0 18h",
"oc describe pod numa-deployment-1-56954b7b46-pfgw8 -n openshift-numaresources",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 130m topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-56954b7b46-pfgw8 to compute-0.example.com",
"oc describe noderesourcetopologies.topology.node.k8s.io",
"Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node",
"oc get pod <pod_name> -n <pod_namespace> -o jsonpath=\"{ .status.qosClass }\"",
"Guaranteed",
"oc get crd | grep noderesourcetopologies",
"NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'",
"topo-aware-scheduler",
"oc get noderesourcetopologies.topology.node.k8s.io",
"NAME AGE compute-0.example.com 17h compute-1.example.com 17h",
"oc get noderesourcetopologies.topology.node.k8s.io -o yaml",
"apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:38Z\" generation: 63760 name: worker-0 resourceVersion: \"8450223\" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262352048128\" available: \"262352048128\" capacity: \"270107316224\" name: memory - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269231067136\" available: \"269231067136\" capacity: \"270573244416\" name: memory - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:37Z\" generation: 62061 name: worker-1 resourceVersion: \"8450129\" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262391033856\" available: \"262391033856\" capacity: \"270146301952\" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269192085504\" available: \"269192085504\" capacity: \"270534262784\" name: memory type: Node kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc get NUMAResourcesScheduler",
"NAME AGE numaresourcesscheduler 90m",
"oc delete NUMAResourcesScheduler numaresourcesscheduler",
"numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted",
"apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.10\" logLevel: Debug",
"oc create -f nro-scheduler-debug.yaml",
"numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created",
"oc get crd | grep numaresourcesschedulers",
"NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io",
"NAME AGE numaresourcesscheduler 3h26m",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m",
"oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources",
"I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"",
"oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath=\"{.status.daemonsets[0]}\"",
"{\"name\":\"numaresourcesoperator-worker\",\"namespace\":\"openshift-numaresources\"}",
"oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath=\"{.spec.selector.matchLabels}\"",
"{\"name\":\"resource-topology\"}",
"oc get pods -n openshift-numaresources -l name=resource-topology -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com",
"oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c",
"I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: \"0\": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved \"0-1\" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online \"0-103\" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable \"2-103\" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi",
"Info: couldn't find configuration in \"/etc/resource-topology-exporter/config.yaml\"",
"oc get configmap",
"NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h",
"oc get kubeletconfig -o yaml",
"machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled",
"oc get mcp worker -o yaml",
"labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"oc edit mcp worker -o yaml",
"labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" cnf-worker-tuning: enabled",
"oc get configmap",
"NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/scalability_and_performance/cnf-numa-aware-scheduling |
4.12. Bind Mounts and Context-Dependent Path Names | 4.12. Bind Mounts and Context-Dependent Path Names GFS2 file systems do not provide support for Context-Dependent Path Names (CDPNs), which allow you to create symbolic links that point to variable destination files or directories. For this functionality in GFS2, you can use the bind option of the mount command. The bind option of the mount command allows you to remount part of a file hierarchy at a different location while it is still available at the original location. The format of this command is as follows. After executing this command, the contents of the olddir directory are available at two locations: olddir and newdir . You can also use this option to make an individual file available at two locations. For example, after executing the following commands the contents of /root/tmp will be identical to the contents of the previously mounted /var/log directory. Alternately, you can use an entry in the /etc/fstab file to achieve the same results at mount time. The following /etc/fstab entry will result in the contents of /root/tmp being identical to the contents of the /var/log directory. After you have mounted the file system, you can use the mount command to see that the file system has been mounted, as in the following example. With a file system that supports Context-Dependent Path Names, you might have defined the /bin directory as a Context-Dependent Path Name that would resolve to one of the following paths, depending on the system architecture. You can achieve this same functionality by creating an empty /bin directory. Then, using a script or an entry in the /etc/fstab file, you can mount each of the individual architecture directories onto the /bin directory with a mount -bind command. For example, you can use the following command as a line in a script. Alternately, you can use the following entry in the /etc/fstab file. A bind mount can provide greater flexibility than a Context-Dependent Path Name, since you can use this feature to mount different directories according to any criteria you define (such as the value of %fill for the file system). Context-Dependent Path Names are more limited in what they can encompass. Note, however, that you will need to write your own script to mount according to a criteria such as the value of %fill . Warning When you mount a file system with the bind option and the original file system was mounted rw , the new file system will also be mounted rw even if you use the ro flag; the ro flag is silently ignored. In this case, the new file system might be marked as ro in the /proc/mounts directory, which may be misleading. | [
"mount --bind olddir newdir",
"cd ~root mkdir ./tmp mount --bind /var/log /root/tmp",
"/var/log /root/tmp none bind 0 0",
"mount | grep /tmp /var/log on /root/tmp type none (rw,bind)",
"/usr/i386-bin /usr/x86_64-bin /usr/ppc64-bin",
"mount --bind /usr/i386-bin /bin",
"/usr/1386-bin /bin none bind 0 0"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-manage-pathnames |
Chapter 8. Installing a cluster on vSphere in a restricted network with user-provisioned infrastructure | Chapter 8. Installing a cluster on vSphere in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.12, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 8.2. About installations in restricted networks In OpenShift Container Platform 4.12, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 
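For illustration only, mirroring the release image content to an internal registry typically uses the oc adm release mirror command. In this sketch the registry host name, repository path, and release tag are hypothetical; see the full mirroring documentation for the authoritative procedure: $ oc adm release mirror -a pull-secret.json --from=quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64 --to=mirror.example.com:5000/ocp4/openshift4 --to-release-image=mirror.example.com:5000/ocp4/openshift4:4.12.0-x86_64 The command prints the imageContentSources snippet that you later add to the install-config.yaml file.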
Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 8.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 8.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.4. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 8.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important Installing a cluster on VMware vSphere versions 7.0 and 7.0 Update 1 is deprecated. These versions are still fully supported, but all vSphere 6.x versions are no longer supported. Version 4.12 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. To update the hardware version for your vSphere virtual machines, see the "Updating hardware on nodes running in vSphere" article in the Updating clusters section. Table 8.2. 
Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 or later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 8.5. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 8.6. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 8.6.1. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that you provided, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, your vSphere account must include privileges for reading and creating the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. Example 8.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 8.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 8.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. 
For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses infrastructure that you provided, you must create the following resources in your vCenter instance: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you specify nodes or groups of nodes on different VLANs for a cluster that you want to install on user-provisioned infrastructure, you must ensure that machines in your cluster meet the requirements outlined in the "Network connectivity requirements" section of the Networking requirements for user-provisioned infrastructure document. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 8.3. 
Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Additional resources Creating a compute machine set on vSphere 8.6.2. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 8.4. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 8.6.3. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.5. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) [1] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [2] 2 8 GB 100 GB 300 OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 8.6.4. 
Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 8.6.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 8.6.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 8.6.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. 
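As an illustrative spot check (the hostname and IP address here are hypothetical, and this is not part of the formal requirements), you can confirm forward and reverse resolution from any machine with dig: $ dig +short compute-0.ocp4.example.com $ dig +short -x 192.168.1.10 Both lookups must return answers for every machine in the cluster; see the section on Validating DNS resolution for user-provisioned infrastructure for the complete checks.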
This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 8.6. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.7. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 8.8. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:FF:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 8.6.6. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information.
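For example, with the ISC DHCP server you can pin a persistent IP address, DNS server, and hostname to each machine by matching on its MAC address. The following dhcpd.conf excerpt is a hedged sketch, not a complete configuration; the subnet, MAC address, IP address, and hostname values are illustrative and must match your environment:

subnet 192.168.1.0 netmask 255.255.255.0 {
  option domain-name-servers 192.168.1.5;
  option ntp-servers 192.168.1.5;

  # One host block per bootstrap, control plane, and compute machine.
  host control-plane0 {
    hardware ethernet 00:50:56:00:00:97;  # MAC from a VMware OUI range
    fixed-address 192.168.1.97;
    option host-name "control-plane0.ocp4.example.com";
  }
}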
The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 8.9. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 8.6.6.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 8.4. Sample DNS zone database

$TTL 1W
@ IN SOA ns1.example.com. root (
   2019070700 ; serial
   3H         ; refresh (3 hours)
   30M        ; retry (30 minutes)
   2W         ; expiry (2 weeks)
   1W )       ; minimum (1 week)
 IN NS ns1.example.com.
 IN MX 10 smtp.example.com.
;
;
ns1.example.com.                  IN A 192.168.1.5
smtp.example.com.                 IN A 192.168.1.5
;
helper.example.com.               IN A 192.168.1.5
helper.ocp4.example.com.          IN A 192.168.1.5
;
api.ocp4.example.com.             IN A 192.168.1.5 1
api-int.ocp4.example.com.         IN A 192.168.1.5 2
;
*.apps.ocp4.example.com.          IN A 192.168.1.5 3
;
bootstrap.ocp4.example.com.       IN A 192.168.1.96 4
;
control-plane0.ocp4.example.com.  IN A 192.168.1.97 5
control-plane1.ocp4.example.com.  IN A 192.168.1.98 6
control-plane2.ocp4.example.com.  IN A 192.168.1.99 7
;
compute0.ocp4.example.com.        IN A 192.168.1.11 8
compute1.ocp4.example.com.        IN A 192.168.1.7 9
;
;EOF

1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 8.5. Sample DNS zone database for reverse records

$TTL 1W
@ IN SOA ns1.example.com. root (
   2019070700 ; serial
   3H         ; refresh (3 hours)
   30M        ; retry (30 minutes)
   2W         ; expiry (2 weeks)
   1W )       ; minimum (1 week)
 IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa.   IN PTR api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa.   IN PTR api-int.ocp4.example.com. 2
;
96.1.168.192.in-addr.arpa.  IN PTR bootstrap.ocp4.example.com. 3
;
97.1.168.192.in-addr.arpa.  IN PTR control-plane0.ocp4.example.com. 4
98.1.168.192.in-addr.arpa.  IN PTR control-plane1.ocp4.example.com. 5
99.1.168.192.in-addr.arpa.  IN PTR control-plane2.ocp4.example.com. 6
;
11.1.168.192.in-addr.arpa.  IN PTR compute0.ocp4.example.com. 7
7.1.168.192.in-addr.arpa.   IN PTR compute1.ocp4.example.com. 8
;
;EOF

1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 8.6.7. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 8.10. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 8.11. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 8.6.7.1. 
Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 8.6. Sample API and application Ingress load balancer configuration

global
  log         127.0.0.1 local2
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
listen api-server-6443 1
  bind *:6443
  mode tcp
  option httpchk GET /readyz HTTP/1.0
  option log-health-checks
  balance roundrobin
  server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2
  server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623 3
  bind *:22623
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 5
  bind *:443
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:443 check inter 1s
  server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 6
  bind *:80
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:80 check inter 1s
  server worker1 worker1.ocp4.example.com:80 check inter 1s

1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.
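Once a control plane machine is serving the API, you can issue the same request that the health check in the sample configuration uses to confirm the probe target behaves as expected. A small sketch; the hostname is illustrative and -k skips certificate verification for a quick manual check only:

$ curl -k https://master0.ocp4.example.com:6443/readyz

A ready API server returns ok with an HTTP 200 status.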
Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 8.7. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines.
Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 8.8. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer:

$ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1

1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output

api.ocp4.example.com. 604800 IN A 192.168.1.5

Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer:

$ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>

Example output

api-int.ocp4.example.com. 604800 IN A 192.168.1.5

Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer:

$ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>

Example output

random.apps.ocp4.example.com. 604800 IN A 192.168.1.5

Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console:

$ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>

Example output

console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5

Run a lookup against the bootstrap DNS record name.
Check that the result points to the IP address of the bootstrap node:

$ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>

Example output

bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96

Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API:

$ dig +noall +answer @<nameserver_ip> -x 192.168.1.5

Example output

5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1
5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2

1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node:

$ dig +noall +answer @<nameserver_ip> -x 192.168.1.96

Example output

96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.

Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 8.9. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent :

$ ssh-add <path>/<file_name> 1

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 8.10. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to their respective datacenters and clusters. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Table 8.12. Example of a configuration with multiple vSphere datacenters that run in a single VMware vCenter Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b 8.11. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in:

$ mkdir <installation_directory>

Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Back up the install-config.yaml file so that you can use it to install multiple clusters.
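For example, a plain copy stored outside the installation directory is enough; the paths are illustrative:

$ cp <installation_directory>/install-config.yaml ~/backup/install-config.yaml

Keeping the copy outside the installation directory matters because the installation program consumes and removes the file in the next step.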
Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 8.11.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com 1
compute: 2
- name: worker
  replicas: 0 3
controlPlane: 4
  name: master
  replicas: 3 5
metadata:
  name: test 6
platform:
  vsphere:
    vcenter: your.vcenter.server 7
    username: username 8
    password: password 9
    datacenter: datacenter 10
    defaultDatastore: datastore 11
    folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 12
    resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 13
    diskType: thin 14
fips: false 15
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 16
sshKey: 'ssh-ed25519 AAAA...' 17
additionalTrustBundle: | 18
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources: 19
- mirrors:
  - <mirror_host_name>:<mirror_port>/<repo_name>/release
  source: <source_image_1>
- mirrors:
  - <mirror_host_name>:<mirror_port>/<repo_name>/release-images
  source: <source_image_2>

1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, ( - ), and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 The fully-qualified hostname or IP address of the vCenter server. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 8 The name of the user for accessing the server. 9 The password associated with the vSphere user. 10 The vSphere datacenter. 11 The default vSphere datastore to use. 12 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> .
If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. 13 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /<datacenter_name>/host/<cluster_name>/Resources . If you are providing the infrastructure for the cluster, omit this parameter. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 16 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 18 Provide the contents of the certificate file that you used for your mirror registry. 19 Provide the imageContentSources section from the output of the command to mirror the repository. 8.11.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.11.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file to deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important The example uses the govc command. The govc command is an open source command available from VMware. The govc command is not available from Red Hat. Red Hat Support does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Note You cannot change a failure domain after you install an OpenShift Container Platform cluster on the VMware vSphere platform. You can add additional failure domains after cluster installation. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails.

$ govc tags.category.create -d "OpenShift region" openshift-region
$ govc tags.category.create -d "OpenShift zone" openshift-zone

To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal:

$ govc tags.create -c <region_tag_category> <region_tag>

To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command:

$ govc tags.create -c <zone_tag_category> <zone_tag>

Attach region tags to each vCenter datacenter object by entering the following command:

$ govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>

Attach the zone tags to each vCenter cluster object by entering the following command:

$ govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1

Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
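Optionally, before you initialize the deployment, you can confirm that the categories and tags exist and are attached where you expect. A sketch using govc listing subcommands; the tag name is illustrative and the exact flags can vary between govc versions:

$ govc tags.ls -c openshift-region
$ govc tags.ls -c openshift-zone
$ govc tags.attached.ls <region_tag_1>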
Sample install-config.yaml file with multiple datacenters defined in a VMware vCenter

apiVersion: v1
baseDomain: example.com
featureSet: TechPreviewNoUpgrade 1
compute:
  name: worker
  replicas: 3
  vsphere:
    zones: 2
    - "<machine_pool_zone_1>"
    - "<machine_pool_zone_2>"
controlPlane:
  name: master
  replicas: 3
  vsphere:
    zones: 3
    - "<machine_pool_zone_1>"
    - "<machine_pool_zone_2>"
metadata:
  name: cluster
platform:
  vsphere:
    vcenter: <vcenter_server> 4
    username: <username> 5
    password: <password> 6
    datacenter: datacenter 7
    defaultDatastore: datastore 8
    folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 9
    cluster: cluster 10
    resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 11
    diskType: thin
    failureDomains: 12
    - name: <machine_pool_zone_1> 13
      region: <region_tag_1> 14
      zone: <zone_tag_1> 15
      topology: 16
        datacenter: <datacenter1> 17
        computeCluster: "/<datacenter1>/host/<cluster1>" 18
        resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" 19
        networks: 20
        - <VM_Network1_name>
        datastore: "/<datacenter1>/datastore/<datastore1>" 21
    - name: <machine_pool_zone_2>
      region: <region_tag_2>
      zone: <zone_tag_2>
      topology:
        datacenter: <datacenter2>
        computeCluster: "/<datacenter2>/host/<cluster2>"
        networks:
        - <VM_Network2_name>
        datastore: "/<datacenter2>/datastore/<datastore2>"
        resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>"
        folder: "/<datacenter2>/vm/<folder2>"
# ...

1 You must set TechPreviewNoUpgrade as the value for this parameter, so that you can use the VMware vSphere region and zone enablement feature. 2 3 An optional parameter for specifying a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. If you do not define this parameter, nodes will be distributed among all defined failure-domains. 4 5 6 7 8 9 10 11 The default vCenter topology. The installation program uses this topology information to deploy the bootstrap node. Additionally, the topology defines the default datastore for vSphere persistent volumes. 12 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. If you do not define this parameter, the installation program uses the default vCenter topology. 13 Defines the name of the failure domain. Each failure domain is referenced in the zones parameter to scope a machine pool to the failure domain. 14 You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter. 15 You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter datacenter. 16 Specifies the vCenter resources associated with the failure domain. 17 An optional parameter for defining the vSphere datacenter that is associated with a failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 18 An optional parameter for stating the absolute file path for the compute cluster that is associated with the failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 19 An optional parameter for the installer-provisioned infrastructure.
The parameter sets the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources . 20 An optional parameter that lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter, the installation program uses the default vCenter topology. 21 An optional parameter for specifying a datastore to use for provisioning volumes. If you do not define this parameter, the installation program uses the default vCenter topology. 8.12. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:

$ ./openshift-install create manifests --dir <installation_directory> 1

1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets:

$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false .
This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

$ ./openshift-install create ignition-configs --dir <installation_directory> 1

1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 8.13. Configuring chrony time service You must set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane.

variant: openshift
version: 4.12.0
metadata:
  name: 99-worker-chrony 1
  labels:
    machineconfiguration.openshift.io/role: worker 2
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644 3
    overwrite: true
    contents:
      inline: |
        pool 0.rhel.pool.ntp.org iburst 4
        driftfile /var/lib/chrony/drift
        makestep 1.0 3
        rtcsync
        logdir /var/log/chrony

1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes:

$ butane 99-worker-chrony.bu -o 99-worker-chrony.yaml

Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file:

$ oc apply -f ./99-worker-chrony.yaml

8.14. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

$ jq -r .infraID <installation_directory>/metadata.json 1

1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output

openshift-vw9j6 1

1 The output of this command is your cluster name and a random string.
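If you plan to reuse the infrastructure name in later commands, such as when naming the virtual machine folder, you can capture it in a shell variable. A small sketch; the path is illustrative:

$ INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)
$ echo "${INFRA_ID}"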
8.15. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder > New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration.
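If you prefer a command-line workflow, you can create the same folder with the open-source govc utility instead of the vSphere Client. The following sketch is an assumption, not part of the documented procedure: it presumes that govc is installed and that the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables already point at your vCenter instance:

$ govc folder.create /<datacenter_name>/vm/<infrastructure_id>
$ govc folder.info /<datacenter_name>/vm/<infrastructure_id>

The second command only confirms that the folder now exists. As noted later in this document, govc is available from VMware and is not supported by Red Hat support.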
In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the previous step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone > Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Optional: Override default DHCP networking in vSphere.
To enable static IP networking: Set your static IP configuration: Example command USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before you boot a VM from an OVA in vSphere: Example command USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power > Power On . Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Next steps Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 8.16. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. After your vSphere template deploys in your OpenShift Container Platform cluster, you can deploy a virtual machine (VM) for a machine in that cluster. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure Right-click the template's name and click Clone > Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select storage tab, select storage for your configuration and disk files. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params .
Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. If many networks exist, select Add New Device > Network Adapter , and then enter your network information in the fields provided by the New Network menu item. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power > Power On . Next steps Continue to create more compute machines for your cluster. 8.17. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 8.18. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. 
Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 8.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 8.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. 
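Because additional CSRs can continue to arrive while machines join the cluster, you might find it easier to watch the resource than to poll it repeatedly. For example, the standard --watch flag streams new and updated CSRs until you interrupt the command with Ctrl+C:

$ oc get csr --watch

This is a monitoring convenience only; it does not replace the approval steps that follow.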
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 8.21. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. 
Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 8.21.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 8.21.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 8.21.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. 
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 8.21.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 8.21.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.
Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 8.22. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 8.23. Configuring vSphere DRS anti-affinity rules for control plane nodes vSphere Distributed Resource Scheduler (DRS) anti-affinity rules can be configured to support higher availability of OpenShift Container Platform Control Plane nodes. Anti-affinity rules ensure that the vSphere Virtual Machines for the OpenShift Container Platform Control Plane nodes are not scheduled to the same vSphere Host. Important The following information applies to compute DRS only and does not apply to storage DRS. The govc command is an open-source command available from VMware; it is not available from Red Hat. The govc command is not supported by Red Hat support. Instructions for downloading and installing govc are found on the VMware documentation website. Create an anti-affinity rule by running the following command: Example command USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster \ -enable \ -anti-affinity master-0 master-1 master-2 After creating the rule, your control plane nodes are automatically migrated by vSphere so they are not running on the same hosts. This might take some time while vSphere reconciles the new rule. Successful command completion is shown in the following procedure. Note The migration occurs automatically and might cause a brief OpenShift API outage or latency until the migration finishes. The vSphere DRS anti-affinity rules need to be updated manually in the event of a control plane VM name change or migration to a new vSphere Cluster. Procedure Remove any existing DRS anti-affinity rule by running the following command: USD govc cluster.rule.remove \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster Example output [13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK Create the rule again with updated names by running the following command: USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyOtherCluster \ -enable \ -anti-affinity master-0 master-1 master-2 8.24. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster.
As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 8.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.26. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
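To check for vSphere permission or storage problems from the command line, you can list the events that the vSphere Problem Detector Operator records. The following example assumes that the Operator runs in its default openshift-cluster-storage-operator namespace:

$ oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}

The most recent events in this namespace typically identify the specific permission or datastore check that failed.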
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.12.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_vsphere/installing-restricted-networks-vsphere |
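A quick way to sanity-check the govc anti-affinity rules created in the commands above is to list the rule back before moving on. This is a hedged sketch, not part of the documented installation procedure: it assumes a govc build that provides the cluster.rule.ls subcommand and that listing by rule name prints the member VMs, and it reuses the placeholder MyDatacenter and MyCluster names from the commands above.

# Sketch: list the anti-affinity rule and its member VMs; a healthy rule
# shows master-0, master-1, and master-2. Assumes govc cluster.rule.ls
# is available and behaves as described above.
govc cluster.rule.ls -name openshift4-control-plane-group \
  -dc MyDatacenter -cluster MyCluster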
Dynamic plugins reference | Dynamic plugins reference Red Hat Developer Hub 1.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/dynamic_plugins_reference/index |
Chapter 2. General principles for selecting hardware As a storage administrator, you must select the appropriate hardware for running a production Red Hat Ceph Storage cluster. When selecting hardware for Red Hat Ceph Storage, review the following general principles. These principles will help save time, avoid common mistakes, save money and achieve a more effective solution. 2.1. Prerequisites A planned use for Red Hat Ceph Storage. 2.2. Identify performance use case One of the most important steps in a successful Ceph deployment is identifying a price-to-performance profile suitable for the cluster's use case and workload. It is important to choose the right hardware for the use case. For example, choosing IOPS-optimized hardware for a cold storage application increases hardware costs unnecessarily. Conversely, choosing capacity-optimized hardware for its more attractive price point in an IOPS-intensive workload will likely lead to unhappy users complaining about slow performance. The primary use cases for Ceph are: IOPS optimized: IOPS-optimized deployments are suitable for cloud computing operations, such as running MySQL or MariaDB instances as virtual machines on OpenStack. IOPS-optimized deployments require higher performance storage such as 15k RPM SAS drives and a separate flash-based BlueStore metadata device to handle frequent write operations. Some high IOPS scenarios use all flash storage to improve IOPS and total throughput. Throughput optimized: Throughput-optimized deployments are suitable for serving up significant amounts of data, such as graphic, audio and video content. Throughput-optimized deployments require networking hardware, controllers and hard disk drives with acceptable total throughput characteristics. In cases where write performance is a requirement, a flash-based BlueStore metadata device will substantially improve write performance. Capacity optimized: Capacity-optimized deployments are suitable for storing significant amounts of data as inexpensively as possible. Capacity-optimized deployments typically trade performance for a more attractive price point. For example, capacity-optimized deployments often use slower and less expensive SATA drives. This document provides examples of Red Hat tested hardware suitable for these use cases. 2.3. Consider storage density Hardware planning should include distributing Ceph daemons and other processes that use Ceph across many hosts to maintain high availability in the event of hardware faults. Balance storage density considerations with the need to rebalance the cluster in the event of hardware faults. A common hardware selection mistake is to use very high storage density in small clusters, which can overload networking during backfill and recovery operations. 2.4. Identical hardware configuration Create pools and define CRUSH hierarchies such that the OSD hardware within the pool is identical. Same controller. Same drive size. Same RPMs. Same seek times. Same I/O. Same network throughput. Using the same hardware within a pool provides a consistent performance profile, simplifies provisioning and streamlines troubleshooting. Warning When using multiple storage devices, the order of the devices might sometimes change during a reboot. For troubleshooting this issue, see Change order of Storage devices during reboot 2.5.
Network considerations Carefully consider bandwidth requirements for the cluster network, be mindful of network link oversubscription, and segregate the intra-cluster traffic from the client-to-cluster traffic. Important Red Hat recommends using 10 Gb Ethernet for Ceph production deployments. 1 Gb Ethernet is not suitable for production storage clusters. In the case of a drive failure, replicating 1 TB of data across a 1 Gbps network takes 3 hours, and 3 TB takes 9 hours. 3 TB is the typical drive configuration. By contrast, with a 10 Gb network, the replication times would be 20 minutes and 1 hour respectively (a worked estimate of these figures appears at the end of this chapter). Remember that when an OSD fails, the cluster will recover by replicating the data it contained to other OSDs within the pool. The failure of a larger domain such as a rack means that the storage cluster will utilize considerably more bandwidth. Storage administrators usually prefer that a cluster recovers as quickly as possible. At a minimum, a single 10 Gb Ethernet link should be used for storage hardware. If the Ceph nodes have many drives each, add additional 10 Gb Ethernet links for connectivity and throughput. Important Set up front and backside networks on separate NICs. Ceph supports a public (front-side) network and a cluster (back-side) network. The public network handles client traffic and communication with Ceph monitors. The cluster (back-side) network handles OSD heartbeats, replication, backfilling and recovery traffic. Note Red Hat recommends allocating bandwidth to the cluster (back-side) network such that it is a multiple of the front-side network, using osd_pool_default_size as the basis for the multiple on replicated pools. Red Hat also recommends running the public and cluster networks on separate NICs. When building a storage cluster consisting of multiple racks (common for large storage implementations), consider utilizing as much network bandwidth between switches as possible in a "fat tree" design for optimal performance. A typical 10 Gb Ethernet switch has 48 10 Gb ports and four 40 Gb ports. Use the 40 Gb ports on the spine for maximum throughput. Alternatively, consider aggregating unused 10 Gbps ports with QSFP+ and SFP+ cables into more 40 Gb ports to connect to another rack and spine routers. Important For network optimization, Red Hat recommends using jumbo frames for a better CPU/bandwidth ratio, and a non-blocking network switch back-plane. Red Hat Ceph Storage requires the same MTU value throughout all networking devices in the communication path, end-to-end for both public and cluster networks. Verify that the MTU value is the same on all nodes and networking equipment in the environment before using a Red Hat Ceph Storage cluster in production. Additional Resources See the Verifying and configuring the MTU value section in the Red Hat Ceph Storage Configuration Guide for more details. 2.6. Avoid using RAID solutions Ceph can replicate or erasure code objects. RAID duplicates this functionality on the block level and reduces available capacity. Consequently, RAID is an unnecessary expense. Additionally, a degraded RAID will have a negative impact on performance. Important Red Hat recommends that each hard drive be exported separately from the RAID controller as a single volume with write-back caching enabled. This requires a battery-backed or non-volatile flash memory device on the storage controller.
It is important to make sure the battery is working, as most controllers will disable write-back caching if the memory on the controller can be lost as a result of a power failure. Periodically check the batteries and replace them if necessary, as they do degrade over time. See the storage controller vendor's documentation for details. Typically, the storage controller vendor provides storage management utilities to monitor and adjust the storage controller configuration without any downtime. Using Just a Bunch of Drives (JBOD) in independent drive mode with Ceph is supported when using all Solid State Drives (SSDs), or for configurations with high numbers of drives per controller. For example, 60 drives attached to one controller. In this scenario, the write-back caching can become a source of I/O contention. Since JBOD disables write-back caching, it is ideal in this scenario. One advantage of using JBOD mode is the ease of adding or replacing drives and then exposing the drive to the operating system immediately after it is physically plugged in. 2.7. Summary of common mistakes when selecting hardware Repurposing underpowered legacy hardware for use with Ceph. Using dissimilar hardware in the same pool. Using 1Gbps networks instead of 10Gbps or greater. Neglecting to setup both public and cluster networks. Using RAID instead of JBOD. Selecting drives on a price basis without regard to performance or throughput. Having a disk controller with insufficient throughput characteristics. Use the examples in this document of Red Hat tested configurations for different workloads to avoid some of the foregoing hardware selection mistakes. 2.8. Additional Resources Supported configurations article on the Red Hat Customer Portal. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/hardware_guide/general-principles-for-selecting-hardware |
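The recovery-time figures quoted in the network section above (3 hours for 1 TB over 1 Gbps, roughly 20 minutes over 10 Gb Ethernet) can be reproduced with a back-of-envelope calculation. The sketch below is illustrative only; the 0.75 effective-throughput factor is an assumption chosen to match the quoted numbers, not a published Ceph constant.

# Estimate re-replication time after a drive failure.
# 8e12 = bits per decimal terabyte; 0.75 = assumed effective link utilization.
for tb in 1 3; do
  for gbps in 1 10; do
    awk -v tb="$tb" -v gbps="$gbps" 'BEGIN {
      hours = (tb * 8e12) / (gbps * 1e9 * 0.75) / 3600
      printf "%d TB over %d Gbps: %.1f hours\n", tb, gbps, hours
    }'
  done
done

# Output: 1 TB over 1 Gbps: 3.0 hours; 1 TB over 10 Gbps: 0.3 hours;
# 3 TB over 1 Gbps: 8.9 hours; 3 TB over 10 Gbps: 0.9 hours.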
Chapter 4. Remote health monitoring with connected clusters | Chapter 4. Remote health monitoring with connected clusters 4.1. About remote health monitoring OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. The data that is provided to Red Hat enables the benefits outlined in this document. A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a connected cluster . Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the OpenShift Container Platform Telemeter Client. Lightweight attributes are sent from connected clusters to Red Hat to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce insights about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators on OpenShift Cluster Manager Hybrid Cloud Console . More information is provided in this document about these two processes. Telemetry and Insights Operator benefits Telemetry and the Insights Operator enable the following benefits for end-users: Enhanced identification and resolution of issues . Events that might seem normal to an end-user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some issues can be more rapidly identified from this point of view and resolved without an end-user needing to open a support case or file a Jira issue . Advanced release management . OpenShift Container Platform offers the candidate , fast , and stable release channels, which enable you to choose an update strategy. The graduation of a release from fast to stable is dependent on the success rate of updates and on the events seen during upgrades. With the information provided by connected clusters, Red Hat can improve the quality of releases to stable channels and react more rapidly to issues found in the fast channels. Targeted prioritization of new features and functionality . The data collected provides insights about which areas of OpenShift Container Platform are used most. With this information, Red Hat can focus on developing the new features and functionality that have the greatest impact for our customers. A streamlined support experience . You can provide a cluster ID for a connected cluster when creating a support ticket on the Red Hat Customer Portal . This enables Red Hat to deliver a streamlined support experience that is specific to your cluster, by using the connected information. This document provides more information about that enhanced support experience. Predictive analytics . The insights displayed for your cluster on OpenShift Cluster Manager Hybrid Cloud Console are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that OpenShift Container Platform clusters are exposed to. 4.1.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. 
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use. Additional resources See the OpenShift Container Platform update documentation for more information about updating or upgrading a cluster. 4.1.1.1. Information collected by Telemetry The following information is collected by Telemetry: 4.1.1.1.1. System information Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The unique random identifier that is generated during an installation Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services The OpenShift Container Platform framework components installed in a cluster and their condition and status Events for all namespaces listed as "related objects" for a degraded Operator Information about degraded software Information about the validity of certificates The name of the provider platform that OpenShift Container Platform is deployed on and the data center location 4.1.1.1.2. Sizing Information Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of etcd members and the number of objects stored in the etcd cluster Number of application builds by build strategy type 4.1.1.1.3. Usage information Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. Additional resources See Showing data collected by Telemetry for details about how to list the attributes that Telemetry gathers from Prometheus in OpenShift Container Platform. See the upstream cluster-monitoring-operator source code for a list of the attributes that Telemetry gathers from Prometheus. Telemetry is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2. About the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. 
Users of OpenShift Container Platform can display the report of each cluster in the Insights Advisor service on Red Hat Hybrid Cloud Console. If any issues have been identified, Insights provides further details and, if available, steps on how to solve a problem. The Insights Operator does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights data collection and controls. Red Hat uses all connected cluster information to: Identify potential cluster issues and provide a solution and preventive actions in the Insights Advisor service on Red Hat Hybrid Cloud Console Improve OpenShift Container Platform by providing aggregated and critical information to product and support teams Make OpenShift Container Platform more intuitive Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2.1. Information collected by the Insights Operator The following information is collected by the Insights Operator: General information about your cluster and its components to identify issues that are specific to your OpenShift Container Platform version and environment Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set Errors that occur in the cluster components Progress information of running updates, and the status of any component upgrades Details of the platform that OpenShift Container Platform is deployed on, such as Amazon Web Services, and the region that the cluster is located in Cluster workload information transformed into discrete Secure Hash Algorithm (SHA) values, which allows Red Hat to assess workloads for security and version vulnerabilities without disclosing sensitive details If an Operator reports an issue, information is collected about core OpenShift Container Platform pods in the openshift-* and kube-* projects. This includes state, resource, security context, volume information, and more. Additional resources See Showing data collected by the Insights Operator for details about how to review the data that is collected by the Insights Operator. The Insights Operator source code is available for review and contribution. See the Insights Operator upstream project for a list of the items collected by the Insights Operator. 4.1.3. Understanding Telemetry and Insights Operator data flow The Telemeter Client collects selected time series data from the Prometheus API. The time series data is uploaded to api.openshift.com every four minutes and thirty seconds for processing. The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an archive. The archive is uploaded to OpenShift Cluster Manager Hybrid Cloud Console every two hours for processing. The Insights Operator also downloads the latest Insights analysis from OpenShift Cluster Manager Hybrid Cloud Console . This is used to populate the Insights status pop-up that is included in the Overview page in the OpenShift Container Platform web console. All of the communication with Red Hat occurs over encrypted channels by using Transport Layer Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest.
Access to the systems that handle customer data is controlled through multi-factor authentication and strict authorization controls. Access is granted on a need-to-know basis and is limited to required operations. Telemetry and Insights Operator data flow Additional resources See Monitoring overview for more information about the OpenShift Container Platform monitoring stack. See Configuring your firewall for details about configuring a firewall and enabling endpoints for Telemetry and Insights 4.1.4. Additional details about how remote health monitoring data is used The information collected to enable remote health monitoring is detailed in Information collected by Telemetry and Information collected by the Insights Operator . As further described in the preceding sections of this document, Red Hat collects data about your use of the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting, improving the offerings and user experience, responding to issues, and for billing purposes if applicable. Collection safeguards Red Hat employs technical and organizational measures designed to protect the telemetry and configuration data. Sharing Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers' use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. Third parties Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the Telemetry and configuration data. User control / enabling and disabling telemetry and configuration data collection You may disable OpenShift Container Platform Telemetry and the Insights Operator by following the instructions in Opting out of remote health reporting . 4.2. Showing data collected by remote health monitoring As an administrator, you can review the metrics collected by Telemetry and the Insights Operator. 4.2.1. Showing data collected by Telemetry You can view the cluster and components time series data captured by Telemetry. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have access to the cluster as a user with the cluster-admin role or the cluster-monitoring-view role. Procedure Log in to a cluster. 
Run the following command, which queries a cluster's Prometheus service and returns the full set of time series data captured by Telemetry: $ curl -G -k -H "Authorization: Bearer $(oc whoami -t)" \ https://$(oc get route prometheus-k8s-federate -n \ openshift-monitoring -o jsonpath="{.spec.host}")/federate \ --data-urlencode 'match[]={__name__=~"cluster:usage:.*"}' \ --data-urlencode 'match[]={__name__="count:up0"}' \ --data-urlencode 'match[]={__name__="count:up1"}' \ --data-urlencode 'match[]={__name__="cluster_version"}' \ --data-urlencode 'match[]={__name__="cluster_version_available_updates"}' \ --data-urlencode 'match[]={__name__="cluster_version_capability"}' \ --data-urlencode 'match[]={__name__="cluster_operator_up"}' \ --data-urlencode 'match[]={__name__="cluster_operator_conditions"}' \ --data-urlencode 'match[]={__name__="cluster_version_payload"}' \ --data-urlencode 'match[]={__name__="cluster_installer"}' \ --data-urlencode 'match[]={__name__="cluster_infrastructure_provider"}' \ --data-urlencode 'match[]={__name__="cluster_feature_set"}' \ --data-urlencode 'match[]={__name__="instance:etcd_object_counts:sum"}' \ --data-urlencode 'match[]={__name__="ALERTS",alertstate="firing"}' \ --data-urlencode 'match[]={__name__="code:apiserver_request_total:rate:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_memory_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="openshift:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="openshift:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="workload:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="workload:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:virt_platform_nodes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:node_instance_type_count:sum"}' \ --data-urlencode 'match[]={__name__="cnv:vmi_status_running:count"}' \ --data-urlencode 'match[]={__name__="cluster:vmi_request_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_sockets:sum"}' \ --data-urlencode 'match[]={__name__="subscription_sync_total"}' \ --data-urlencode 'match[]={__name__="olm_resolution_duration_seconds"}' \ --data-urlencode 'match[]={__name__="csv_succeeded"}' \ --data-urlencode 'match[]={__name__="csv_abnormal"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kubelet_volume_stats_used_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_used_raw_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_health_status"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_total_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_used_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_health_status"}' \ --data-urlencode 'match[]={__name__="job:ceph_osd_metadata:count"}' \ --data-urlencode 'match[]={__name__="job:kube_pv:count"}' \ --data-urlencode 'match[]={__name__="job:odf_system_pvs:count"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops:total"}' \
--data-urlencode 'match[]={__name__="job:ceph_pools_iops_bytes:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_versions_running:count"}' \ --data-urlencode 'match[]={__name__="job:noobaa_total_unhealthy_buckets:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_bucket_count:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_total_object_count:sum"}' \ --data-urlencode 'match[]={__name__="odf_system_bucket_count", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="odf_system_objects_total", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="noobaa_accounts_num"}' \ --data-urlencode 'match[]={__name__="noobaa_total_usage"}' \ --data-urlencode 'match[]={__name__="console_url"}' \ --data-urlencode 'match[]={__name__="cluster:ovnkube_master_egress_routing_via_host:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_instances:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_enabled_instance_up:max"}' \ --data-urlencode 'match[]={__name__="cluster:ingress_controller_aws_nlb_active:sum"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:min"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:max"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:avg"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:median"}' \ --data-urlencode 'match[]={__name__="cluster:openshift_route_info:tls_termination:sum"}' \ --data-urlencode 'match[]={__name__="insightsclient_request_send_total"}' \ --data-urlencode 'match[]={__name__="cam_app_workload_migrations"}' \ --data-urlencode 'match[]={__name__="cluster:apiserver_current_inflight_requests:sum:max_over_time:2m"}' \ --data-urlencode 'match[]={__name__="cluster:alertmanager_integrations:max"}' \ --data-urlencode 'match[]={__name__="cluster:telemetry_selected_series:count"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_series:sum"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_samples_appended_total:sum"}' \ --data-urlencode 'match[]={__name__="monitoring:container_memory_working_set_bytes:sum"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_series_added:topk3_sum1h"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_samples_post_metric_relabeling:topk3"}' \ --data-urlencode 'match[]={__name__="monitoring:haproxy_server_http_responses_total:sum"}' \ --data-urlencode 'match[]={__name__="rhmi_status"}' \ --data-urlencode 'match[]={__name__="status:upgrading:version:rhoam_state:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_critical_alerts:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_warning_alerts:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_percentile:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_remaining_error_budget:max"}' \ --data-urlencode 'match[]={__name__="cluster_legacy_scheduler_policy"}' \ --data-urlencode 'match[]={__name__="cluster_master_schedulable"}' \ --data-urlencode 'match[]={__name__="che_workspace_status"}' \ --data-urlencode 'match[]={__name__="che_workspace_started_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_failure_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_sum"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_count"}' \ 
--data-urlencode 'match[]={__name__="cco_credentials_mode"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolume_plugin_type_counts:sum"}' \ --data-urlencode 'match[]={__name__="visual_web_terminal_sessions_total"}' \ --data-urlencode 'match[]={__name__="acm_managed_cluster_info"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_vcenter_info:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_esxi_version_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_node_hw_version_total:sum"}' \ --data-urlencode 'match[]={__name__="openshift:build_by_strategy:sum"}' \ --data-urlencode 'match[]={__name__="rhods_aggregate_availability"}' \ --data-urlencode 'match[]={__name__="rhods_total_users"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_storage_types"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_strategies"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_agent_strategies"}' \ --data-urlencode 'match[]={__name__="appsvcs:cores_by_product:sum"}' \ --data-urlencode 'match[]={__name__="nto_custom_profiles:count"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_configmap"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_secret"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_failures_total"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_requests_total"}' \ --data-urlencode 'match[]={__name__="cluster:velero_backup_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:velero_restore_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_storage_info"}' \ --data-urlencode 'match[]={__name__="eo_es_redundancy_policy_info"}' \ --data-urlencode 'match[]={__name__="eo_es_defined_delete_namespaces_total"}' \ --data-urlencode 'match[]={__name__="eo_es_misconfigured_memory_resources_info"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_data_nodes_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_created_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_deleted_total:sum"}' \ --data-urlencode 'match[]={__name__="pod:eo_es_shards_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_cluster_management_state_info"}' \ --data-urlencode 'match[]={__name__="imageregistry:imagestreamtags_count:sum"}' \ --data-urlencode 'match[]={__name__="imageregistry:operations_count:sum"}' \ --data-urlencode 'match[]={__name__="log_logging_info"}' \ --data-urlencode 'match[]={__name__="log_collector_error_count_total"}' \ --data-urlencode 'match[]={__name__="log_forwarder_pipeline_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_input_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_output_info"}' \ --data-urlencode 'match[]={__name__="cluster:log_collected_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:log_logged_bytes_total:sum"}' \ --data-urlencode 
'match[]={__name__="cluster:kata_monitor_running_shim_count:sum"}' \ --data-urlencode 'match[]={__name__="platform:hypershift_hostedclusters:max"}' \ --data-urlencode 'match[]={__name__="platform:hypershift_nodepools:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_bucket_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_buckets_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_accounts:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_usage:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_system_health_status:max"}' \ --data-urlencode 'match[]={__name__="ocs_advanced_feature_usage"}' \ --data-urlencode 'match[]={__name__="os_image_url_override:sum"}' 4.2.2. Showing data collected by the Insights Operator You can review the data that is collected by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Find the name of the currently running pod for the Insights Operator: USD INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running) Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data The recent Insights Operator archives are now available in the insights-data directory. 4.3. Opting out of remote health reporting You may choose to opt out of reporting health and usage data for your cluster. To opt out of remote health reporting, you must: Modify the global cluster pull secret to disable remote health reporting. Update the cluster to use this modified pull secret. 4.3.1. Consequences of disabling remote health reporting In OpenShift Container Platform, customers can opt out of reporting usage information. However, connected clusters allow Red Hat to react more quickly to problems and better support our customers, as well as better understand how product upgrades impact clusters. Connected clusters also help to simplify the subscription and entitlement process and enable the OpenShift Cluster Manager service to provide an overview of your clusters and their subscription status. Red Hat strongly recommends leaving health and usage reporting enabled for pre-production and test clusters even if it is necessary to opt out for production clusters. This allows Red Hat to be a participant in qualifying OpenShift Container Platform in your environments and react more rapidly to product issues. Some of the consequences of opting out of having a connected cluster are: Red Hat will not be able to monitor the success of product upgrades or the health of your clusters without a support case being opened. Red Hat will not be able to use configuration data to better triage customer support cases and identify which configurations our customers find important. The OpenShift Cluster Manager will not show data about your clusters including health and usage information. Your subscription entitlement information must be manually entered via console.redhat.com without the benefit of automatic usage reporting. 
In restricted networks, Telemetry and Insights data can still be reported through appropriate configuration of your proxy. 4.3.2. Modifying the global cluster pull secret to disable remote health reporting You can modify your existing global cluster pull secret to disable remote health reporting. This disables both Telemetry and the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Download the global cluster pull secret to your local file system. $ oc extract secret/pull-secret -n openshift-config --to=. In a text editor, edit the .dockerconfigjson file that was downloaded. Remove the cloud.openshift.com JSON entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"<email_address>"} Save the file. You can now update your cluster to use this modified pull secret. 4.3.3. Registering your disconnected cluster Register your disconnected OpenShift Container Platform cluster on the Red Hat Hybrid Cloud Console so that your cluster is not impacted by the consequences listed in the section named "Consequences of disabling remote health reporting". Important By registering your disconnected cluster, you can continue to report your subscription usage to Red Hat. In turn, Red Hat can return accurate usage and capacity trends associated with your subscription, so that you can use the returned information to better organize subscription allocations across all of your resources. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . You can log in to the Red Hat Hybrid Cloud Console. Procedure Go to the Register disconnected cluster web page on the Red Hat Hybrid Cloud Console. Optional: To access the Register disconnected cluster web page from the home page of the Red Hat Hybrid Cloud Console, go to the Clusters navigation menu item and then select the Register cluster button. Enter your cluster's details in the provided fields on the Register disconnected cluster page. From the Subscription settings section of the page, select the subscription settings that apply to your Red Hat subscription offering. To register your disconnected cluster, select the Register cluster button. Additional resources Consequences of disabling remote health reporting How does the subscriptions service show my subscription data? (Getting Started with the Subscription Service) 4.3.4. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a registry to store images that is separate from the registry used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: $ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: $ oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry.
3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 4.4. Enabling remote health reporting If you or your organization have disabled remote health reporting, you can enable this feature again. You can see that remote health reporting is disabled from the message "Insights not available" in the Status tile on the OpenShift Container Platform Web Console Overview page. To enable remote health reporting, you must modify the global cluster pull secret with a new authorization token. Note Enabling remote health reporting enables both Insights Operator and Telemetry. 4.4.1. Modifying your global cluster pull secret to enable remote health reporting You can modify your existing global cluster pull secret to enable remote health reporting. If you have previously disabled remote health monitoring, you must first download a new pull secret with your cloud.openshift.com access token from Red Hat OpenShift Cluster Manager. Prerequisites Access to the cluster as a user with the cluster-admin role. Access to OpenShift Cluster Manager. Procedure Navigate to https://console.redhat.com/openshift/downloads . From Tokens Pull Secret , click Download . The file pull-secret.txt containing your cloud.openshift.com access token in JSON format downloads: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": " <email_address> " } } } Download the global cluster pull secret to your local file system. $ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull-secret Make a backup copy of your pull secret. $ cp pull-secret pull-secret-backup Open the pull-secret file in a text editor. Append the cloud.openshift.com JSON entry from pull-secret.txt into auths . Save the file. Update the secret in your cluster. $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret It may take several minutes for the secret to update and your cluster to begin reporting. Verification Navigate to the OpenShift Container Platform Web Console Overview page. Insights in the Status tile reports the number of issues found. 4.5. Using Insights to identify issues with your cluster Insights repeatedly analyzes the data Insights Operator sends. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. 4.5.1. About Red Hat Insights Advisor for OpenShift Container Platform You can use Insights Advisor to assess and monitor the health of your OpenShift Container Platform clusters. Whether you are concerned about individual clusters or your whole infrastructure, it is important to be aware of the exposure of your cluster infrastructure to issues that can affect service availability, fault tolerance, performance, or security. Using cluster data collected by the Insights Operator, Insights repeatedly compares that data against a library of recommendations .
Each recommendation is a set of cluster-environment conditions that can leave OpenShift Container Platform clusters at risk. The results of the Insights analysis are available in the Insights Advisor service on Red Hat Hybrid Cloud Console. In the Console, you can perform the following actions: See clusters impacted by a specific recommendation. Use robust filtering capabilities to refine your results to those recommendations. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual clusters. Share results with other stakeholders. 4.5.2. Understanding Insights Advisor recommendations Insights Advisor bundles information about various cluster states and component configurations that can negatively affect the service availability, fault tolerance, performance, or security of your clusters. This information set is called a recommendation in Insights Advisor and includes the following information: Name: A concise description of the recommendation Added: When the recommendation was published to the Insights Advisor archive Category: Whether the issue has the potential to negatively affect service availability, fault tolerance, performance, or security Total risk: A value derived from the likelihood that the condition will negatively affect your infrastructure, and the impact on operations if that were to happen Clusters: A list of clusters on which a recommendation is detected Description: A brief synopsis of the issue, including how it affects your clusters Link to associated topics: More information from Red Hat about the issue 4.5.3. Displaying potential issues with your cluster This section describes how to display the Insights report in Insights Advisor on OpenShift Cluster Manager Hybrid Cloud Console . Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can change, for example, if you fix an issue or a new issue has been detected. Prerequisites Your cluster is registered on OpenShift Cluster Manager Hybrid Cloud Console . Remote health reporting is enabled, which is the default. You are logged in to OpenShift Cluster Manager Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager Hybrid Cloud Console . Depending on the result, Insights Advisor displays one of the following: No matching recommendations found , if Insights did not identify any issues. A list of issues Insights has detected, grouped by risk (low, moderate, important, and critical). No clusters yet , if Insights has not yet analyzed the cluster. The analysis starts shortly after the cluster has been installed, registered, and connected to the internet. If any issues are displayed, click the > icon in front of the entry for more details. Depending on the issue, the details can also contain a link to more information from Red Hat about the issue. 4.5.4. Displaying all Insights Advisor recommendations The Recommendations view, by default, only displays the recommendations that are detected on your clusters. However, you can view all of the recommendations in the advisor archive. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on Red Hat Hybrid Cloud Console. You are logged in to OpenShift Cluster Manager Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager Hybrid Cloud Console . Click the X icons next to the Clusters Impacted and Status filters.
You can now browse through all of the potential recommendations for your cluster. 4.5.5. Advisor recommendation filters The Insights advisor service can return a large number of recommendations. To focus on your most critical recommendations, you can apply filters to the Advisor recommendations list to remove low-priority recommendations. By default, filters are set to only show enabled recommendations that are impacting one or more clusters. To view all or disabled recommendations in the Insights library, you can customize the filters. To apply a filter, select a filter type and then set its value based on the options that are available in the drop-down list. You can apply multiple filters to the list of recommendations. You can set the following filter types: Name: Search for a recommendation by name. Total risk: Select one or more values from Critical , Important , Moderate , and Low indicating the likelihood and the severity of a negative impact on a cluster. Impact: Select one or more values from Critical , High , Medium , and Low indicating the potential impact to the continuity of cluster operations. Likelihood: Select one or more values from Critical , High , Medium , and Low indicating the potential for a negative impact to a cluster if the recommendation comes to fruition. Category: Select one or more categories from Service Availability , Performance , Fault Tolerance , Security , and Best Practice to focus your attention on. Status: Click a radio button to show enabled recommendations (default), disabled recommendations, or all recommendations. Clusters impacted: Set the filter to show recommendations currently impacting one or more clusters, non-impacting recommendations, or all recommendations. Risk of change: Select one or more values from High , Moderate , Low , and Very low indicating the risk that the implementation of the resolution could have on cluster operations. 4.5.5.1. Filtering Insights advisor recommendations As an OpenShift Container Platform cluster manager, you can filter the recommendations that are displayed on the recommendations list. By applying filters, you can reduce the number of reported recommendations and concentrate on your highest priority recommendations. The following procedure demonstrates how to set and remove Category filters; however, the procedure is applicable to any of the filter types and respective values. Prerequisites You are logged in to the OpenShift Cluster Manager Hybrid Cloud Console . Procedure Go to Red Hat Hybrid Cloud Console OpenShift Advisor recommendations . In the main filter-type drop-down list, select the Category filter type. Expand the filter-value drop-down list and select the checkbox next to each category of recommendation you want to view. Leave the checkboxes for unnecessary categories clear. Optional: Add additional filters to further refine the list. Only recommendations from the selected categories are shown in the list. Verification After applying filters, you can view the updated recommendations list. The applied filters are added to the default filters. 4.5.5.2. Removing filters from Insights Advisor recommendations You can apply multiple filters to the list of recommendations. When ready, you can remove them individually or completely reset them. Removing filters individually Click the X icon next to each filter, including the default filters, to remove them individually. Removing all non-default filters Click Reset filters to remove only the filters that you applied, leaving the default filters in place. 4.5.6.
Disabling Insights Advisor recommendations You can disable specific recommendations that affect your clusters, so that they no longer appear in your reports. It is possible to disable a recommendation for a single cluster or all of your clusters. Note Disabling a recommendation for all of your clusters also applies to any future clusters. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager Hybrid Cloud Console . You are logged in to OpenShift Cluster Manager Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager Hybrid Cloud Console . Optional: Use the Clusters Impacted and Status filters as needed. Disable an alert by using one of the following methods: To disable an alert: Click the Options menu for that alert, and then click Disable recommendation . Enter a justification note and click Save . To view the clusters affected by this alert before disabling the alert: Click the name of the recommendation to disable. You are directed to the single recommendation page. Review the list of clusters in the Affected clusters section. Click Actions Disable recommendation to disable the alert for all of your clusters. Enter a justification note and click Save . 4.5.7. Enabling a previously disabled Insights Advisor recommendation When a recommendation is disabled for all clusters, you no longer see the recommendation in the Insights Advisor. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager Hybrid Cloud Console . You are logged in to OpenShift Cluster Manager Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager Hybrid Cloud Console . Filter the recommendations to display on the disabled recommendations: From the Status drop-down menu, select Status . From the Filter by status drop-down menu, select Disabled . Optional: Clear the Clusters impacted filter. Locate the recommendation to enable. Click the Options menu , and then click Enable recommendation . 4.5.8. Displaying the Insights status in the web console Insights repeatedly analyzes your cluster and you can display the status of identified potential issues of your cluster in the OpenShift Container Platform web console. This status shows the number of issues in the different categories and, for further details, links to the reports in OpenShift Cluster Manager Hybrid Cloud Console . Prerequisites Your cluster is registered in OpenShift Cluster Manager Hybrid Cloud Console . Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console. Procedure Navigate to Home Overview in the OpenShift Container Platform web console. Click Insights on the Status card. The pop-up window lists potential issues grouped by risk. Click the individual categories or View all recommendations in Insights Advisor to display more details. 4.6. Using the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. Additional resources The Insights Operator is installed and enabled by default. 
If you need to opt out of remote health reporting, see Opting out of remote health reporting . For more information on using Insights Advisor to identify issues with your cluster, see Using Insights to identify issues with your cluster . 4.6.1. Understanding Insights Operator alerts The Insights Operator declares alerts through the Prometheus monitoring system to the Alertmanager. You can view these alerts in the Alerting UI in the OpenShift Container Platform web console by using one of the following methods: In the Administrator perspective, click Observe Alerting . In the Developer perspective, click Observe <project_name> Alerts tab. Currently, Insights Operator sends the following alerts when the conditions are met: Table 4.1. Insights Operator alerts Alert Description InsightsDisabled Insights Operator is disabled. SimpleContentAccessNotAvailable Simple content access is not enabled in Red Hat Subscription Management. InsightsRecommendationActive Insights has an active recommendation for the cluster. 4.6.2. Disabling Insights Operator alerts To prevent the Insights Operator from sending alerts to the cluster Prometheus instance, you edit the support secret. If the support secret does not exist, you must create it when you first add custom configurations. Note that configurations within the support secret take precedence over the default settings defined in the pod.yaml file. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . On the Secrets page, select All Projects from the Project list, and then set Show default projects to on. Select the openshift-config project from the Projects list. Search for the support secret by using the Search by name field. If the secret exists: Click the Options menu , and then click Edit Secret . Click Add Key/Value . In the Key field, enter disableInsightsAlerts . In the Value field, enter True . If the secret does not exist: Click Create Key/value secret . In the Secret name field, enter support . In the Key field, enter disableInsightsAlerts . In the Value field, enter True . Click Create . After you save the changes, Insights Operator no longer sends alerts to the cluster Prometheus instance. 4.6.3. Enabling Insights Operator alerts When alerts are disabled, the Insights Operator no longer sends alerts to the cluster Prometheus instance. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . On the Secrets page, select All Projects from the Project list, and then set Show default projects to on. Select the openshift-config project from the Projects list. Search for the support secret by using the Search by name field. Click the Options menu , and then click Edit Secret . For the disableInsightsAlerts key, set the Value field to false . After you save the changes, Insights Operator again sends alerts to the cluster Prometheus instance. 4.6.4.
Downloading your Insights Operator archive Insights Operator stores gathered data in an archive located in the openshift-insights namespace of your cluster. You can download and review the data that is gathered by the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Find the name of the running pod for the Insights Operator: USD oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1 1 Replace <insights_operator_pod_name> with the pod name output from the preceding command. The recent Insights Operator archives are now available in the insights-data directory. 4.6.5. Viewing Insights Operator gather durations You can view the time it takes for the Insights Operator to gather the information contained in the archive. This helps you to understand Insights Operator resource usage and issues with Insights Advisor. Prerequisites A recent copy of your Insights Operator archive. Procedure From your archive, open /insights-operator/gathers.json . The file contains a list of Insights Operator gather operations: { "name": "clusterconfig/authentication", "duration_in_ms": 730, 1 "records_count": 1, "errors": null, "panic": null } 1 duration_in_ms is the amount of time in milliseconds for each gather operation. Inspect each gather operation for abnormalities. 4.6.6. Disabling the Insights Operator gather operations You can disable the Insights Operator gather operations. Disabling the gather operations gives you the ability to increase privacy for your organization as Insights Operator will no longer gather and send Insights cluster reports to Red Hat. This will disable Insights analysis and recommendations for your cluster without affecting other core functions that require communication with Red Hat such as cluster transfers. You can view a list of attempted gather operations for your cluster from the /insights-operator/gathers.json file in your Insights Operator archive. Be aware that some gather operations only occur when certain conditions are met and might not appear in your most recent archive. Important The InsightsDataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, use the Search by name field to find the InsightsDataGather resource definition and click it. On the CustomResourceDefinition details page, click the Instances tab. Click cluster , and then click the YAML tab. 
Disable the gather operations by performing one of the following edits to the InsightsDataGather configuration file: To disable all the gather operations, enter all under the disabledGatherers key: apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: .... spec: 1 gatherConfig: disabledGatherers: - all 2 1 The spec parameter specifies gather configurations. 2 The all value disables all gather operations. To disable individual gather operations, enter their values under the disabledGatherers key: spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info 1 Example individual gather operation Click Save . After you save the changes, the Insights Operator gather configurations are updated and the operations will no longer occur. Note Disabling gather operations degrades Insights Advisor's ability to offer effective recommendations for your cluster. 4.6.7. Enabling the Insights Operator gather operations You can enable the Insights Operator gather operations, if the gather operations have been disabled. Important The InsightsDataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, use the Search by name field to find the InsightsDataGather resource definition and click it. On the CustomResourceDefinition details page, click the Instances tab. Click cluster , and then click the YAML tab. Enable the gather operations by performing one of the following edits: To enable all disabled gather operations, remove the gatherConfig stanza: apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: .... spec: gatherConfig: 1 disabledGatherers: all 1 Remove the gatherConfig stanza to enable all gather operations. To enable individual gather operations, remove their values under the disabledGatherers key: spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info 1 Remove one or more gather operations. Click Save . After you save the changes, the Insights Operator gather configurations are updated and the affected gather operations start. Note Disabling gather operations degrades Insights Advisor's ability to offer effective recommendations for your cluster. 4.6.8. Configuring Insights Operator You can configure Insights Operator to meet the needs of your organization. The Insights Operator is configured using a combination of the default configurations in the pod.yaml file in the Insights Operator Config directory and the configurations stored in the support secret in the openshift-config namespace. The support secret does not exist by default and must be created when adding custom configurations for the first time. Configurations in the support secret override the defaults set in the pod.yaml file. 
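To check which custom attributes are currently set, you can read the support secret back from the CLI. This is an informal check rather than a documented step; it assumes that the secret, and the disableInsightsAlerts key used in the second command, already exist: USD oc get secret support -n openshift-config -o yaml USD oc get secret support -n openshift-config -o jsonpath='{.data.disableInsightsAlerts}' | base64 -d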
The table below describes the available configuration attributes: Table 4.2. Insights Operator configurable attributes Attribute name Description Value type Default value enableGlobalObfuscation Enables the global obfuscation of IP addresses and the cluster domain name Boolean false scaInterval Specifies the frequency of the simple content access entitlements download Time interval 8h scaPullDisabled Disables the simple content access entitlements download Boolean false clusterTransferInterval Specifies how often Insights Operator checks OpenShift Cluster Manager for available cluster transfers Time interval 24h disableInsightsAlerts Disables Insights Operator alerts to the cluster Prometheus instance Boolean False httpProxy , httpsProxy , noProxy Set custom proxy for Insights Operator URL no default This procedure describes how to set custom Insights Operator configurations. Important Red Hat recommends you consult Red Hat Support before making changes to the default Insights Operator configuration. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with cluster-admin role. Procedure Navigate to Workloads Secrets . On the Secrets page, select All Projects from the Project list, and then set Show default projects to on. Select the openshift-config project from the Project list. Search for the support secret using the Search by name field. If it does not exist, click Create Key/value secret to create it. Click the Options menu for the secret, and then click Edit Secret . Click Add Key/Value . Enter an attribute name with an appropriate value (see table above), and click Save . Repeat the above steps for any additional configurations. 4.7. Using remote health reporting in a restricted network You can manually gather and upload Insights Operator archives to diagnose issues from a restricted network. To use the Insights Operator in a restricted network, you must: Create a copy of your Insights Operator archive. Upload the Insights Operator archive to console.redhat.com . Additionally, you can choose to obfuscate the Insights Operator data before upload. 4.7.1. Running an Insights Operator gather operation You must run a gather operation to create an Insights Operator archive. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . 
Procedure Create a file named gather-job.yaml using this template: apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}] Copy your insights-operator image version: USD oc get -n openshift-insights deployment insights-operator -o yaml Example output apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights # ... spec: template: # ... spec: containers: - args: # ... image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 # ... 1 Specifies your insights-operator image version. Paste your image version in gather-job.yaml : apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job # ... spec: # ... template: spec: initContainers: - name: insights-operator image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts: 1 Replace any existing value with your insights-operator image version. Create the gather job: USD oc apply -n openshift-insights -f gather-job.yaml Find the name of the job pod: USD oc describe -n openshift-insights job/insights-operator-job Example output Name: insights-operator-job Namespace: openshift-insights # ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job> where insights-operator-job-<your_job> is the name of the pod. Verify that the operation has finished: USD oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator Example output I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms Save the created archive: USD oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data Clean up the job: USD oc delete -n openshift-insights job insights-operator-job 4.7.2. Uploading an Insights Operator archive You can manually upload an Insights Operator archive to console.redhat.com to diagnose potential issues. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . You have a workstation with unrestricted internet access. You have created a copy of the Insights Operator archive.
Procedure Download the dockerconfig.json file: USD oc extract secret/pull-secret -n openshift-config --to=. Copy your "cloud.openshift.com" "auth" token from the dockerconfig.json file: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": "[email protected]" } } } Upload the archive to console.redhat.com : USD curl -v -H "User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> " -H "Authorization: Bearer <your_token> " -F "upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar" https://console.redhat.com/api/ingress/v1/upload where <cluster_id> is your cluster ID, <your_token> is the token from your pull secret, and <path_to_archive> is the path to the Insights Operator archive. If the operation is successful, the command returns a "request_id" and "account_number" : Example output * Connection #0 to host console.redhat.com left intact {"request_id":"393a7cf1093e434ea8dd4ab3eb28884c","upload":{"account_number":"6274079"}}% Verification steps Log in to https://console.redhat.com/openshift . Click the Clusters menu in the left pane. To display the details of the cluster, click the cluster name. Open the Insights Advisor tab of the cluster. If the upload was successful, the tab displays one of the following: Your cluster passed all recommendations , if Insights Advisor did not identify any issues. A list of issues that Insights Advisor has detected, prioritized by risk (low, moderate, important, and critical). 4.7.3. Enabling Insights Operator data obfuscation You can enable obfuscation to mask sensitive and identifiable IPv4 addresses and cluster base domains that the Insights Operator sends to console.redhat.com . Warning Although this feature is available, Red Hat recommends keeping obfuscation disabled for a more effective support experience. Obfuscation assigns non-identifying values to cluster IPv4 addresses, and uses a translation table that is retained in memory to change IP addresses to their obfuscated versions throughout the Insights Operator archive before uploading the data to console.redhat.com . For cluster base domains, obfuscation changes the base domain to a hardcoded substring. For example, cluster-api.openshift.example.com becomes cluster-api.<CLUSTER_BASE_DOMAIN> . The following procedure enables obfuscation using the support secret in the openshift-config namespace. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If it does not exist, click Create Key/value secret to create it. Click the Options menu , and then click Edit Secret . Click Add Key/Value . Create a key named enableGlobalObfuscation with a value of true , and click Save . Navigate to Workloads Pods . Select the openshift-insights project. Find the insights-operator pod. To restart the insights-operator pod, click the Options menu , and then click Delete Pod . Verification Navigate to Workloads Secrets . Select the openshift-insights project. Search for the obfuscation-translation-table secret using the Search by name field. If the obfuscation-translation-table secret exists, then obfuscation is enabled and working. Alternatively, you can inspect /insights-operator/gathers.json in your Insights Operator archive for the value "is_global_obfuscation_enabled": true .
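The restart and the verification can also be done from the CLI. These commands are a sketch rather than part of the documented procedure; they assume cluster-admin access, and the app=insights-operator pod label is an assumption about the deployment, not something stated in this procedure: USD oc delete pod -n openshift-insights -l app=insights-operator USD oc get secret obfuscation-translation-table -n openshift-insights If the second command returns the secret, obfuscation is enabled and working.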
Additional resources For more information on how to download your Insights Operator archive, see Showing data collected by the Insights Operator . 4.8. Importing simple content access entitlements with Insights Operator Insights Operator periodically imports your simple content access entitlements from OpenShift Cluster Manager Hybrid Cloud Console and stores them in the etc-pki-entitlement secret in the openshift-config-managed namespace. Simple content access is a capability in Red Hat subscription tools which simplifies the behavior of the entitlement tooling. This feature makes it easier to consume the content provided by your Red Hat subscriptions without the complexity of configuring subscription tooling. Insights Operator imports simple content access entitlements every eight hours, but the import can be configured or disabled by using the support secret in the openshift-config namespace. Note Simple content access must be enabled in Red Hat Subscription Management for the import to function. Additional resources See About simple content access in the Red Hat Subscription Central documentation, for more information about simple content access. See Using Red Hat subscriptions in builds for more information about using simple content access entitlements in OpenShift Container Platform builds. 4.8.1. Configuring simple content access import interval You can configure how often the Insights Operator imports the simple content access entitlements by using the support secret in the openshift-config namespace. The entitlement import normally occurs every eight hours, but you can shorten this interval if you update your simple content access configuration in Red Hat Subscription Management. This procedure describes how to update the import interval to one hour. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret by using the Search by name field. If the secret exists: Click the Options menu , and then click Edit Secret . Click Add Key/Value . In the Key field, enter scaInterval . In the Value field, enter 1h . If the secret does not exist: Click Create Key/value secret . In the Secret name field, enter support . In the Key field, enter scaInterval . In the Value field, enter 1h . Click Create . Note The interval 1h can also be entered as 60m for 60 minutes. 4.8.2. Disabling simple content access import You can disable the importing of simple content access entitlements by using the support secret in the openshift-config namespace. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If the secret exists: Click the Options menu , and then click Edit Secret . Click Add Key/Value . In the Key field, enter scaPullDisabled . In the Value field, enter true . If the secret does not exist: Click Create Key/value secret . In the Secret name field, enter support . In the Key field, enter scaPullDisabled . In the Value field, enter true . Click Create . The simple content access entitlement import is now disabled. Note To enable the simple content access import again, edit the support secret and delete the scaPullDisabled key.
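To confirm from the CLI that entitlements have been imported, you can check for the secret that the Insights Operator writes. This is an informal check rather than a documented step; it assumes cluster-admin access: USD oc get secret etc-pki-entitlement -n openshift-config-managed If the command returns the secret, entitlements have been imported at least once. 4.8.3.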
Enabling a previously disabled simple content access import If the importing of simple content access entitlements is disabled, the Insights Operator does not import simple content access entitlements. You can change this behavior. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret by using the Search by name field. Click the Options menu , and then click Edit Secret . For the scaPullDisabled key, set the Value field to false . The simple content access entitlement import is now enabled. | [
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' 
--data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 
'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 
'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}'",
"INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)",
"oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data",
"oc extract secret/pull-secret -n openshift-config --to=.",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } } }",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret",
"cp pull-secret pull-secret-backup",
"set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret",
"oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running",
"oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1",
"{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: 1 gatherConfig: disabledGatherers: - all 2",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: gatherConfig: 1 disabledGatherers: all",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]",
"oc get -n openshift-insights deployment insights-operator -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights spec: template: spec: containers: - args: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job spec: template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts:",
"oc apply -n openshift-insights -f gather-job.yaml",
"oc describe -n openshift-insights job/insights-operator-job",
"Name: insights-operator-job Namespace: openshift-insights Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job>",
"oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator",
"I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms",
"oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data",
"oc delete -n openshift-insights job insights-operator-job",
"oc extract secret/pull-secret -n openshift-config --to=.",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }",
"curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload",
"* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/support/remote-health-monitoring-with-connected-clusters |
Chapter 1. Preparing to install on OpenStack | Chapter 1. Preparing to install on OpenStack You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Choosing a method to install OpenShift Container Platform on OpenStack You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.2.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster on OpenStack with customizations : You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on OpenStack in a restricted network : You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 1.2.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on RHOSP infrastructure that you provision, by using one of the following methods: Installing a cluster on OpenStack on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure. By using this installation method, you can integrate your cluster with existing infrastructure and modifications. For installations on user-provisioned infrastructure, you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. You can use the provided Ansible playbooks to assist with the deployment process. 1.3. Scanning RHOSP endpoints for legacy HTTPS certificates Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. Run the following script to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the provided script to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. 
Prerequisites On the machine where you run the script, have the following software: Bash version 4.0 or greater grep OpenStack client jq OpenSSL version 1.1.1l or greater Populate the machine with RHOSP credentials for the target cloud. Procedure Save the following script to your machine: #!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog="USD(mktemp)" san="USD(mktemp)" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints \ | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface=="public") | [USDname, .interface, .url] | join(" ")' \ | sort \ > "USDcatalog" while read -r name interface url; do # Ignore HTTP if [[ USD{url#"http://"} != "USDurl" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#"https://"} # If the schema was not HTTPS, error if [[ "USDnoschema" == "USDurl" ]]; then echo "ERROR (unknown schema): USDname USDinterface USDurl" exit 2 fi # Remove the path and only keep host and port noschema="USD{noschema%%/*}" host="USD{noschema%%:*}" port="USD{noschema##*:}" # Add the port if was implicit if [[ "USDport" == "USDhost" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName \ > "USDsan" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ "USD(grep -c "Subject Alternative Name" "USDsan" || true)" -gt 0 ]]; then echo "PASS: USDname USDinterface USDurl" else invalid=USD((invalid+1)) echo "INVALID: USDname USDinterface USDurl" fi done < "USDcatalog" # clean up temporary files rm "USDcatalog" "USDsan" if [[ USDinvalid -gt 0 ]]; then echo "USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field." exit 1 else echo "All HTTPS certificates for this cloud are valid." fi Run the script. Replace any certificates that the script reports as INVALID with certificates that contain SAN fields. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates will be rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead 1.3.1. Scanning RHOSP endpoints for legacy HTTPS certificates manually Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. If you do not have access to the prerequisite tools that are listed in "Scanning RHOSP endpoints for legacy HTTPS certificates", perform the following steps to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the following steps to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Procedure On a command line, run the following command to view the URL of RHOSP public endpoints: USD openstack catalog list Record the URL for each HTTPS endpoint that the command returns. For each public endpoint, note the host and the port. 
Tip Determine the host of an endpoint by removing the scheme, the port, and the path. For each endpoint, run the following commands to extract the SAN field of the certificate: Set a host variable: USD host=<host_name> Set a port variable: USD port=<port_number> If the URL of the endpoint does not have a port, use the value 443 . Retrieve the SAN field of the certificate: USD openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName Example output X509v3 Subject Alternative Name: DNS:your.host.example.net For each endpoint, look for output that resembles the example. If there is no output for an endpoint, the certificate of that endpoint is invalid and must be re-issued. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates are rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead | [
"#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack catalog list",
"host=<host_name>",
"port=<port_number>",
"openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName",
"X509v3 Subject Alternative Name: DNS:your.host.example.net",
"x509: certificate relies on legacy Common Name field, use SANs instead"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_openstack/preparing-to-install-on-openstack |
Machine management | Machine management OpenShift Container Platform 4.10 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team | [
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: \"\" zoneId: <zone> 21",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags",
"spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp",
"spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"providerSpec: value: spotMarketOptions: {}",
"providerSpec: placement: tenancy: dedicated",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> 11 node-role.kubernetes.io/<role>: \"\" 12 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 13 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 18 value: <custom_tag_value> 19 subnet: <infrastructure_id>-<role>-subnet 20 21 userDataSecret: name: worker-user-data 22 vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet 23 zone: \"1\" 24",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 4.8.2021122100",
"providerSpec: value: spotVMOptions: {}",
"oc edit machineset <machine-set-name>",
"providerSpec: value: osDisk: diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4",
"oc create -f <machine-set-config>.yaml",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE jmywbfb-8zqpx-worker-centralus1 1 1 1 1 15m jmywbfb-8zqpx-worker-centralus2 1 1 1 1 15m jmywbfb-8zqpx-worker-centralus3 1 1 1 1 15m",
"oc edit machineset <machine-set-name>",
"providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"providerSpec: value: preemptible: true",
"gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter",
"providerSpec: value: # disks: - type: # encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5",
"providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3",
"providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> configDrive: true 9",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data configDrive: True",
"networks: - subnets: - uuid: <machines_subnet_UUID> portSecurityEnabled: false portSecurityEnabled: false securityGroups: []",
"openstack port set --enable-port-security --security-group <infrastructure_id>-<node_role> <main_port_ID>",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 guaranteed_memory_mb: <memory_size> 22 os_disk: 23 size_gb: <disk_size> 24 network_interfaces: 25 vnic_profile_id: <vnic_profile_id> 26 credentialsSecret: name: ovirt-credentials 27 kind: OvirtMachineProviderSpec type: <workload_type> 28 auto_pinning_policy: <auto_pinning_policy> 29 hugepages: <hugepages> 30 affinityGroupsNames: - compute 31 userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'",
"oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'",
"<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>",
"oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>",
"oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'",
"disableTemplating: false userData: 1 { \"ignition\": { }, }",
"oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_datacenter_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines",
"spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{\"\\n\"}' machineset -A",
"oc get machineset -o yaml",
"oc delete machineset <machineset-name>",
"oc get nodes",
"oc get machine -n openshift-machine-api",
"oc delete machine <machine> -n openshift-machine-api",
"apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 utilizationThreshold: \"0.4\" 16",
"oc create -f <filename>.yaml 1",
"apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6",
"oc create -f <filename>.yaml 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: \"\" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags",
"spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp",
"spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 13 region: <region> 14 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 15 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 16 tags: - name: kubernetes.io/cluster/<infrastructure_id> 17 value: owned userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-worker-<zone>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> 11 node-role.kubernetes.io/infra: \"\" 12 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 13 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 18 value: <custom_tag_value> 19 subnet: <infrastructure_id>-<role>-subnet 20 21 userDataSecret: name: worker-user-data 22 vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet 23 zone: \"1\" 24 taints: 25 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 6 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 guaranteed_memory_mb: <memory_size> 22 os_disk: 23 size_gb: <disk_size> 24 network_interfaces: 25 vnic_profile_id: <vnic_profile_id> 26 credentialsSecret: name: ovirt-credentials 27 kind: OvirtMachineProviderSpec type: <workload_type> 28 auto_pinning_policy: <auto_pinning_policy> 29 hugepages: <hugepages> 30 affinityGroupsNames: - compute 31 userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved",
"tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4",
"spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.23.0",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.23.0",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s",
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"ansible-2.9-for-rhel-8-x86_64-rpms\" --enable=\"rhocp-4.10-for-rhel-8-x86_64-rpms\"",
"subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-ansible-2.9-rpms\" --enable=\"rhel-7-server-ose-4.10-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.10-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide",
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.10-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com [new_workers] mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0",
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"aws cloudformation describe-stacks --stack-name <name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc apply -f healthcheck.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/machine_management/index |
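The machine-management listings above label, taint, and verify infrastructure nodes in separate steps. As a minimal end-to-end sketch, assuming cluster-admin access and a hypothetical node name worker-2 (substitute a real name from oc get nodes), the same flow can be chained; the node name is illustrative, and the reserved taint value matches the examples above:

# Hypothetical node name; substitute a real one from "oc get nodes".
NODE=worker-2
# Add the infra role label so infrastructure workloads can select the node.
oc label node "$NODE" node-role.kubernetes.io/infra=""
# Reserve the node: only pods carrying matching tolerations are scheduled or kept here.
oc adm taint nodes "$NODE" node-role.kubernetes.io/infra=reserved:NoSchedule
oc adm taint nodes "$NODE" node-role.kubernetes.io/infra=reserved:NoExecute
# Verify that the label and taints took effect.
oc get node "$NODE" --show-labels
oc describe node "$NODE" | grep -A 2 Taints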
Chapter 3. Preparing for your AMQ Streams deployment | Chapter 3. Preparing for your AMQ Streams deployment This section shows how to prepare for an AMQ Streams deployment, describing: The prerequisites you need before you can deploy AMQ Streams How to download the AMQ Streams release artifacts to use in your deployment How to authenticate with the Red Hat registry for Kafka Connect Source-to-Image (S2I) builds (if required) How to push the AMQ Streams container images into your own registry (if required) How to set up admin roles for configuration of custom resources used in deployment Note To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs. 3.1. Deployment prerequisites To deploy AMQ Streams, make sure that: An OpenShift 4.6 to 4.8 cluster is available. AMQ Streams is based on Strimzi 0.24.x. The oc command-line tool is installed and configured to connect to the running cluster. 3.2. Downloading AMQ Streams release artifacts To install AMQ Streams, download and extract the release artifacts from the amq-streams- <version> -ocp-install-examples.zip file from the AMQ Streams download site . AMQ Streams release artifacts include sample YAML files to help you deploy the components of AMQ Streams to OpenShift, perform common operations, and configure your Kafka cluster. Use oc to deploy the Cluster Operator from the install/cluster-operator folder of the downloaded ZIP file. For more information about deploying and configuring the Cluster Operator, see Section 5.1.1, "Deploying the Cluster Operator" . In addition, if you want to use standalone installations of the Topic and User Operators with a Kafka cluster that is not managed by the AMQ Streams Cluster Operator, you can deploy them from the install/topic-operator and install/user-operator folders. Note Additionally, AMQ Streams container images are available through the Red Hat Ecosystem Catalog . However, we recommend that you use the YAML files provided to deploy AMQ Streams. 3.3. Authenticating with the container registry for Kafka Connect S2I You need to configure authentication with the Red Hat container registry ( registry.redhat.io ) before creating a container image using OpenShift builds and Source-to-Image (S2I) . The container registry is used to store AMQ Streams container images on the Red Hat Ecosystem Catalog . The Catalog contains a Kafka Connect builder image with S2I support. The OpenShift build pulls this builder image, together with your source code and binaries, and uses it to build the new container image. Note Authentication with the Red Hat container registry is only required if using Kafka Connect S2I. It is not required for the other AMQ Streams components. Prerequisites Cluster administrator access to an OpenShift Container Platform cluster. Login details for your Red Hat Customer Portal account. See Appendix A, Using your subscription . Procedure If needed, log in to your OpenShift cluster as an administrator: oc login --user system:admin --token=my-token --server=https://my-cluster.example.com:6443 Open the project that will contain the Kafka Connect S2I cluster: oc project CLUSTER-NAME Note You might have already deployed the Kafka Connect S2I cluster .
Create a docker-registry secret using your Red Hat Customer Portal account, replacing PULL-SECRET-NAME with the secret name to create: oc create secret docker-registry PULL-SECRET-NAME \ --docker-server=registry.redhat.io \ --docker-username= CUSTOMER-PORTAL-USERNAME \ --docker-password= CUSTOMER-PORTAL-PASSWORD \ --docker-email= EMAIL-ADDRESS You should see the following output: secret/ PULL-SECRET-NAME created Important You must create this docker-registry secret in every OpenShift project that will authenticate to registry.redhat.io . Link the secret to your service account to use the secret for pulling images. The service account name must match the name that the OpenShift pod uses. oc secrets link SERVICE-ACCOUNT-NAME PULL-SECRET-NAME --for=pull For example, using the default service account and a secret named my-secret : oc secrets link default my-secret --for=pull Link the secret to the builder service account to use the secret for pushing and pulling build images: oc secrets link builder PULL-SECRET-NAME Note If you do not want to use your Red Hat username and password to create the pull secret, you can create an authentication token using a registry service account. Additional resources Section 5.2.3.3, "Creating a container image using OpenShift builds and Source-to-Image" Red Hat Container Registry authentication (Red Hat Knowledgebase) Registry Service Accounts on the Red Hat Customer Portal 3.4. Pushing container images to your own registry Container images for AMQ Streams are available in the Red Hat Ecosystem Catalog . The installation YAML files provided by AMQ Streams will pull the images directly from the Red Hat Ecosystem Catalog . If you do not have access to the Red Hat Ecosystem Catalog or want to use your own container repository: Pull all container images listed here Push them into your own registry Update the image names in the installation YAML files Note Each Kafka version supported for the release has a separate image. Container image Namespace/Repository Description Kafka registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.4 registry.redhat.io/amq7/amq-streams-kafka-27-rhel8:1.8.4 AMQ Streams image for running Kafka, including: Kafka Broker Kafka Connect / S2I Kafka Mirror Maker ZooKeeper TLS Sidecars Operator registry.redhat.io/amq7/amq-streams-rhel8-operator:1.8.4 AMQ Streams image for running the operators: Cluster Operator Topic Operator User Operator Kafka Initializer Kafka Bridge registry.redhat.io/amq7/amq-streams-bridge-rhel8:1.8.4 AMQ Streams image for running the AMQ Streams Kafka Bridge 3.5. Designating AMQ Streams administrators AMQ Streams provides custom resources for configuration of your deployment. By default, permission to view, create, edit, and delete these resources is limited to OpenShift cluster administrators. AMQ Streams provides two cluster roles that you can use to assign these rights to other users: strimzi-view allows users to view and list AMQ Streams resources. strimzi-admin allows users to also create, edit or delete AMQ Streams resources. When you install these roles, they will automatically aggregate (add) these rights to the default OpenShift cluster roles. strimzi-view aggregates to the view role, and strimzi-admin aggregates to the edit and admin roles. Because of the aggregation, you might not need to assign these roles to users who already have similar rights. The following procedure shows how to assign a strimzi-admin role that allows non-cluster administrators to manage AMQ Streams resources. 
A system administrator can designate AMQ Streams administrators after the Cluster Operator is deployed. Prerequisites The AMQ Streams Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been deployed with the Cluster Operator . Procedure Create the strimzi-view and strimzi-admin cluster roles in OpenShift. oc create -f install/strimzi-admin If needed, assign the roles that provide access rights to users that require them. oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user= user1 --user= user2 | [
"login --user system:admin --token=my-token --server=https://my-cluster.example.com:6443",
"project CLUSTER-NAME",
"create secret docker-registry PULL-SECRET-NAME --docker-server=registry.redhat.io --docker-username= CUSTOMER-PORTAL-USERNAME --docker-password= CUSTOMER-PORTAL-PASSWORD --docker-email= EMAIL-ADDRESS",
"secret/ PULL-SECRET-NAME created",
"secrets link SERVICE-ACCOUNT-NAME PULL-SECRET-NAME --for=pull",
"secrets link default my-secret --for=pull",
"secrets link builder PULL-SECRET-NAME",
"create -f install/strimzi-admin",
"create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user= user1 --user= user2"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-tasks-prereqs_str |
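The registry-authentication procedure in section 3.3 above runs as three separate oc invocations. A condensed sketch, assuming the Customer Portal credentials are exported as environment variables and using the hypothetical project name my-connect and secret name rh-pull-secret (neither name is part of the product):

# Assumptions: PORTAL_USER, PORTAL_PASS, and PORTAL_EMAIL hold Red Hat Customer Portal credentials.
oc project my-connect
oc create secret docker-registry rh-pull-secret \
  --docker-server=registry.redhat.io \
  --docker-username="$PORTAL_USER" \
  --docker-password="$PORTAL_PASS" \
  --docker-email="$PORTAL_EMAIL"
# Pods pull images through the default service account; S2I builds push and pull through builder.
oc secrets link default rh-pull-secret --for=pull
oc secrets link builder rh-pull-secret

Remember that the secret must be recreated, and the links repeated, in every OpenShift project that authenticates to registry.redhat.io.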
14.3.4. Distributing and Trusting SSH CA Public Keys | 14.3.4. Distributing and Trusting SSH CA Public Keys Hosts that are to allow certificate-authenticated login from users must be configured to trust the CA's public key that was used to sign the user certificates, in order to authenticate users' certificates. In this example that is the ca_user_key.pub . Publish the ca_user_key.pub key and download it to all hosts that are required to allow remote users to log in. Alternately, copy the CA user public key to all the hosts. In a production environment, consider copying the public key to an administrator account first. The secure copy command can be used to copy the public key to remote hosts. The command has the following format: scp ~/.ssh/ca_user_key.pub root@ host_name .example.com:/etc/ssh/ Where host_name is the host name of a server that is required to authenticate users' certificates presented during the login process. Ensure that you copy the public key, not the private key. For example, as root : For remote user authentication, CA keys can be marked as trusted per-user in the ~/.ssh/authorized_keys file using the cert-authority directive or for global use by means of the TrustedUserCAKeys directive in the /etc/ssh/sshd_config file. For remote host authentication, CA keys can be marked as trusted globally in the /etc/ssh/known_hosts file or per-user in the ~/.ssh/ssh_known_hosts file. Procedure 14.2. Trusting the User Signing Key For user certificates which have one or more principals listed, and where the setting is to have global effect, edit the /etc/ssh/sshd_config file as follows: TrustedUserCAKeys /etc/ssh/ca_user_key.pub Restart sshd to make the changes take effect: To avoid being presented with the warning about an unknown host, a user's system must trust the CA's public key that was used to sign the host certificates. In this example that is ca_host_key.pub . Procedure 14.3. Trusting the Host Signing Key Extract the contents of the public key used to sign the host certificate. For example, on the CA: To configure client systems to trust servers' signed host certificates, add the contents of the ca_host_key.pub into the global known_hosts file. This will automatically check a server's advertised host certificate against the CA public key for all users every time a new machine is connected to in the domain *.example.com . Log in as root and configure the /etc/ssh/ssh_known_hosts file, as follows: Where ssh-rsa AAAAB5Wm. is the contents of ca_host_key.pub . The above configures the system to trust the CA server's host public key. This enables global authentication of the certificates presented by hosts to remote users. A condensed sketch of both procedures follows the command list below. | [
"~]# scp ~/.ssh/ca_user_key.pub root@host_name.example.com:/etc/ssh/ The authenticity of host 'host_name.example.com (10.34.74.56)' can't be established. RSA key fingerprint is fc:23:ad:ae:10:6f:d1:a1:67:ee:b1:d5:37:d4:b0:2f. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'host_name.example.com,10.34.74.56' (RSA) to the list of known hosts. root@host_name.example.com's password: ca_user_key.pub 100% 420 0.4KB/s 00:00",
"~]# service sshd restart",
"cat ~/.ssh/ca_host_key.pub ssh-rsa AAAAB5Wm. == [email protected]",
"~]# vi /etc/ssh/ssh_known_hosts A CA key, accepted for any host in *.example.com @cert-authority *.example.com ssh-rsa AAAAB5Wm."
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Distributing_and_Trusting_SSH_CA_Public_Keys |
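As referenced at the end of the section above, this is a condensed sketch of Procedures 14.2 and 14.3. It assumes the CA public keys have already been copied to /etc/ssh/ on the machines in question; the grep guard and the client-side key location are illustrative conveniences, not part of the original procedure:

# Server side, as root: trust the user-signing CA so signed user certificates are accepted globally.
grep -q '^TrustedUserCAKeys' /etc/ssh/sshd_config || \
    echo 'TrustedUserCAKeys /etc/ssh/ca_user_key.pub' >> /etc/ssh/sshd_config
service sshd restart
# Client side, as root: trust host certificates signed by the CA for any host in *.example.com.
echo "@cert-authority *.example.com $(cat /etc/ssh/ca_host_key.pub)" >> /etc/ssh/ssh_known_hosts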
Applications | Applications OpenShift Container Platform 4.7 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"",
"oc get projects",
"oc project <project_name>",
"oc status",
"oc delete project <project_name>",
"oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc describe clusterrolebinding.rbac self-provisioners",
"Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth",
"oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'",
"oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth",
"oc edit clusterrolebinding.rbac self-provisioners",
"apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"",
"oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'",
"oc new-project test",
"Error from server (Forbidden): You may not request a new project via this API.",
"You may not request a new project via this API.",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected].",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: sample-db-operators namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/redhat-developer/sample-db-operators-olm:v1 displayName: Sample DB OLM registry updateStrategy: registryPoll: interval: 30m",
"apiVersion: postgresql.baiju.dev/v1alpha1 kind: Database metadata: name: db-demo spec: image: docker.io/postgres imageName: postgres dbName: db-demo",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"triggers: - type: \"ConfigChange\"",
"triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b",
"apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6",
"apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 requests.ephemeral-storage: 2Gi 4 limits.cpu: \"2\" 5 limits.memory: 2Gi 6 limits.ephemeral-storage: 4Gi 7",
"apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7",
"oc create -f <file> [-n <project_name>]",
"oc create -f core-object-counts.yaml -n demoproject",
"oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1",
"oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4",
"resourcequota \"test\" created",
"oc describe quota test",
"Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4",
"oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'",
"openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0",
"cat gpu-quota.yaml",
"apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1",
"oc create -f gpu-quota.yaml",
"resourcequota/gpu-quota created",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1",
"apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1",
"oc create -f gpu-pod.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1",
"oc create -f gpu-pod.yaml",
"Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1",
"oc get quota -n demoproject",
"NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m",
"oc describe quota core-object-counts -n demoproject",
"Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7",
"oc create -f template.yaml -n openshift-config",
"oc get templates -n openshift-config",
"oc edit template <project_request_template> -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request",
"oc new-project <project_name>",
"oc get resourcequotas",
"oc describe resourcequotas <resource_quota_name>",
"oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"",
"oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend",
"oc describe AppliedClusterResourceQuota",
"Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: default data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8",
"kind: Deployment apiVersion: apps/v1 spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19",
"oc create -f <file-name>.yaml",
"oc describe pod health-check",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"k8s.gcr.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"k8s.gcr.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container",
"oc describe pod pod1",
". Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"k8s.gcr.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 244.116568ms",
"oc idle <service>",
"oc idle --resource-names-file <filename>",
"oc scale --replicas=1 dc <dc_name>",
"oc adm prune <object_type> <options>",
"oc adm prune groups --sync-config=path/to/sync/config [<options>]",
"oc adm prune groups --sync-config=ldap-sync-config.yaml",
"oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm",
"oc adm prune deployments [<options>]",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"oc adm prune builds [<options>]",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"",
"oc create -f <filename>.yaml",
"kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1beta1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner",
"oc adm prune images [<options>]",
"oc rollout restart deployment/image-registry -n openshift-image-registry",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m",
"oc adm prune images --prune-over-size-limit",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm",
"oc adm prune images --prune-over-size-limit --confirm",
"oc get is -n N -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"'\"sha:abz\" USD'\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'",
"myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1",
"error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client",
"error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]",
"error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge",
"service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)",
"oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'",
"time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'",
"Deleted 13374 blobs Freed up 2.835 GiB of disk space",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/applications/index |
Release Notes and Known Issues | Release Notes and Known Issues Red Hat CodeReady Workspaces 2.1 Release Notes and Known Issues for Red Hat CodeReady Workspaces 2.1 Robert Kratky [email protected] Michal Maler [email protected] Fabrice Flore-Thebault [email protected] Yana Hontyk [email protected] Red Hat Developer Group Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/release_notes_and_known_issues/index |
Chapter 6. Installation configuration parameters for bare metal | Chapter 6. Installation configuration parameters for bare metal Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 6.1. Available installation configuration parameters for bare metal The following tables specify the required, optional, and bare metal-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Consider the following information before you configure network parameters for your cluster: If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you deployed nodes in an OpenShift Container Platform cluster with a network that supports both IPv4 and non-link-local IPv6 addresses, configure your cluster to use a dual-stack network. For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. This ensures that in a multiple network interface controller (NIC) environment, a cluster can detect what NIC to use based on the available network interface. For more information, see "OVN-Kubernetes IPv6 and dual-stack limitations" in About the OVN-Kubernetes network plugin . To prevent network connectivity issues, do not install a single-stack IPv4 cluster on a host that supports dual-stack networking. 
If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Table 6.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 Required if you use networking.clusterNetwork . An IP address block. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The prefix length for an IPv6 block is between 0 and 128 . For example, 10.128.0.0/14 or fd01::/48 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. For an IPv4 network the default value is 23 . For an IPv6 network the default value is 64 . The default value is also the minimum value for IPv6. The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 or fd00::/48 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. 
String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. 
This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Additional resources OVN-Kubernetes IPv6 and dual-stack limitations | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_bare_metal/installation-config-parameters-bare-metal |
Chapter 5. Managing Red Hat Gluster Storage Servers and Volumes using Red Hat Virtualization Manager | Chapter 5. Managing Red Hat Gluster Storage Servers and Volumes using Red Hat Virtualization Manager You can create and configure Red Hat Gluster Storage volumes using Red Hat Virtualization Manager 3.3 or later by creating a separate cluster with the Enable Gluster Service option enabled. Note Red Hat Gluster Storage nodes must be managed in a separate cluster to Red Hat Virtualization hosts. If you want to configure combined management of virtualization hosts and storage servers, see the Red Hat Hyperconverged Infrastructure documentation: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.0/html/deploying_red_hat_hyperconverged_infrastructure/ A volume is a logical collection of bricks where each brick is an export directory on a server in the trusted storage pool. Most of the management operations for Red Hat Gluster Storage happen on these volumes. You can use Red Hat Virtualization Manager to create and start new volumes featuring a single global namespace. Note With the exception of the volume operations described in this section, all other Red Hat Gluster Storage functionalities must be executed from the command line. 5.1. Creating a Data Center Select the Data Centers resource tab to list all data centers in the results list. Click the New button to open the New Data Center window. Figure 5.1. New Data Center Window Enter the Name and Description of the data center. Set Type to Shared from the drop-down menu. Set Quota Mode as Disabled . Click OK . The new data center is added to the virtualization environment. It will remain Uninitialized until a cluster, host, and storage are configured. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/chap-managing_red_hat_storage_servers_and_volumes_using_red_hat_enterprise_virtualization_manager |
Chapter 9. Red Hat Enterprise Linux CoreOS (RHCOS) | Chapter 9. Red Hat Enterprise Linux CoreOS (RHCOS) 9.1. About RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) represents the generation of single-purpose container operating system technology by providing the quality standards of Red Hat Enterprise Linux (RHEL) with automated, remote upgrade features. RHCOS is supported only as a component of OpenShift Container Platform 4.14 for all OpenShift Container Platform machines. RHCOS is the only supported operating system for OpenShift Container Platform control plane, or master, machines. While RHCOS is the default operating system for all cluster machines, you can create compute machines, which are also known as worker machines, that use RHEL as their operating system. There are two general ways RHCOS is deployed in OpenShift Container Platform 4.14: If you install your cluster on infrastructure that the installation program provisions, RHCOS images are downloaded to the target platform during installation. Suitable Ignition config files, which control the RHCOS configuration, are also downloaded and used to deploy the machines. If you install your cluster on infrastructure that you manage, you must follow the installation documentation to obtain the RHCOS images, generate Ignition config files, and use the Ignition config files to provision your machines. 9.1.1. Key RHCOS features The following list describes key features of the RHCOS operating system: Based on RHEL : The underlying operating system consists primarily of RHEL components. The same quality, security, and control measures that support RHEL also support RHCOS. For example, RHCOS software is in RPM packages, and each RHCOS system starts up with a RHEL kernel and a set of services that are managed by the systemd init system. Controlled immutability : Although it contains RHEL components, RHCOS is designed to be managed more tightly than a default RHEL installation. Management is performed remotely from the OpenShift Container Platform cluster. When you set up your RHCOS machines, you can modify only a few system settings. This controlled immutability allows OpenShift Container Platform to store the latest state of RHCOS systems in the cluster so it is always able to create additional machines and perform updates based on the latest RHCOS configurations. CRI-O container runtime : Although RHCOS contains features for running the OCI- and libcontainer-formatted containers that Docker requires, it incorporates the CRI-O container engine instead of the Docker container engine. By focusing on features needed by Kubernetes platforms, such as OpenShift Container Platform, CRI-O can offer specific compatibility with different Kubernetes versions. CRI-O also offers a smaller footprint and reduced attack surface than is possible with container engines that offer a larger feature set. At the moment, CRI-O is the only engine available within OpenShift Container Platform clusters. CRI-O can use either the runC or crun container runtime to start and manage containers. For information about how to enable crun, see the documentation for creating a ContainerRuntimeConfig CR. Set of container tools : For tasks such as building, copying, and otherwise managing containers, RHCOS replaces the Docker CLI tool with a compatible set of container tools. The podman CLI tool supports many container runtime features, such as running, starting, stopping, listing, and removing containers and container images. 
The skopeo CLI tool can copy, authenticate, and sign images. You can use the crictl CLI tool to work with containers and pods from the CRI-O container engine. While direct use of these tools in RHCOS is discouraged, you can use them for debugging purposes. rpm-ostree upgrades : RHCOS features transactional upgrades using the rpm-ostree system. Updates are delivered by means of container images and are part of the OpenShift Container Platform update process. When deployed, the container image is pulled, extracted, and written to disk, then the bootloader is modified to boot into the new version. The machine will reboot into the update in a rolling manner to ensure cluster capacity is minimally impacted. bootupd firmware and bootloader updater : Package managers and hybrid systems such as rpm-ostree do not update the firmware or the bootloader. With bootupd , RHCOS users have access to a cross-distribution, system-agnostic update tool that manages firmware and boot updates in UEFI and legacy BIOS boot modes that run on modern architectures, such as x86_64, ppc64le, and aarch64. For information about how to install bootupd , see the documentation for Updating the bootloader using bootupd . Updated through the Machine Config Operator : In OpenShift Container Platform, the Machine Config Operator handles operating system upgrades. Instead of upgrading individual packages, as is done with yum upgrades, rpm-ostree delivers upgrades of the OS as an atomic unit. The new OS deployment is staged during upgrades and goes into effect on the reboot. If something goes wrong with the upgrade, a single rollback and reboot returns the system to the state. RHCOS upgrades in OpenShift Container Platform are performed during cluster updates. For RHCOS systems, the layout of the rpm-ostree file system has the following characteristics: /usr is where the operating system binaries and libraries are stored and is read-only. We do not support altering this. /etc , /boot , /var are writable on the system but only intended to be altered by the Machine Config Operator. /var/lib/containers is the graph storage location for storing container images. 9.1.2. Choosing how to configure RHCOS RHCOS is designed to deploy on an OpenShift Container Platform cluster with a minimal amount of user configuration. In its most basic form, this consists of: Starting with a provisioned infrastructure, such as on AWS, or provisioning the infrastructure yourself. Supplying a few pieces of information, such as credentials and cluster name, in an install-config.yaml file when running openshift-install . Because RHCOS systems in OpenShift Container Platform are designed to be fully managed from the OpenShift Container Platform cluster after that, directly changing an RHCOS machine is discouraged. Although limited direct access to RHCOS machines cluster can be accomplished for debugging purposes, you should not directly configure RHCOS systems. Instead, if you need to add or change features on your OpenShift Container Platform nodes, consider making changes in the following ways: Kubernetes workload objects, such as DaemonSet and Deployment : If you need to add services or other user-level features to your cluster, consider adding them as Kubernetes workload objects. Keeping those features outside of specific node configurations is the best way to reduce the risk of breaking the cluster on subsequent upgrades. 
Day-2 customizations : If possible, bring up a cluster without making any customizations to cluster nodes and make necessary node changes after the cluster is up. Those changes are easier to track later and less likely to break updates. Creating machine configs or modifying Operator custom resources are ways of making these customizations. Day-1 customizations : For customizations that you must implement when the cluster first comes up, there are ways of modifying your cluster so changes are implemented on first boot. Day-1 customizations can be done through Ignition configs and manifest files during openshift-install or by adding boot options during ISO installs provisioned by the user. Here are examples of customizations you could do on day 1: Kernel arguments : If particular kernel features or tuning is needed on nodes when the cluster first boots. Disk encryption : If your security needs require that the root file system on the nodes are encrypted, such as with FIPS support. Kernel modules : If a particular hardware device, such as a network card or video card, does not have a usable module available by default in the Linux kernel. Chronyd : If you want to provide specific clock settings to your nodes, such as the location of time servers. To accomplish these tasks, you can augment the openshift-install process to include additional objects such as MachineConfig objects. Those procedures that result in creating machine configs can be passed to the Machine Config Operator after the cluster is up. Note The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.1.3. Choosing how to deploy RHCOS Differences between RHCOS installations for OpenShift Container Platform are based on whether you are deploying on an infrastructure provisioned by the installer or by the user: Installer-provisioned : Some cloud environments offer preconfigured infrastructures that allow you to bring up an OpenShift Container Platform cluster with minimal configuration. For these types of installations, you can supply Ignition configs that place content on each node so it is there when the cluster first boots. User-provisioned : If you are provisioning your own infrastructure, you have more flexibility in how you add content to a RHCOS node. For example, you could add kernel arguments when you boot the RHCOS ISO installer to install each system. However, in most cases where configuration is required on the operating system itself, it is best to provide that configuration through an Ignition config. The Ignition facility runs only when the RHCOS system is first set up. After that, Ignition configs can be supplied later using the machine config. 9.1.4. 
About Ignition Ignition is the utility that is used by RHCOS to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. On first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines. Whether you are installing your cluster or adding machines to it, Ignition always performs the initial configuration of the OpenShift Container Platform cluster machines. Most of the actual system setup happens on each machine itself. For each machine, Ignition takes the RHCOS image and boots the RHCOS kernel. Options on the kernel command line identify the type of deployment and the location of the Ignition-enabled initial RAM disk (initramfs). 9.1.4.1. How Ignition works To create machines by using Ignition, you need Ignition config files. The OpenShift Container Platform installation program creates the Ignition config files that you need to deploy your cluster. These files are based on the information that you provide to the installation program directly or through an install-config.yaml file. The way that Ignition configures machines is similar to how tools like cloud-init or Linux Anaconda kickstart configure systems, but with some important differences: Ignition runs from an initial RAM disk that is separate from the system you are installing to. Because of that, Ignition can repartition disks, set up file systems, and perform other changes to the machine's permanent file system. In contrast, cloud-init runs as part of a machine init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node boot process. Ignition is meant to initialize systems, not change existing systems. After a machine initializes and the kernel is running from the installed system, the Machine Config Operator from the OpenShift Container Platform cluster completes all future machine configuration. Instead of completing a defined set of actions, Ignition implements a declarative configuration. It checks that all partitions, files, services, and other items are in place before the new machine starts. It then makes the changes, like copying files to disk that are necessary for the new machine to meet the specified configuration. After Ignition finishes configuring a machine, the kernel keeps running but discards the initial RAM disk and pivots to the installed system on disk. All of the new system services and other features start without requiring a system reboot. Because Ignition confirms that all new machines meet the declared configuration, you cannot have a partially configured machine. If a machine setup fails, the initialization process does not finish, and Ignition does not start the new machine. Your cluster will never contain partially configured machines. If Ignition cannot complete, the machine is not added to the cluster. You must add a new machine instead. This behavior prevents the difficult case of debugging a machine when the results of a failed configuration task are not known until something that depended on it fails at a later date. If there is a problem with an Ignition config that causes the setup of a machine to fail, Ignition will not try to use the same config to set up another machine. 
For example, a failure could result from an Ignition config made up of a parent and child config that both want to create the same file. A failure in such a case would prevent that Ignition config from being used again to set up an other machines until the problem is resolved. If you have multiple Ignition config files, you get a union of that set of configs. Because Ignition is declarative, conflicts between the configs could cause Ignition to fail to set up the machine. The order of information in those files does not matter. Ignition will sort and implement each setting in ways that make the most sense. For example, if a file needs a directory several levels deep, if another file needs a directory along that path, the later file is created first. Ignition sorts and creates all files, directories, and links by depth. Because Ignition can start with a completely empty hard disk, it can do something cloud-init cannot do: set up systems on bare metal from scratch using features such as PXE boot. In the bare metal case, the Ignition config is injected into the boot partition so that Ignition can find it and configure the system correctly. 9.1.4.2. The Ignition sequence The Ignition process for an RHCOS machine in an OpenShift Container Platform cluster involves the following steps: The machine gets its Ignition config file. Control plane machines get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a control plane machine. Ignition creates disk partitions, file systems, directories, and links on the machine. It supports RAID arrays but does not support LVM volumes. Ignition mounts the root of the permanent file system to the /sysroot directory in the initramfs and starts working in that /sysroot directory. Ignition configures all defined file systems and sets them up to mount appropriately at runtime. Ignition runs systemd temporary files to populate required files in the /var directory. Ignition runs the Ignition config files to set up users, systemd unit files, and other configuration files. Ignition unmounts all components in the permanent system that were mounted in the initramfs. Ignition starts up the init process of the new machine, which in turn starts up all other services on the machine that run during system boot. At the end of this process, the machine is ready to join the cluster and does not require a reboot. 9.2. Viewing Ignition configuration files To see the Ignition config file used to deploy the bootstrap machine, run the following command: USD openshift-install create ignition-configs --dir USDHOME/testconfig After you answer a few questions, the bootstrap.ign , master.ign , and worker.ign files appear in the directory you entered. To see the contents of the bootstrap.ign file, pipe it through the jq filter. Here's a snippet from that file: USD cat USDHOME/testconfig/bootstrap.ign | jq { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc...." 
] } ] }, "storage": { "files": [ { "overwrite": false, "path": "/etc/motd", "user": { "name": "root" }, "append": [ { "source": "data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==" } ], "mode": 420 }, ... To decode the contents of a file listed in the bootstrap.ign file, pipe the base64-encoded data string representing the contents of that file to the base64 -d command. Here's an example using the contents of the /etc/motd file added to the bootstrap machine from the output shown above: USD echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode Example output This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service Repeat those commands on the master.ign and worker.ign files to see the source of Ignition config files for each of those machine types. You should see a line like the following for the worker.ign , identifying how it gets its Ignition config from the bootstrap machine: "source": "https://api.myign.develcluster.example.com:22623/config/worker", Here are a few things you can learn from the bootstrap.ign file: Format: The format of the file is defined in the Ignition config spec . Files of the same format are used later by the MCO to merge changes into a machine's configuration. Contents: Because the bootstrap machine serves the Ignition configs for other machines, both master and worker machine Ignition config information is stored in the bootstrap.ign , along with the bootstrap machine's configuration. Size: The file is more than 1300 lines long, with path to various types of resources. The content of each file that will be copied to the machine is actually encoded into data URLs, which tends to make the content a bit clumsy to read. (Use the jq and base64 commands shown previously to make the content more readable.) Configuration: The different sections of the Ignition config file are generally meant to contain files that are just dropped into a machine's file system, rather than commands to modify existing files. For example, instead of having a section on NFS that configures that service, you would just add an NFS configuration file, which would then be started by the init process when the system comes up. users: A user named core is created, with your SSH key assigned to that user. This allows you to log in to the cluster with that user name and your credentials. storage: The storage section identifies files that are added to each machine. A few notable files include /root/.docker/config.json (which provides credentials your cluster needs to pull from container image registries) and a bunch of manifest files in /opt/openshift/manifests that are used to configure your cluster. systemd: The systemd section holds content used to create systemd unit files. 
Those files are used to start up services at boot time, as well as manage those services on running systems. Primitives: Ignition also exposes low-level primitives that other tools can build on. 9.3. Changing Ignition configs after installation Machine config pools manage a cluster of nodes and their corresponding machine configs. Machine configs contain configuration information for a cluster. To list all machine config pools that are known: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False To list all machine configs: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m The Machine Config Operator acts somewhat differently than Ignition when it comes to applying these machine configs. The machine configs are read in order (from 00* to 99*). Labels inside the machine configs identify the type of node each is for (master or worker). If the same file appears in multiple machine config files, the last one wins. So, for example, any file that appears in a 99* file would replace the same file that appeared in a 00* file. The input MachineConfig objects are unioned into a "rendered" MachineConfig object, which will be used as a target by the operator and is the value you can see in the machine config pool. To see what files are being managed from a machine config, look for "Path:" inside a particular MachineConfig object. For example: USD oc describe machineconfigs 01-worker-container-runtime | grep Path: Example output Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf Be sure to give the machine config file a later name (such as 10-worker-container-runtime). Keep in mind that the content of each file is in URL-style data. Then apply the new machine config to the cluster. | [
"openshift-install create ignition-configs --dir USDHOME/testconfig",
"cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },",
"echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode",
"This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service",
"\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",",
"USD oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m",
"oc describe machineconfigs 01-worker-container-runtime | grep Path:",
"Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/architecture/architecture-rhcos |
Chapter 14. Enabling and disabling features | Chapter 14. Enabling and disabling features Red Hat build of Keycloak has packed some functionality in features, including some disabled features, such as Technology Preview and deprecated features. Other features are enabled by default, but you can disable them if they do not apply to your use of Red Hat build of Keycloak. 14.1. Enabling features Some supported features, and all preview features, are disabled by default. To enable a feature, enter this command: bin/kc.[sh|bat] build --features="<name>[,<name>]" For example, to enable docker and token-exchange , enter this command: bin/kc.[sh|bat] build --features="docker,token-exchange" To enable all preview features, enter this command: bin/kc.[sh|bat] build --features="preview" Enabled feature may be versioned, or unversioned. If you use a versioned feature name, e.g. feature:v1, that exact feature version will be enabled as long as it still exists in the runtime. If you instead use an unversioned name, e.g. just feature, the selection of the particular supported feature version may change from release to release according to the following precedence: The highest default supported version The highest non-default supported version The highest deprecated version The highest preview version The highest experimental version 14.2. Disabling features To disable a feature that is enabled by default, enter this command: bin/kc.[sh|bat] build --features-disabled="<name>[,<name>]" For example to disable impersonation , enter this command: bin/kc.[sh|bat] build --features-disabled="impersonation" It is not allowed to have a feature in both the features-disabled list and the features list. When a feature is disabled all versions of that feature are disabled. 14.3. Supported features The following list contains supported features that are enabled by default, and can be disabled if not needed. account-api Account Management REST API account-v3 Account Console version 3 admin-api Admin API admin-v2 New Admin Console authorization Authorization Service ciba OpenID Connect Client Initiated Backchannel Authentication (CIBA) client-policies Client configuration policies device-flow OAuth 2.0 Device Authorization Grant hostname-v2 Hostname Options V2 impersonation Ability for admins to impersonate users kerberos Kerberos login-v2 New Login Theme organization Organization support within realms par OAuth 2.0 Pushed Authorization Requests (PAR) persistent-user-sessions Persistent online user sessions across restarts and upgrades step-up-authentication Step-up Authentication web-authn W3C Web Authentication (WebAuthn) 14.3.1. Disabled by default The following list contains supported features that are disabled by default, and can be enabled if needed. docker Docker Registry protocol fips FIPS 140-2 mode multi-site Multi-site support 14.4. Preview features Preview features are disabled by default and are not recommended for use in production. These features may change or be removed at a future release. admin-fine-grained-authz Fine-Grained Admin Permissions client-secret-rotation Client Secret Rotation dpop OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer opentelemetry OpenTelemetry Tracing passkeys Passkeys recovery-codes Recovery codes scripts Write custom authenticators using JavaScript token-exchange Token Exchange Service update-email Update Email Action 14.5. Deprecated features The following list contains deprecated features that will be removed in a future release. 
14.2. Disabling features
To disable a feature that is enabled by default, enter this command:
bin/kc.[sh|bat] build --features-disabled="<name>[,<name>]"
For example, to disable impersonation, enter this command:
bin/kc.[sh|bat] build --features-disabled="impersonation"
A feature must not appear in both the features-disabled list and the features list. When a feature is disabled, all versions of that feature are disabled.
14.3. Supported features
The following list contains supported features that are enabled by default, and can be disabled if not needed.
account-api: Account Management REST API
account-v3: Account Console version 3
admin-api: Admin API
admin-v2: New Admin Console
authorization: Authorization Service
ciba: OpenID Connect Client Initiated Backchannel Authentication (CIBA)
client-policies: Client configuration policies
device-flow: OAuth 2.0 Device Authorization Grant
hostname-v2: Hostname Options V2
impersonation: Ability for admins to impersonate users
kerberos: Kerberos
login-v2: New Login Theme
organization: Organization support within realms
par: OAuth 2.0 Pushed Authorization Requests (PAR)
persistent-user-sessions: Persistent online user sessions across restarts and upgrades
step-up-authentication: Step-up Authentication
web-authn: W3C Web Authentication (WebAuthn)
14.3.1. Disabled by default
The following list contains supported features that are disabled by default, and can be enabled if needed.
docker: Docker Registry protocol
fips: FIPS 140-2 mode
multi-site: Multi-site support
14.4. Preview features
Preview features are disabled by default and are not recommended for use in production. These features may change or be removed in a future release.
admin-fine-grained-authz: Fine-Grained Admin Permissions
client-secret-rotation: Client Secret Rotation
dpop: OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer
opentelemetry: OpenTelemetry Tracing
passkeys: Passkeys
recovery-codes: Recovery codes
scripts: Write custom authenticators using JavaScript
token-exchange: Token Exchange Service
update-email: Update Email Action
14.5. Deprecated features
The following list contains deprecated features that will be removed in a future release. These features are disabled by default.
login-v1: Legacy Login Theme
14.6. Relevant options
features 🛠 — Enables a set of one or more features.
CLI: --features
Env: KC_FEATURES
Value: account-api[:v1], account[:v3], admin-api[:v1], admin-fine-grained-authz[:v1], admin[:v2], authorization[:v1], cache-embedded-remote-store[:v1], ciba[:v1], client-policies[:v1], client-secret-rotation[:v1], client-types[:v1], clusterless[:v1], declarative-ui[:v1], device-flow[:v1], docker[:v1], dpop[:v1], dynamic-scopes[:v1], fips[:v1], hostname[:v2], impersonation[:v1], kerberos[:v1], login[:v2,v1], multi-site[:v1], oid4vc-vci[:v1], opentelemetry[:v1], organization[:v1], par[:v1], passkeys[:v1], persistent-user-sessions[:v1], preview, recovery-codes[:v1], scripts[:v1], step-up-authentication[:v1], token-exchange[:v1], transient-users[:v1], update-email[:v1], web-authn[:v1]
features-disabled 🛠 — Disables a set of one or more features.
CLI: --features-disabled
Env: KC_FEATURES_DISABLED
Value: account, account-api, admin, admin-api, admin-fine-grained-authz, authorization, cache-embedded-remote-store, ciba, client-policies, client-secret-rotation, client-types, clusterless, declarative-ui, device-flow, docker, dpop, dynamic-scopes, fips, impersonation, kerberos, login, multi-site, oid4vc-vci, opentelemetry, organization, par, passkeys, persistent-user-sessions, preview, recovery-codes, scripts, step-up-authentication, token-exchange, transient-users, update-email, web-authn
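As the option reference above shows, each option can also be supplied as an environment variable instead of a CLI flag. An equivalent, illustrative form of the earlier docker and token-exchange example (Linux form shown):
KC_FEATURES="docker,token-exchange" bin/kc.sh build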
Chapter 4. OADP Application backup and restore
4.1. Introduction to OpenShift API for Data Protection
The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs).
However, OADP does not serve as a disaster recovery solution for etcd or OpenShift Container Platform Operators. OADP support is provided for customer workload namespaces and cluster-scope resources. Full cluster backup and restore are not supported.
4.1.1. OpenShift API for Data Protection APIs
OpenShift API for Data Protection (OADP) provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources.
OADP provides the following APIs:
Backup
Restore
Schedule
BackupStorageLocation
VolumeSnapshotLocation
4.1.1.1. Support for OpenShift API for Data Protection
Table 4.1. Supported versions of OADP
OADP 1.4 — OCP versions: 4.14, 4.15, 4.16, 4.17; general availability: 10 Jul 2024; full support ends: release of 1.5; maintenance ends: release of 1.6; EUS: 27 Jun 2026 (EUS must be on OCP 4.16); EUS Term 2: 27 Jun 2027 (EUS Term 2 must be on OCP 4.16)
OADP 1.3 — OCP versions: 4.12, 4.13, 4.14, 4.15; general availability: 29 Nov 2023; full support ends: 10 Jul 2024; maintenance ends: release of 1.5; EUS: 31 Oct 2025 (EUS must be on OCP 4.14); EUS Term 2: 31 Oct 2026 (EUS Term 2 must be on OCP 4.14)
4.1.1.1.1. Unsupported versions of the OADP Operator
Table 4.2. Versions of the OADP Operator that are no longer supported
OADP 1.2 — general availability: 14 Jun 2023; full support ended: 29 Nov 2023; maintenance ended: 10 Jul 2024
OADP 1.1 — general availability: 01 Sep 2022; full support ended: 14 Jun 2023; maintenance ended: 29 Nov 2023
OADP 1.0 — general availability: 09 Feb 2022; full support ended: 01 Sep 2022; maintenance ended: 14 Jun 2023
For more details about EUS, see Extended Update Support. For more details about EUS Term 2, see Extended Update Support Term 2.
Additional resources
Backing up etcd
4.2. OADP release notes
4.2.1. OADP 1.4 release notes
The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.
Note
For additional information about OADP, see OpenShift API for Data Protection (OADP) FAQs
4.2.1.1. OADP 1.4.3 release notes
The OpenShift API for Data Protection (OADP) 1.4.3 release notes list the following new feature.
4.2.1.1.1. New features
Notable changes in the kubevirt velero plugin in version 0.7.1
With this release, the kubevirt velero plugin has been updated to version 0.7.1. Notable improvements include the following bug fix and new features:
Virtual machine instances (VMIs) are no longer ignored from backup when the owner VM is excluded.
Object graphs now include all extra objects during backup and restore operations.
Optionally generated labels are now added to new firmware Universally Unique Identifiers (UUIDs) during restore operations.
Switching VM run strategies during restore operations is now possible.
Clearing a MAC address by label is now supported.
The restore-specific checks during the backup operation are now skipped.
The VirtualMachineClusterInstancetype and VirtualMachineClusterPreference custom resource definitions (CRDs) are now supported.
4.2.1.2. OADP 1.4.2 release notes
The OpenShift API for Data Protection (OADP) 1.4.2 release notes list new features, resolved issues and bugs, and known issues.
4.2.1.2.1. New features
Backing up different volumes in the same namespace by using the VolumePolicy feature is now possible
With this release, Velero provides resource policies to back up different volumes in the same namespace by using the VolumePolicy feature. The supported VolumePolicy feature to back up different volumes includes skip, snapshot, and fs-backup actions. OADP-1071
File system backup and data mover can now use short-term credentials
File system backup and data mover can now use short-term credentials such as AWS Security Token Service (STS) and GCP WIF. With this support, backup is successfully completed without any PartiallyFailed status. OADP-5095
4.2.1.2.2. Resolved issues
DPA now reports errors if VSL contains an incorrect provider value
Previously, if the provider of a Volume Snapshot Location (VSL) spec was incorrect, the Data Protection Application (DPA) reconciled successfully. With this update, DPA reports errors and requests a valid provider value. OADP-5044
Data Mover restore is successful irrespective of using different OADP namespaces for backup and restore
Previously, when a backup operation was executed by using OADP installed in one namespace but was restored by using OADP installed in a different namespace, the Data Mover restore failed. With this update, Data Mover restore is now successful. OADP-5460
SSE-C backup works with the calculated MD5 of the secret key
Previously, backup failed with the following error: Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key. With this update, the missing Server-Side Encryption with Customer-Provided Keys (SSE-C) base64 and MD5 hash are now fixed. As a result, SSE-C backup works with the calculated MD5 of the secret key. In addition, incorrect error handling for the customerKey size is also fixed. OADP-5388
For a complete list of all issues resolved in this release, see the list of OADP 1.4.2 resolved issues in Jira.
4.2.1.2.3. Known issues
The nodeSelector spec is not supported for the Data Mover restore action
When a Data Protection Application (DPA) is created with the nodeSelector field set in the nodeAgent parameter, Data Mover restore partially fails instead of completing the restore operation. OADP-5260
The S3 storage does not use proxy environment when TLS skip verify is specified
In the image registry backup, the S3 storage does not use the proxy environment when the insecureSkipTLSVerify parameter is set to true. OADP-3143
Kopia does not delete artifacts after backup expiration
Even after you delete a backup, Kopia does not delete the volume artifacts from the ${bucket_name}/kopia/$openshift-adp path on the S3 location after the backup expires. For more information, see "About Kopia repository maintenance". OADP-5131
Additional resources
About Kopia repository maintenance
4.2.1.3. OADP 1.4.1 release notes
The OpenShift API for Data Protection (OADP) 1.4.1 release notes list new features, resolved issues and bugs, and known issues.
4.2.1.3.1. New features
New DPA fields to update client qps and burst
You can now change the Velero server Kubernetes API queries per second and burst values by using the new Data Protection Application (DPA) fields. The new DPA fields are spec.configuration.velero.client-qps and spec.configuration.velero.client-burst, which both default to 100. OADP-4076
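A minimal sketch of how these two fields might look in a DPA; the field paths come from the note above, while the metadata and the raised values are illustrative:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      client-qps: 150   # illustrative value; defaults to 100
      client-burst: 200 # illustrative value; defaults to 100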
Enabling non-default algorithms with Kopia
With this update, you can now configure the hash, encryption, and splitter algorithms in Kopia to select non-default options to optimize performance for different backup workloads. To configure these algorithms, set the env variables of the Velero pod in the podConfig section of the DataProtectionApplication (DPA) configuration. If the variables are not set, or an unsupported algorithm is chosen, Kopia defaults to its standard algorithms. OADP-4640
4.2.1.3.2. Resolved issues
Restoring a backup without pods is now successful
Previously, restoring a backup without pods and having StorageClass VolumeBindingMode set as WaitForFirstConsumer resulted in the PartiallyFailed status with an error: fail to patch dynamic PV, err: context deadline exceeded. With this update, patching the dynamic PV is skipped and restoring a backup is successful without any PartiallyFailed status. OADP-4231
PodVolumeBackup CR now displays correct message
Previously, the PodVolumeBackup custom resource (CR) generated an incorrect message, which was: get a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed". With this update, the message produced is now: found a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed". OADP-4224
Overriding imagePullPolicy is now possible with DPA
Previously, OADP set the imagePullPolicy parameter to Always for all images. With this update, OADP checks if each image contains a sha256 or sha512 digest; if so, it sets imagePullPolicy to IfNotPresent; otherwise imagePullPolicy is set to Always. You can now override this policy by using the new spec.containerImagePullPolicy DPA field. OADP-4172
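A sketch of overriding the policy through that field; the field path is taken from the note above and the chosen value and metadata are illustrative:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  containerImagePullPolicy: IfNotPresent  # illustrative; overrides the digest-based default
  configuration:
    velero:
      defaultPlugins:
      - openshift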
OADP Velero can now retry updating the restore status if initial update fails
Previously, OADP Velero failed to update the restored CR status. This left the status at InProgress indefinitely. Components that relied on the backup and restore CR status to determine completion would fail. With this update, the restore CR status correctly proceeds to the Completed or Failed status. OADP-3227
Restoring BuildConfig Build from a different cluster is successful without any errors
Previously, when performing a restore of the BuildConfig Build resource from a different cluster, the application generated an error on TLS verification to the internal image registry. The resulting error was: failed to verify certificate: x509: certificate signed by unknown authority. With this update, the restore of the BuildConfig build resources to a different cluster can proceed successfully without generating the failed to verify certificate error. OADP-4692
Restoring an empty PVC is successful
Previously, downloading data failed while restoring an empty persistent volume claim (PVC). It failed with the following error: data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found. With this update, the downloading of data proceeds to the correct conclusion when restoring an empty PVC and the error message is not generated. OADP-3106
There is no Velero memory leak in CSI and DataMover plugins
Previously, a Velero memory leak was caused by using the CSI and DataMover plugins. When the backup ended, the Velero plugin instance was not deleted and the memory leak consumed memory until an Out of Memory (OOM) condition was generated in the Velero pod. With this update, there is no resulting Velero memory leak when using the CSI and DataMover plugins. OADP-4448
Post-hook operation does not start before the related PVs are released
Previously, due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the Data Mover persistent volume claim (PVC) releases the persistent volumes (PVs) of the related pods. This problem would cause the backup to fail with a PartiallyFailed status. With this update, the post-hook operation is not started until the related PVs are released by the Data Mover PVC, eliminating the PartiallyFailed backup status. OADP-3140
Deploying a DPA works as expected in namespaces with more than 37 characters
When you install the OADP Operator in a namespace with more than 37 characters to create a new DPA, labeling the "cloud-credentials" Secret fails and the DPA reports a labeling error. With this update, creating a DPA does not fail in namespaces with more than 37 characters in the name. OADP-3960
Restore is successfully completed by overriding the timeout error
Previously, in a large scale environment, the restore operation would result in a PartiallyFailed status with the error: fail to patch dynamic PV, err: context deadline exceeded. With this update, the resourceTimeout Velero server argument is used to override this timeout error, resulting in a successful restore. OADP-4344
For a complete list of all issues resolved in this release, see the list of OADP 1.4.1 resolved issues in Jira.
4.2.1.3.3. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning the error CrashLoopBackoff state after restoring OADP. The StatefulSet controller then recreates these pods and they run normally. OADP-4407
Deployment referencing ImageStream is not restored properly leading to corrupted pod and volume contents
During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB, and the postHook, is terminated prematurely.
During the restore operation, the OpenShift Container Platform controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which Velero runs the FSB along with the post-hook. For more information about image stream triggers, see Triggering updates on image stream changes.
The workaround for this behavior is a two-step restore process (a CR-based sketch follows this procedure):
Perform a restore excluding the Deployment resources, for example:
$ velero restore create <RESTORE_NAME> \
  --from-backup <BACKUP_NAME> \
  --exclude-resources=deployment.apps
Once the first restore is successful, perform a second restore by including these resources, for example:
$ velero restore create <RESTORE_NAME> \
  --from-backup <BACKUP_NAME> \
  --include-resources=deployment.apps
OADP-3954
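The same two-step flow can also be written as Velero Restore custom resources instead of CLI calls — a sketch that reuses the placeholder names from the commands above:
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <RESTORE_NAME>-step1
  namespace: openshift-adp
spec:
  backupName: <BACKUP_NAME>
  excludedResources:
  - deployment.apps
---
# Second step, applied after the first restore completes:
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <RESTORE_NAME>-step2
  namespace: openshift-adp
spec:
  backupName: <BACKUP_NAME>
  includedResources:
  - deployment.apps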
4.2.1.4. OADP 1.4.0 release notes
The OpenShift API for Data Protection (OADP) 1.4.0 release notes list resolved issues and known issues.
4.2.1.4.1. Resolved issues
Restore works correctly in OpenShift Container Platform 4.16
Previously, while restoring the deleted application namespace, the restore operation partially failed with the resource name may not be empty error in OpenShift Container Platform 4.16. With this update, restore works as expected in OpenShift Container Platform 4.16. OADP-4075
Data Mover backups work properly in the OpenShift Container Platform 4.16 cluster
Previously, Velero was using an earlier version of the SDK where the Spec.SourceVolumeMode field did not exist. As a consequence, Data Mover backups failed in the OpenShift Container Platform 4.16 cluster on the external snapshotter with version 4.2. With this update, the external snapshotter is upgraded to version 7.0 and later. As a result, backups do not fail in the OpenShift Container Platform 4.16 cluster. OADP-3922
For a complete list of all issues resolved in this release, see the list of OADP 1.4.0 resolved issues in Jira.
4.2.1.4.2. Known issues
Backup fails when checksumAlgorithm is not set for MCG
While performing a backup of any application with Noobaa as the backup location, if the checksumAlgorithm configuration parameter is not set, backup fails. To fix this problem, if you do not provide a value for checksumAlgorithm in the Backup Storage Location (BSL) configuration, an empty value is added. The empty value is only added for BSLs that are created using the Data Protection Application (DPA) custom resource (CR), and this value is not added if BSLs are created using any other method. OADP-4274
For a complete list of all known issues in this release, see the list of OADP 1.4.0 known issues in Jira.
4.2.1.4.3. Upgrade notes
Note
Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.
4.2.1.4.3.1. Changes from OADP 1.3 to 1.4
The Velero server has been updated from version 1.12 to 1.14. Note that there are no changes in the Data Protection Application (DPA).
This changes the following:
The velero-plugin-for-csi code is now available in the Velero code, which means an init container is no longer required for the plugin.
Velero changed the client Burst and QPS defaults from 30 and 20 to 100 and 100, respectively.
The velero-plugin-for-aws plugin updated the default value of the spec.config.checksumAlgorithm field in BackupStorageLocation objects (BSLs) from "" (no checksum calculation) to the CRC32 algorithm. For more information, see Velero plugins for AWS Backup Storage Location. The checksum algorithm types are known to work only with AWS. Several S3 providers require the md5sum to be disabled by setting the checksum algorithm to "". Confirm md5sum algorithm support and configuration with your storage provider.
In OADP 1.4, the default value for BSLs created within the DPA for this configuration is "". This default value means that the md5sum is not checked, which is consistent with OADP 1.3. For BSLs created within the DPA, update it by using the spec.backupLocations[].velero.config.checksumAlgorithm field in the DPA. If your BSLs are created outside the DPA, you can update this configuration by using spec.config.checksumAlgorithm in the BSLs.
4.2.1.4.3.2. Backing up the DPA configuration
You must back up your current DataProtectionApplication (DPA) configuration.
Procedure
Save your current DPA configuration by running the following command:
Example command
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
4.2.1.4.3.3. Upgrading the OADP Operator
Use the following procedure when upgrading the OpenShift API for Data Protection (OADP) Operator.
Procedure
Change your subscription channel for the OADP Operator from stable-1.3 to stable-1.4.
Wait for the Operator and containers to update and restart.
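Before moving on, you can confirm that the new channel is active. This check is illustrative and assumes the Operator's Subscription object is named redhat-oadp-operator and lives in the openshift-adp namespace; adjust both to your installation:
$ oc get subscription redhat-oadp-operator -n openshift-adp -o jsonpath='{.spec.channel}'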
Additional resources
Updating installed Operators
4.2.1.4.4. Converting DPA to the new version
To upgrade from OADP 1.3 to 1.4, no Data Protection Application (DPA) changes are required.
4.2.1.4.5. Verifying the upgrade
Use the following procedure to verify the upgrade.
Procedure
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
Example output
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
Example output
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify the type is set to Reconciled.
Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example output
NAME PHASE LAST VALIDATED AGE DEFAULT
dpa-sample-1 Available 1s 3d16h true
4.2.2. OADP 1.3 release notes
The release notes for OpenShift API for Data Protection (OADP) 1.3 describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.
4.2.2.1. OADP 1.3.6 release notes
OpenShift API for Data Protection (OADP) 1.3.6 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.5.
4.2.2.2. OADP 1.3.5 release notes
OpenShift API for Data Protection (OADP) 1.3.5 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.4.
4.2.2.3. OADP 1.3.4 release notes
The OpenShift API for Data Protection (OADP) 1.3.4 release notes list resolved issues and known issues.
4.2.2.3.1. Resolved issues
The backup spec.resourcepolicy.kind parameter is now case-insensitive
Previously, the backup spec.resourcepolicy.kind parameter was only supported as a lowercase string. With this fix, it is now case-insensitive. OADP-2944
Use olm.maxOpenShiftVersion to prevent cluster upgrade to OCP 4.16 version
The cluster operator-lifecycle-manager operator must not be upgraded between minor OpenShift Container Platform versions. Using the olm.maxOpenShiftVersion parameter prevents upgrading to OpenShift Container Platform 4.16 version when OADP 1.3 is installed. To upgrade to OpenShift Container Platform 4.16 version, upgrade OADP 1.3 on OCP 4.15 version to OADP 1.4. OADP-4803
BSL and VSL are removed from the cluster
Previously, when any Data Protection Application (DPA) was modified to remove the Backup Storage Locations (BSL) or Volume Snapshot Locations (VSL) from the backupLocations or snapshotLocations section, BSL or VSL were not removed from the cluster until the DPA was deleted. With this update, BSL/VSL are removed from the cluster. OADP-3050
DPA reconciles and validates the secret key
Previously, the Data Protection Application (DPA) reconciled successfully on the wrong Volume Snapshot Locations (VSL) secret key name. With this update, DPA validates the secret key name before reconciling on any VSL. OADP-3052
Velero's cloud credential permissions are now restrictive
Previously, Velero's cloud credential permissions were mounted with the 0644 permissions.
As a consequence, anyone could read the /credentials/cloud file apart from the owner and group, making it easier to access sensitive information such as storage access keys. With this update, the permissions of this file are updated to 0640, and this file cannot be accessed by other users except the owner and group.
Warning is displayed when ArgoCD managed namespace is included in the backup
A warning is displayed during the backup operation when ArgoCD and Velero manage the same namespace. OADP-4736
The list of security fixes that are included in this release is documented in the RHSA-2024:9960 advisory.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.4 resolved issues in Jira.
4.2.2.3.2. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restore
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally. OADP-3767
defaultVolumesToFSBackup and defaultVolumesToFsBackup flags are not identical
The dpa.spec.configuration.velero.defaultVolumesToFSBackup flag is not identical to the backup.spec.defaultVolumesToFsBackup flag, which can lead to confusion. OADP-3692
PodVolumeRestore works even though the restore is marked as failed
The podvolumerestore continues the data transfer even though the restore is marked as failed. OADP-3039
Velero is unable to skip restoring of initContainer spec
Velero might restore the restore-wait init container even though it is not required. OADP-3759
4.2.2.4. OADP 1.3.3 release notes
The OpenShift API for Data Protection (OADP) 1.3.3 release notes list resolved issues and known issues.
4.2.2.4.1. Resolved issues
OADP fails when its namespace name is longer than 37 characters
When installing the OADP Operator in a namespace with more than 37 characters and when creating a new DPA, labeling the cloud-credentials secret fails. With this release, the issue has been fixed. OADP-4211
OADP image PullPolicy set to Always
In previous versions of OADP, the image PullPolicy of the adp-controller-manager and Velero pods was set to Always. This was problematic in edge scenarios where there could be limited network bandwidth to the registry, resulting in slow recovery time following a pod restart. In OADP 1.3.3, the image PullPolicy of the openshift-adp-controller-manager and Velero pods is set to IfNotPresent.
The list of security fixes that are included in this release is documented in the RHSA-2024:4982 advisory.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.3 resolved issues in Jira.
4.2.2.4.2. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally. OADP-3767
4.2.2.5. OADP 1.3.2 release notes
The OpenShift API for Data Protection (OADP) 1.3.2 release notes list resolved issues and known issues.
4.2.2.5.1.
Resolved issues DPA fails to reconcile if a valid custom secret is used for BSL DPA fails to reconcile if a valid custom secret is used for Backup Storage Location (BSL), but the default secret is missing. The workaround is to create the required default cloud-credentials initially. When the custom secret is re-created, it can be used and checked for its existence. OADP-3193 CVE-2023-45290: oadp-velero-container : Golang net/http : Memory exhaustion in Request.ParseMultipartForm A flaw was found in the net/http Golang standard library package, which impacts versions of OADP. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue , Request.PostFormValue , or Request.FormFile , limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2023-45290 . CVE-2023-45289: oadp-velero-container : Golang net/http/cookiejar : Incorrect forwarding of sensitive headers and cookies on HTTP redirect A flaw was found in the net/http/cookiejar Golang standard library package, which impacts versions of OADP. When following an HTTP redirect to a domain that is not a subdomain match or exact match of the initial domain, an http.Client does not forward sensitive headers such as Authorization or Cookie . A maliciously crafted HTTP redirect could cause sensitive headers to be unexpectedly forwarded. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2023-45289 . CVE-2024-24783: oadp-velero-container : Golang crypto/x509 : Verify panics on certificates with an unknown public key algorithm A flaw was found in the crypto/x509 Golang standard library package, which impacts versions of OADP. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert . The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2024-24783 . CVE-2024-24784: oadp-velero-plugin-container : Golang net/mail : Comments in display names are incorrectly handled A flaw was found in the net/mail Golang standard library package, which impacts versions of OADP. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. Because this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2024-24784 . CVE-2024-24785: oadp-velero-container : Golang: html/template: errors returned from MarshalJSON methods may break template escaping A flaw was found in the html/template Golang standard library package, which impacts versions of OADP. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2024-24785 . 
For a complete list of all issues resolved in this release, see the list of OADP 1.3.2 resolved issues in Jira. 4.2.2.5.2. Known issues Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter in the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 4.2.2.6. OADP 1.3.1 release notes The OpenShift API for Data Protection (OADP) 1.3.1 release notes lists new features and resolved issues. 4.2.2.6.1. New features OADP 1.3.0 Data Mover is now fully supported The OADP built-in Data Mover, introduced in OADP 1.3.0 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads. 4.2.2.6.2. Resolved issues IBM Cloud(R) Object Storage is now supported as a backup storage provider IBM Cloud(R) Object Storage is one of the AWS S3 compatible backup storage providers, which was unsupported previously. With this update, IBM Cloud(R) Object Storage is now supported as an AWS S3 compatible backup storage provider. OADP-3788 OADP operator now correctly reports the missing region error Previously, when you specified profile:default without specifying the region in the AWS Backup Storage Location (BSL) configuration, the OADP operator failed to report the missing region error on the Data Protection Application (DPA) custom resource (CR). This update corrects validation of DPA BSL specification for AWS. As a result, the OADP Operator reports the missing region error. OADP-3044 Custom labels are not removed from the openshift-adp namespace Previously, the openshift-adp-controller-manager pod would reset the labels attached to the openshift-adp namespace. This caused synchronization issues for applications requiring custom labels such as Argo CD, leading to improper functionality. With this update, this issue is fixed and custom labels are not removed from the openshift-adp namespace. OADP-3189 OADP must-gather image collects CRDs Previously, the OADP must-gather image did not collect the custom resource definitions (CRDs) shipped by OADP. Consequently, you could not use the omg tool to extract data in the support shell. With this fix, the must-gather image now collects CRDs shipped by OADP and can use the omg tool to extract data. OADP-3229 Garbage collection has the correct description for the default frequency value Previously, the garbage-collection-frequency field had a wrong description for the default frequency value. With this update, garbage-collection-frequency has a correct value of one hour for the gc-controller reconciliation default frequency. OADP-3486 FIPS Mode flag is available in OperatorHub By setting the fips-compliant flag to true , the FIPS mode flag is now added to the OADP Operator listing in OperatorHub. This feature was enabled in OADP 1.3.0 but did not show up in the Red Hat Container catalog as being FIPS enabled. OADP-3495 CSI plugin does not panic with a nil pointer when csiSnapshotTimeout is set to a short duration Previously, when the csiSnapshotTimeout parameter was set to a short duration, the CSI plugin encountered the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference . With this fix, the backup fails with the following error: Timed out awaiting reconciliation of volumesnapshot . 
OADP-3069 For a complete list of all issues resolved in this release, see the list of OADP 1.3.1 resolved issues in Jira. 4.2.2.6.3. Known issues Backup and storage restrictions for Single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms Review the following backup and storage related restrictions for Single-node OpenShift clusters that are deployed on IBM Power(R) and IBM Z(R) platforms: Storage Only NFS storage is currently compatible with single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms. Backup Only the backing up applications with File System Backup such as kopia and restic are supported for backup and restore operations. OADP-3787 Cassandra application pods enter in the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter in the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods with any error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 4.2.2.7. OADP 1.3.0 release notes The OpenShift API for Data Protection (OADP) 1.3.0 release notes lists new features, resolved issues and bugs, and known issues. 4.2.2.7.1. New features Velero built-in DataMover Velero built-in DataMover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OADP 1.3 includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and to write to the Unified Repository. Backing up applications with File System Backup: Kopia or Restic Velero's File System Backup (FSB) supports two backup libraries: the Restic path and the Kopia path. Velero allows users to select between the two paths. For backup, specify the path during the installation through the uploader-type flag. The valid value is either restic or kopia . This field defaults to kopia if the value is not specified. The selection cannot be changed after the installation. GCP Cloud authentication Google Cloud Platform (GCP) authentication enables you to use short-lived Google credentials. GCP with Workload Identity Federation enables you to use Identity and Access Management (IAM) to grant external identities IAM roles, including the ability to impersonate service accounts. This eliminates the maintenance and security risks associated with service account keys. AWS ROSA STS authentication You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to backup and restore application data. ROSA provides seamless integration with a wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivering of differentiating experiences to your customers. 
You can subscribe to the service directly from your AWS account. After the clusters are created, you can operate your clusters by using the OpenShift web console. The ROSA service also uses OpenShift APIs and command-line interface (CLI) tools. 4.2.2.7.2. Resolved issues ACM applications were removed and re-created on managed clusters after restore Applications on managed clusters were deleted and re-created upon restore activation. OpenShift API for Data Protection (OADP 1.2) backup and restore process is faster than the older versions. The OADP performance change caused this behavior when restoring ACM resources. Therefore, some resources were restored before other resources, which caused the removal of the applications from managed clusters. OADP-2686 Restic restore was partially failing due to Pod Security standard During interoperability testing, OpenShift Container Platform 4.14 had the pod Security mode set to enforce , which caused the pod to be denied. This was caused due to the restore order. The pod was getting created before the security context constraints (SCC) resource, since the pod violated the podSecurity standard, it denied the pod. When setting the restore priority field on the Velero server, restore is successful. OADP-2688 Possible pod volume backup failure if Velero is installed in several namespaces There was a regression in Pod Volume Backup (PVB) functionality when Velero was installed in several namespaces. The PVB controller was not properly limiting itself to PVBs in its own namespace. OADP-2308 OADP Velero plugins returning "received EOF, stopping recv loop" message In OADP, Velero plugins were started as separate processes. When the Velero operation completes, either successfully or not, they exit. Therefore, if you see a received EOF, stopping recv loop messages in debug logs, it does not mean an error occurred, it means that a plugin operation has completed. OADP-2176 CVE-2023-39325 Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) In releases of OADP, the HTTP/2 protocol was susceptible to a denial of service attack because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This resulted in a denial of service due to server resource consumption. For more information, see CVE-2023-39325 (Rapid Reset Attack) For a complete list of all issues resolved in this release, see the list of OADP 1.3.0 resolved issues in Jira. 4.2.2.7.3. Known issues CSI plugin errors on nil pointer when csiSnapshotTimeout is set to a short duration The CSI plugin errors on nil pointer when csiSnapshotTimeout is set to a short duration. Sometimes it succeeds to complete the snapshot within a short duration, but often it panics with the backup PartiallyFailed with the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference . Backup is marked as PartiallyFailed when volumeSnapshotContent CR has an error If any of the VolumeSnapshotContent CRs have an error related to removing the VolumeSnapshotBeingCreated annotation, it moves the backup to the WaitingForPluginOperationsPartiallyFailed phase. 
OADP-2871
Performance issues when restoring 30,000 resources for the first time
When restoring 30,000 resources for the first time, without an existing-resource-policy, it takes twice as long to restore them as it takes during the second and third tries with an existing-resource-policy set to update. OADP-3071
Post restore hooks might start running before the DataDownload operation has released the related PV
Due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the related pods' persistent volumes (PVs) are released by the Data Mover persistent volume claim (PVC).
GCP-Workload Identity Federation VSL backup PartiallyFailed
VSL backup is PartiallyFailed when GCP workload identity is configured on GCP.
For a complete list of all known issues in this release, see the list of OADP 1.3.0 known issues in Jira.
4.2.2.7.4. Upgrade notes
Note
Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.
4.2.2.7.4.1. Changes from OADP 1.2 to 1.3
The Velero server has been updated from version 1.11 to 1.12.
OpenShift API for Data Protection (OADP) 1.3 uses the Velero built-in Data Mover instead of the VolumeSnapshotMover (VSM) or the Volsync Data Mover. This changes the following:
The spec.features.dataMover field and the VSM plugin are not compatible with OADP 1.3, and you must remove the configuration from the DataProtectionApplication (DPA) configuration.
The Volsync Operator is no longer required for Data Mover functionality, and you can remove it.
The custom resource definitions volumesnapshotbackups.datamover.oadp.openshift.io and volumesnapshotrestores.datamover.oadp.openshift.io are no longer required, and you can remove them.
The secrets used for the OADP-1.2 Data Mover are no longer required, and you can remove them.
OADP 1.3 supports Kopia, which is an alternative file system backup tool to Restic. To employ Kopia, use the new spec.configuration.nodeAgent field as shown in the following example:
Example
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
# ...
The spec.configuration.restic field is deprecated in OADP 1.3 and will be removed in a future version of OADP. To avoid seeing deprecation warnings, remove the restic key and its values, and use the following new syntax:
Example
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: restic
# ...
Note
In a future OADP release, it is planned that the kopia tool will become the default uploaderType value.
4.2.2.7.4.2. Upgrading from OADP 1.2 Technology Preview Data Mover
OpenShift API for Data Protection (OADP) 1.2 Data Mover backups cannot be restored with OADP 1.3. To prevent a gap in the data protection of your applications, complete the following steps before upgrading to OADP 1.3:
Procedure
If your cluster backups are sufficient and Container Storage Interface (CSI) storage is available, back up the applications with a CSI backup.
If you require off-cluster backups:
Back up the applications with a file system backup that uses the --default-volumes-to-fs-backup=true or backup.spec.defaultVolumesToFsBackup options (a sketch follows this procedure).
Back up the applications with your object storage plugins, for example, velero-plugin-for-aws.
Note
The default timeout value for the Restic file system backup is one hour. In OADP 1.3.1 and later, the default timeout value for Restic and Kopia is four hours.
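A sketch of such a file system backup using the CLI flag named in the procedure above; the backup name and namespace are placeholders:
$ velero backup create <backup_name> \
  --include-namespaces <application_namespace> \
  --default-volumes-to-fs-backup=true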
Important
To restore OADP 1.2 Data Mover backups, you must uninstall OADP, and install and configure OADP 1.2.
4.2.2.7.4.3. Backing up the DPA configuration
You must back up your current DataProtectionApplication (DPA) configuration.
Procedure
Save your current DPA configuration by running the following command:
Example
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
4.2.2.7.4.4. Upgrading the OADP Operator
Use the following sequence when upgrading the OpenShift API for Data Protection (OADP) Operator.
Procedure
Change your subscription channel for the OADP Operator from stable-1.2 to stable-1.3.
Allow time for the Operator and containers to update and restart.
Additional resources
Updating installed Operators
4.2.2.7.4.5. Converting DPA to the new version
If you need to move backups off cluster with the Data Mover, reconfigure the DataProtectionApplication (DPA) manifest as follows.
Procedure
Click Operators → Installed Operators and select the OADP Operator.
In the Provided APIs section, click View more.
Click Create instance in the DataProtectionApplication box.
Click YAML View to display the current DPA parameters.
Example current DPA
spec:
  configuration:
    features:
      dataMover:
        enable: true
        credentialName: dm-credentials
    velero:
      defaultPlugins:
      - vsm
      - csi
      - openshift
# ...
Update the DPA parameters:
Remove the features.dataMover key and values from the DPA.
Remove the VolumeSnapshotMover (VSM) plugin.
Add the nodeAgent key and values.
Example updated DPA
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
      - csi
      - openshift
# ...
Wait for the DPA to reconcile successfully.
4.2.2.7.4.6. Verifying the upgrade
Use the following procedure to verify the upgrade.
Procedure
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
Example output
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
Example output
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify the type is set to Reconciled.
Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example output
NAME PHASE LAST VALIDATED AGE DEFAULT
dpa-sample-1 Available 1s 3d16h true
In OADP 1.3 you can start data movement off cluster on a per-backup basis, rather than through the DataProtectionApplication (DPA) configuration.
Example
$ velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true
Example
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup
  namespace: openshift-adp
spec:
  snapshotMoveData: true
  includedNamespaces:
  - mysql-persistent
  storageLocation: dpa-sample-1
  ttl: 720h0m0s
# ...
4.3. OADP performance
4.3.1. OADP recommended network settings
For a supported experience with OpenShift API for Data Protection (OADP), you should have a stable and resilient network across OpenShift Container Platform nodes, S3 storage, and in supported cloud environments that meet OpenShift Container Platform network requirement recommendations.
To ensure successful backup and restore operations for deployments with remote S3 buckets located off-cluster with suboptimal data paths, it is recommended that your network settings meet the following minimum requirements in such less optimal conditions:
Bandwidth (network upload speed to object storage): Greater than 2 Mbps for small backups and 10-100 Mbps depending on the data volume for larger backups.
Packet loss: 1%
Packet corruption: 1%
Latency: 100ms
Ensure that your OpenShift Container Platform network performs optimally and meets OpenShift Container Platform network requirements.
Important
Although Red Hat provides support for standard backup and restore failures, it does not provide support for failures caused by network settings that do not meet the recommended thresholds.
4.4. OADP features and plugins
OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications.
The default plugins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources.
4.4.1. OADP features
OpenShift API for Data Protection (OADP) supports the following features:
Backup
You can use OADP to back up all applications on the OpenShift Platform, or you can filter the resources by type, namespace, or label. OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic.
Note
You must exclude Operators from the backup of an application for backup and restore to succeed.
Restore
You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the objects by namespace, PV, or label.
Note
You must exclude Operators from the backup of an application for backup and restore to succeed.
Schedule
You can schedule backups at specified intervals.
Hooks
You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container.
4.4.2. OADP plugins
The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create custom plugins based on the Velero plugins.
OADP also provides plugins for OpenShift Container Platform resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots.
Table 4.3. OADP plugins
aws: Backs up and restores Kubernetes objects (storage location: AWS S3). Backs up and restores volumes with snapshots (storage location: AWS EBS).
azure: Backs up and restores Kubernetes objects (storage location: Microsoft Azure Blob storage). Backs up and restores volumes with snapshots (storage location: Microsoft Azure Managed Disks).
gcp: Backs up and restores Kubernetes objects (storage location: Google Cloud Storage). Backs up and restores volumes with snapshots (storage location: Google Compute Engine Disks).
openshift: Backs up and restores OpenShift Container Platform resources. [1] (storage location: object store)
kubevirt: Backs up and restores OpenShift Virtualization resources. [2] (storage location: object store)
csi: Backs up and restores volumes with CSI snapshots. [3] (storage location: cloud storage that supports CSI snapshots)
vsm: VolumeSnapshotMover relocates snapshots from the cluster into an object store to be used during a restore process to recover stateful applications, in situations such as cluster deletion. [4] (storage location: object store)
[1] Mandatory.
[2] Virtual machine disks are backed up with CSI snapshots or Restic.
[3] The csi plugin uses the Kubernetes CSI snapshot API. OADP 1.1 or later uses snapshot.storage.k8s.io/v1; OADP 1.0 uses snapshot.storage.k8s.io/v1beta1.
[4] OADP 1.2 only.
4.4.3. About OADP Velero plugins
You can configure two types of plugins when you install Velero:
Default cloud provider plugins
Custom plugins
Both types of plugin are optional, but most users configure at least one cloud provider plugin.
4.4.3.1. Default Velero cloud provider plugins
You can install any of the following default Velero cloud provider plugins when you configure the oadp_v1alpha1_dpa.yaml file during deployment:
aws (Amazon Web Services)
gcp (Google Cloud Platform)
azure (Microsoft Azure)
openshift (OpenShift Velero plugin)
csi (Container Storage Interface)
kubevirt (KubeVirt)
You specify the desired default plugins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the openshift, aws, azure, and gcp plugins:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - azure
      - gcp
4.4.3.2. Custom Velero plugins
You can install a custom Velero plugin by specifying the plugin image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment.
You specify the desired custom plugins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the default openshift, azure, and gcp plugins and a custom plugin that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - azure
      - gcp
      customPlugins:
      - name: custom-plugin-example
        image: quay.io/example-repo/custom-velero-plugin
4.4.3.3. Velero plugins returning "received EOF, stopping recv loop" message
Note
Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred.
4.4.4. Supported architectures for OADP
OpenShift API for Data Protection (OADP) supports the following architectures:
AMD64
ARM64
PPC64le
s390x
Note
OADP 1.2.0 and later versions support the ARM64 architecture.
4.4.5. OADP support for IBM Power and IBM Z
OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to IBM Power(R) and to IBM Z(R).
OADP 1.1.7 was tested successfully against OpenShift Container Platform 4.11 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.1.7 in terms of backup locations for these systems.
OADP 1.2.3 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.2.3 in terms of backup locations for these systems.
OADP 1.3.6 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.3.6 in terms of backup locations for these systems. OADP 1.4.2 was tested successfully against OpenShift Container Platform 4.14, 4.15, and 4.16 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.4.2 in terms of backup locations for these systems. 4.4.5.1. OADP support for target backup locations using IBM Power IBM Power(R) running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.7 against all S3 backup location targets, which are not AWS, as well. IBM Power(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.12, 4.13. 4.14, and 4.15, and OADP 1.2.3 against all S3 backup location targets, which are not AWS, as well. IBM Power(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.6 against all S3 backup location targets, which are not AWS, as well. IBM Power(R) running with OpenShift Container Platform 4.14, 4.15, and 4.16, and OADP 1.4.2 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.14, 4.15, and 4.16, and OADP 1.4.2 against all S3 backup location targets, which are not AWS, as well. 4.4.5.2. OADP testing and support for target backup locations using IBM Z IBM Z(R) running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.7 against all S3 backup location targets, which are not AWS, as well. IBM Z(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.12, 4.13, 4.14 and 4.15, and OADP 1.2.3 against all S3 backup location targets, which are not AWS, as well. IBM Z(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.13 4.14, and 4.15, and 1.3.6 against all S3 backup location targets, which are not AWS, as well. IBM Z(R) running with OpenShift Container Platform 4.14, 4.15, and 4.16, and 1.4.2 was tested successfully against an AWS S3 backup location target. 
Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.14, 4.15, and 4.16, and OADP 1.4.2 against all S3 backup location targets, which are not AWS, as well.
4.4.5.2.1. Known issue of OADP using IBM Power(R) and IBM Z(R) platforms
Currently, there are backup method restrictions for Single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms. Only NFS storage is currently compatible with Single-node OpenShift clusters on these platforms. In addition, only the File System Backup (FSB) methods such as Kopia and Restic are supported for backup and restore operations. There is currently no workaround for this issue.
4.4.6. OADP plugins known issues
The following section describes known issues in OpenShift API for Data Protection (OADP) plugins:
4.4.6.1. Velero plugin panics during imagestream backups due to a missing secret
When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation, does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret.
When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error:
2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...
4.4.6.1.1. Workaround to avoid the panic error
To avoid the Velero plugin panic error, perform the following steps:
Label the custom BSL with the relevant label:
$ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl
After the BSL is labeled, wait until the DPA reconciles.
Note
You can force the reconciliation by making any minor change to the DPA itself.
When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it:
$ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'
4.4.6.2. OpenShift ADP Controller segmentation fault
If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault.
You can have either velero or cloudstorage defined, because they are mutually exclusive fields.
If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails.
If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails.
For more information about this issue, see OADP-1054.
4.4.6.2.1. OpenShift ADP Controller segmentation fault workaround
You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault.
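To illustrate the valid shape, the following is a minimal sketch of a DPA that defines velero and leaves cloudstorage unset; the metadata and plugin list are illustrative:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws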
Backing up an application using OADP and ODF In this use case, you back up an application by using OADP and store the backup in object storage provided by Red Hat OpenShift Data Foundation (ODF). You create an object bucket claim (OBC) to configure the backup storage location. You use ODF to configure an Amazon S3-compatible object storage bucket. ODF provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location. You use the NooBaa MCG service with OADP by using the aws provider plugin. You configure the Data Protection Application (DPA) with the backup storage location (BSL). You create a backup custom resource (CR) and specify the application namespace to back up. You create and verify the backup. Prerequisites You installed the OADP Operator. You installed the ODF Operator. You have an application with a database running in a separate namespace. Procedure Create an OBC manifest file to request a NooBaa MCG bucket as shown in the following example: Example OBC apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2 1 The name of the object bucket claim. 2 The name of the bucket. Create the OBC by running the following command: USD oc create -f <obc_file_name> 1 1 Specify the file name of the object bucket claim manifest. When you create an OBC, ODF creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command: USD oc extract --to=- cm/test-obc 1 1 test-obc is the name of the OBC.
Example output # BUCKET_NAME backup-c20...41fd # BUCKET_PORT 443 # BUCKET_REGION # BUCKET_SUBREGION # BUCKET_HOST s3.openshift-storage.svc To get the bucket credentials from the generated secret , run the following command: USD oc extract --to=- secret/test-obc Example output # AWS_ACCESS_KEY_ID ebYR....xLNMc # AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym Get the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace by running the following command: USD oc get route s3 -n openshift-storage Create a cloud-credentials file with the object bucket credentials as shown in the following example: [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create the cloud-credentials secret with the cloud-credentials file content by running the following command: USD oc create secret generic \ cloud-credentials \ -n openshift-adp \ --from-file cloud=cloud-credentials Configure the Data Protection Application (DPA) as shown in the following example: Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: "true" insecureSkipTLSVerify: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp 1 Set to true to use the OADP Data Mover to enable movement of Container Storage Interface (CSI) snapshots to a remote object storage. 2 This is the S3 URL of ODF storage. 3 Specify the bucket name. Create the DPA by running the following command: USD oc apply -f <dpa_filename> Verify that the DPA is created successfully by running the following command. In the example output, you can see the status object has the type field set to Reconciled . This means that the DPA was created successfully. USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure a backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. Create the backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output.
USD oc describe backup test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.5.2. OpenShift API for Data Protection (OADP) restore use case Following is a use case for using OADP to restore a backup to a different namespace. 4.5.2.1. Restoring an application to a different namespace using OADP Restore a backup of an application by using OADP to a new target namespace, test-restore-application . To restore a backup, you create a restore custom resource (CR) as shown in the following example. In the restore CR, the source namespace refers to the application namespace that you included in the backup. You then verify the restore by changing your project to the new restored namespace and verifying the resources. Prerequisites You installed the OADP Operator. You have the backup of an application to be restored. Procedure Create a restore CR as shown in the following example: Example restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3 1 The name of the restore CR. 2 Specify the name of the backup. 3 namespaceMapping maps the source application namespace to the target application namespace. Specify the application namespace that you backed up. test-restore-application is the target namespace where you want to restore the backup. Apply the restore CR by running the following command: USD oc apply -f <restore_cr_filename> Verification Verify that the restore is in the Completed phase by running the following command: USD oc describe restores.velero.io <restore_name> -n openshift-adp Change to the restored namespace test-restore-application by running the following command: USD oc project test-restore-application Verify the restored resources such as persistent volume claim (pvc), service (svc), deployment, secret, and config map by running the following command: USD oc get pvc,svc,deployment,secret,configmap Example output NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s 4.5.3. Including a self-signed CA certificate during backup You can include a self-signed Certificate Authority (CA) certificate in the Data Protection Application (DPA) and then back up an application. You store the backup in a NooBaa bucket provided by Red Hat OpenShift Data Foundation (ODF). 4.5.3.1. 
Backing up an application and its self-signed CA certificate The s3.openshift-storage.svc service, provided by ODF, uses a Transport Layer Security protocol (TLS) certificate that is signed with the self-signed service CA. To prevent a certificate signed by unknown authority error, you must include a self-signed CA certificate in the backup storage location (BSL) section of the DataProtectionApplication custom resource (CR). For this situation, you must complete the following tasks: Request a NooBaa bucket by creating an object bucket claim (OBC). Extract the bucket details. Include a self-signed CA certificate in the DataProtectionApplication CR. Back up an application. Prerequisites You installed the OADP Operator. You installed the ODF Operator. You have an application with a database running in a separate namespace. Procedure Create an OBC manifest to request a NooBaa bucket as shown in the following example: Example ObjectBucketClaim CR apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2 1 Specifies the name of the object bucket claim. 2 Specifies the name of the bucket. Create the OBC by running the following command: USD oc create -f <obc_file_name> When you create an OBC, ODF creates a secret and a ConfigMap with the same name as the object bucket claim. The secret object contains the bucket credentials, and the ConfigMap object contains information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command: USD oc extract --to=- cm/test-obc 1 1 The name of the OBC is test-obc . Example output # BUCKET_NAME backup-c20...41fd # BUCKET_PORT 443 # BUCKET_REGION # BUCKET_SUBREGION # BUCKET_HOST s3.openshift-storage.svc To get the bucket credentials from the secret object, run the following command: USD oc extract --to=- secret/test-obc Example output # AWS_ACCESS_KEY_ID ebYR....xLNMc # AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym Create a cloud-credentials file with the object bucket credentials by using the following example configuration: [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create the cloud-credentials secret with the cloud-credentials file content by running the following command: USD oc create secret generic \ cloud-credentials \ -n openshift-adp \ --from-file cloud=cloud-credentials Extract the service CA certificate from the openshift-service-ca.crt config map by running the following command. Ensure that you encode the certificate in Base64 format and note the value to use in a later step. USD oc get cm/openshift-service-ca.crt \ -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo Example output LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0...
....gpwOHMwaG9CRmk5a3....FLS0tLS0K Configure the DataProtectionApplication CR manifest file with the bucket name and CA certificate as shown in the following example: Example DataProtectionApplication CR apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: "true" insecureSkipTLSVerify: "false" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3 1 The insecureSkipTLSVerify flag can be set to either true or false . If set to true , SSL/TLS security is disabled. If set to false , SSL/TLS security is enabled. 2 Specify the name of the bucket extracted in an earlier step. 3 Copy and paste the Base64 encoded certificate from the earlier step. Create the DataProtectionApplication CR by running the following command: USD oc apply -f <dpa_filename> Verify that the DataProtectionApplication CR is created successfully by running the following command: USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure the Backup CR by using the following example: Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. Create the Backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the Backup object is in the Completed phase by running the following command: USD oc describe backup test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.5.4. Using the legacy-aws Velero plugin If you are using an AWS S3-compatible backup storage location, you might get a SignatureDoesNotMatch error while backing up your application. This error occurs because some backup storage locations still use the older versions of the S3 APIs, which are incompatible with the newer AWS SDK for Go V2. To resolve this issue, you can use the legacy-aws Velero plugin in the DataProtectionApplication custom resource (CR). The legacy-aws Velero plugin uses the older AWS SDK for Go V1, which is compatible with the legacy S3 APIs, ensuring successful backups. 4.5.4.1.
Using the legacy-aws Velero plugin in the DataProtectionApplication CR In the following use case, you configure the DataProtectionApplication CR with the legacy-aws Velero plugin and then back up an application. Note Depending on the backup storage location you choose, you can use either the legacy-aws or the aws plugin in your DataProtectionApplication CR. If you use both of the plugins in the DataProtectionApplication CR, the following error occurs: aws and legacy-aws can not be both specified in DPA spec.configuration.velero.defaultPlugins . Prerequisites You have installed the OADP Operator. You have configured an AWS S3-compatible object storage as a backup location. You have an application with a database running in a separate namespace. Procedure Configure the DataProtectionApplication CR to use the legacy-aws Velero plugin as shown in the following example: Example DataProtectionApplication CR apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - legacy-aws 1 - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: "true" insecureSkipTLSVerify: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp 1 Use the legacy-aws plugin. 2 Specify the bucket name. Create the DataProtectionApplication CR by running the following command: USD oc apply -f <dpa_filename> Verify that the DataProtectionApplication CR is created successfully by running the following command. In the example output, you can see the status object has the type field set to Reconciled and the status field set to "True" . That status indicates that the DataProtectionApplication CR is successfully created. USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure a Backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. Create the Backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output. 
USD oc describe backups.velero.io test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.6. Installing and configuring OADP 4.6.1. About installing OADP As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types: Amazon Web Services Microsoft Azure Google Cloud Platform Multicloud Object Gateway IBM Cloud(R) Object Storage S3 AWS S3 compatible object storage, such as Multicloud Object Gateway or MinIO You can configure multiple backup storage locations within the same namespace for each individual OADP deployment. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation . The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage. You can back up persistent volumes (PVs) by using snapshots or a File System Backup (FSB). To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud provider, such as OpenShift Data Foundation Note If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1.x. OADP 1.0.x does not support CSI backup on OCP 4.11 and later. OADP 1.0.x includes Velero 1.7.x and expects the API group snapshot.storage.k8s.io/v1beta1 , which is not present on OCP 4.11 and later.
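Before you choose an OADP version, you can confirm which snapshot API group versions your cluster serves. This quick check is not part of the official procedure; it only lists the API versions that the cluster advertises: USD oc api-versions | grep snapshot.storage.k8s.io On OCP 4.11 and later, you can expect the output to include snapshot.storage.k8s.io/v1 rather than snapshot.storage.k8s.io/v1beta1 .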
If your cloud provider does not support snapshots or if your storage is NFS, you can back up applications with File System Backup (FSB), which uses Kopia or Restic, on object storage. You create a default Secret and then you install the Data Protection Application. 4.6.1.1. AWS S3 compatible backup storage providers OADP is compatible with many object storage providers for use with different backup and snapshot operations. Several object storage providers are fully supported, several are unsupported but known to work, and some have known limitations. 4.6.1.1.1. Supported backup storage providers The following AWS S3 compatible object storage providers are fully supported by OADP through the AWS plugin for use as backup storage locations: MinIO Multicloud Object Gateway (MCG) Amazon Web Services (AWS) S3 IBM Cloud(R) Object Storage S3 Ceph RADOS Gateway (Ceph Object Gateway) Red Hat Container Storage Red Hat OpenShift Data Foundation Google Cloud Platform (GCP) Microsoft Azure Note Google Cloud Platform (GCP) and Microsoft Azure have their own Velero object store plugins. 4.6.1.1.2. Unsupported backup storage providers The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin for use as backup storage locations; however, they are unsupported and have not been tested by Red Hat: Oracle Cloud DigitalOcean NooBaa, unless installed using Multicloud Object Gateway (MCG) Tencent Cloud Quobyte Cloudian HyperStore Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . 4.6.1.1.3. Backup storage providers with known limitations The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin with a limited feature set: Swift works as a backup storage location, but it is not compatible with Restic for filesystem-based volume backup and restore. 4.6.1.2. Configuring Multicloud Object Gateway (MCG) for disaster recovery on OpenShift Data Foundation If you use cluster storage for your MCG bucket backupStorageLocation on OpenShift Data Foundation, configure MCG as an external object store. Warning Failure to configure MCG as an external object store might lead to backups not being available. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Procedure Configure MCG as an external object store as described in Adding storage resources for hybrid or Multicloud . Additional resources Overview of backup and snapshot locations in the Velero documentation 4.6.1.3. About OADP update channels When you install an OADP Operator, you choose an update channel . This channel determines which upgrades to the OADP Operator and to Velero you receive. You can switch channels at any time. The following update channels are available: The stable channel is now deprecated. The stable channel contains the patches (z-stream updates) of the OADP ClusterServiceVersion for OADP.v1.1.z and older versions from OADP.v1.0.z . The stable-1.0 channel is deprecated and is not supported.
The stable-1.1 channel is deprecated and is not supported. The stable-1.2 channel is deprecated and is not supported. The stable-1.3 channel contains OADP.v1.3.z , the most recent OADP 1.3 ClusterServiceVersion . The stable-1.4 channel contains OADP.v1.4.z , the most recent OADP 1.4 ClusterServiceVersion . For more information, see OpenShift Operator Life Cycles . Which update channel is right for you? The stable channel is now deprecated. If you are already using the stable channel, you will continue to get updates from OADP.v1.1.z . Choose the stable-1.y update channel to install OADP 1.y and to continue receiving patches for it. If you choose this channel, you will receive all z-stream patches for version 1.y.z. When must you switch update channels? If you have OADP 1.y installed, and you want to receive patches only for that y-stream, you must switch from the stable update channel to the stable-1.y update channel. You will then receive all z-stream patches for version 1.y.z. If you have OADP 1.0 installed, want to upgrade to OADP 1.1, and then receive patches only for OADP 1.1, you must switch from the stable-1.0 update channel to the stable-1.1 update channel. You will then receive all z-stream patches for version 1.1.z. If you have OADP 1.y installed, with y greater than 0, and want to switch to OADP 1.0, you must uninstall your OADP Operator and then reinstall it using the stable-1.0 update channel. You will then receive all z-stream patches for version 1.0.z. Note You cannot switch from OADP 1.y to OADP 1.0 by switching update channels. You must uninstall the Operator and then reinstall it. 4.6.1.4. Installation of OADP on multiple namespaces You can install the OpenShift API for Data Protection (OADP) into multiple namespaces on the same cluster so that multiple project owners can manage their own OADP instance. This use case has been validated with File System Backup (FSB) and Container Storage Interface (CSI). You install each instance of OADP as specified by the per-platform procedures contained in this document with the following additional requirements: All deployments of OADP on the same cluster must be the same version, for example, 1.4.0. Installing different versions of OADP on the same cluster is not supported. Each individual deployment of OADP must have a unique set of credentials and at least one BackupStorageLocation configuration. You can also use multiple BackupStorageLocation configurations within the same namespace. By default, each OADP deployment has cluster-level access across namespaces. OpenShift Container Platform administrators need to carefully review potential impacts, such as not backing up and restoring to and from the same namespace concurrently. Additional resources Cluster service version 4.6.1.5. Velero CPU and memory requirements based on collected data The following recommendations are based on observations of performance made in the scale and performance lab. The backup and restore resources can be impacted by the type of plugin, the amount of resources required by that backup or restore, and the respective data contained in the persistent volumes (PVs) related to those resources.
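These recommendations translate directly into the resourceAllocations block of the Data Protection Application (DPA), which is described in detail later in this document. The following is a minimal sketch, assuming the average-usage Velero values from the table that follows:

configuration:
  velero:
    podConfig:
      resourceAllocations:
        limits:
          cpu: "1"
          memory: 1024Mi
        requests:
          cpu: 200m
          memory: 256Mi

4.6.1.5.1.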
CPU and memory requirement for configurations

| Configuration types | Average usage [1] | Large usage [2] | resourceTimeouts |
| --- | --- | --- | --- |
| CSI | Velero: CPU - Request 200m, Limits 1000m; Memory - Request 256Mi, Limits 1024Mi | Velero: CPU - Request 200m, Limits 2000m; Memory - Request 256Mi, Limits 2048Mi | N/A |
| Restic | [3] Restic: CPU - Request 1000m, Limits 2000m; Memory - Request 16Gi, Limits 32Gi | [4] Restic: CPU - Request 2000m, Limits 8000m; Memory - Request 16Gi, Limits 40Gi | 900m |
| [5] Data Mover | N/A | N/A | 10m - average usage; 60m - large usage |

[1] Average usage - use these settings for most usage situations. [2] Large usage - use these settings for large usage situations, such as a large PV (500GB Usage), multiple namespaces (100+), or many pods within a single namespace (2000 pods+), and for optimal performance for backup and restore involving large datasets. [3] Restic resource usage corresponds to the amount of data, and type of data. For example, many small files or large amounts of data can cause Restic to use large amounts of resources. [4] The Velero documentation references 500m as a supplied default; for most of our testing, we found a 200m request suitable with a 1000m limit. As cited in the Velero documentation, exact CPU and memory usage is dependent on the scale of files and directories, in addition to environmental limitations. Increasing the CPU has a significant impact on improving backup and restore times. [5] Data Mover - Data Mover default resourceTimeout is 10m. Our tests show that for restoring a large PV (500GB usage), it is required to increase the resourceTimeout to 60m. Note The resource requirements listed throughout the guide are for average usage only. For large usage, adjust the settings as described in the table above. 4.6.1.5.2. NodeAgent CPU for large usage Testing shows that increasing NodeAgent CPU can significantly improve backup and restore times when using OpenShift API for Data Protection (OADP). Important It is not recommended to use Kopia without limits in production environments on nodes running production workloads due to Kopia's aggressive consumption of resources. However, running Kopia with limits that are too low results in CPU limiting and slow backups and restore situations. Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications. You can set these limits in Ceph MDS pods by following the procedure in Changing the CPU and memory resources on the rook-ceph pods . You need to add the following lines to the storage cluster Custom Resource (CR) to set the limits: resources: mds: limits: cpu: "3" memory: 128Gi requests: cpu: "3" memory: 8Gi 4.6.2. Installing the OADP Operator You can install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.14 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.14 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators → OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators → Installed Operators to verify the installation.
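If you prefer to verify the installation from the command line, you can list the ClusterServiceVersion objects in the openshift-adp namespace. This is a quick check rather than part of the official procedure, and the exact CSV name varies with the installed OADP version: USD oc get csv -n openshift-adp
4.6.2.1.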
OADP-Velero-OpenShift Container Platform version relationship

| OADP version | Velero version | OpenShift Container Platform version |
| --- | --- | --- |
| 1.1.0 | 1.9 | 4.9 and later |
| 1.1.1 | 1.9 | 4.9 and later |
| 1.1.2 | 1.9 | 4.9 and later |
| 1.1.3 | 1.9 | 4.9 and later |
| 1.1.4 | 1.9 | 4.9 and later |
| 1.1.5 | 1.9 | 4.9 and later |
| 1.1.6 | 1.9 | 4.11 and later |
| 1.1.7 | 1.9 | 4.11 and later |
| 1.2.0 | 1.11 | 4.11 and later |
| 1.2.1 | 1.11 | 4.11 and later |
| 1.2.2 | 1.11 | 4.11 and later |
| 1.2.3 | 1.11 | 4.11 and later |
| 1.3.0 | 1.12 | 4.10-4.15 |
| 1.3.1 | 1.12 | 4.10-4.15 |
| 1.3.2 | 1.12 | 4.10-4.15 |
| 1.4.0 | 1.14 | 4.14-4.18 |
| 1.4.1 | 1.14 | 4.14-4.18 |
| 1.4.2 | 1.14 | 4.14-4.18 |

4.6.3. Configuring the OpenShift API for Data Protection with AWS S3 compatible storage You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) S3 compatible storage by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure AWS for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.3.1. About Amazon Simple Storage Service, Identity and Access Management, and GovCloud Amazon Simple Storage Service (Amazon S3) is a storage solution of Amazon for the internet. As an authorized user, you can use this service to store and retrieve any amount of data whenever you want, from anywhere on the web. You securely control access to Amazon S3 and other Amazon services by using the AWS Identity and Access Management (IAM) web service. You can use IAM to manage permissions that control which AWS resources users can access. You use IAM to both authenticate, or verify that a user is who they claim to be, and to authorize, or grant permissions to use resources. AWS GovCloud (US) is an Amazon storage solution developed to meet the stringent and specific data security requirements of the United States Federal Government. AWS GovCloud (US) works the same as Amazon S3 except for the following: You cannot copy the contents of an Amazon S3 bucket in the AWS GovCloud (US) regions directly to or from another AWS region. If you use Amazon S3 policies, use the AWS GovCloud (US) Amazon Resource Name (ARN) identifier to unambiguously specify a resource across all of AWS, such as in IAM policies, Amazon S3 bucket names, and API calls. In AWS GovCloud (US) regions, ARNs have an identifier that is different from the one in other standard AWS regions, arn:aws-us-gov . If you need to specify the US-West or US-East region, use one of the following ARNs: For US-West, use us-gov-west-1 . For US-East, use us-gov-east-1 . For all other standard regions, ARNs begin with: arn:aws . In AWS GovCloud (US) regions, use the endpoints listed in the AWS GovCloud (US-East) and AWS GovCloud (US-West) rows of the "Amazon S3 endpoints" table on Amazon Simple Storage Service endpoints and quotas . If you are processing export-controlled data, use one of the SSL/TLS endpoints. If you have FIPS requirements, use a FIPS 140-2 endpoint such as https://s3-fips.us-gov-west-1.amazonaws.com or https://s3-fips.us-gov-east-1.amazonaws.com . To find the other AWS-imposed restrictions, see How Amazon Simple Storage Service Differs for AWS GovCloud (US) .
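For example, an IAM policy statement that grants access to a GovCloud bucket must use the aws-us-gov partition in its ARNs. The following fragment is a minimal sketch with a hypothetical bucket name: "Resource": [ "arn:aws-us-gov:s3:::<bucket_name>/*" ] In all other standard regions, the same resource would be written as arn:aws:s3:::<bucket_name>/* .
4.6.3.2.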
Configuring Amazon Web Services You configure Amazon Web Services (AWS) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the AWS CLI installed. Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application. 4.6.3.3. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . 
If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.3.3.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.3.3.2. Creating profiles for different credentials If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file. Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR). Procedure Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example: [backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create a Secret object with the credentials-velero file: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Add the profiles to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: "backupStorage" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: "volumeSnapshot" 4.6.3.3.3. Configuring the backup storage location using AWS You can configure the AWS backup storage location (BSL) as shown in the following example procedure. Prerequisites You have created an object storage bucket using AWS. You have installed the OADP Operator. Procedure Configure the BSL custom resource (CR) with values as applicable to your use case.
Backup storage location apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: "true" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: "50..c-4da1-419f-a16e-ei...49f" 12 customerKeyEncryptionFile: "/credentials/customer-key" 13 signatureVersion: "1" 14 profile: "default" 15 insecureSkipTLSVerify: "true" 16 enableSharedConfig: "true" 17 tagging: "" 18 checksumAlgorithm: "CRC32" 19 1 The name of the object store plugin. In this example, the plugin is aws . This field is required. 2 The name of the bucket in which to store backups. This field is required. 3 The prefix within the bucket in which to store backups. This field is optional. 4 The credentials for the backup storage location. You can set custom credentials. If custom credentials are not set, the default credentials' secret is used. 5 The key within the secret credentials' data. 6 The name of the secret containing the credentials. 7 The AWS region where the bucket is located. Optional if s3ForcePathStyle is false. 8 A boolean flag to decide whether to use path-style addressing instead of virtual hosted bucket addressing. Set to true if using a storage service such as MinIO or NooBaa. This is an optional field. The default value is false . 9 You can specify the AWS S3 URL here for explicitness. This field is primarily for storage services such as MinIO or NooBaa. This is an optional field. 10 This field is primarily used for storage services such as MinIO or NooBaa. This is an optional field. 11 The name of the server-side encryption algorithm to use for uploading objects, for example, AES256 . This is an optional field. 12 Specify an AWS KMS key ID. You can format it, as shown in the example, as an alias, such as alias/<KMS-key-alias-name> , or as the full ARN to enable encryption of the backups stored in S3. Note that kmsKeyId cannot be used with customerKeyEncryptionFile . This is an optional field. 13 Specify the file that has the SSE-C customer key to enable customer key encryption of the backups stored in S3. The file must contain a 32-byte string. The customerKeyEncryptionFile field points to a mounted secret within the velero container. Add the following key-value pair to the velero cloud-credentials secret: customer-key: <your_b64_encoded_32byte_string> . Note that the customerKeyEncryptionFile field cannot be used with the kmsKeyId field. The default value is an empty string ( "" ), which means SSE-C is disabled. This is an optional field. 14 The version of the signature algorithm used to create signed URLs. You use signed URLs to download the backups, or fetch the logs. Valid values are 1 and 4 . The default version is 4 . This is an optional field. 15 The name of the AWS profile in the credentials file. The default value is default . This is an optional field. 16 Set the insecureSkipTLSVerify field to true if you do not want to verify the TLS certificate when connecting to the object store, for example, for self-signed certificates with MinIO. Setting to true is susceptible to man-in-the-middle attacks and is not recommended for production workloads. The default value is false . This is an optional field. 17 Set the enableSharedConfig field to true if you want to load the credentials file as a shared config file.
The default value is false . This is an optional field. 18 Specify the tags to annotate the AWS S3 objects. Specify the tags in key-value pairs. The default value is an empty string ( "" ). This is an optional field. 19 Specify the checksum algorithm to use for uploading objects to S3. The supported values are: CRC32 , CRC32C , SHA1 , and SHA256 . If you set the field as an empty string ( "" ), the checksum check will be skipped. The default value is CRC32 . This is an optional field. 4.6.3.3.4. Creating an OADP SSE-C encryption key for additional data security Amazon Web Services (AWS) S3 applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. OpenShift API for Data Protection (OADP) encrypts data by using SSL/TLS, HTTPS, and the velero-repo-credentials secret when transferring the data from a cluster to storage. To protect backup data in case of lost or stolen AWS credentials, apply an additional layer of encryption. The velero-plugin-for-aws plugin provides several additional encryption methods. You should review its configuration options and consider implementing additional encryption. You can store your own encryption keys by using server-side encryption with customer-provided keys (SSE-C). This feature provides additional security if your AWS credentials become exposed. Warning Be sure to store cryptographic keys in a secure and safe manner. Encrypted data and backups cannot be recovered if you do not have the encryption key. Prerequisites To make OADP mount a secret that contains your SSE-C key to the Velero pod at /credentials , use the following default secret name for AWS: cloud-credentials , and leave at least one of the following labels empty: dpa.spec.backupLocations[].velero.credential dpa.spec.snapshotLocations[].velero.credential This is a workaround for a known issue: https://issues.redhat.com/browse/OADP-3971 . Note The following procedure contains an example of a spec:backupLocations block that does not specify credentials. This example would trigger an OADP secret mounting. If you need the backup location to have credentials with a different name than cloud-credentials , you must add a snapshot location, such as the one in the following example, that does not contain a credential name. Because the example does not contain a credential name, the snapshot location will use cloud-credentials as its secret for taking snapshots. Example snapshot location in a DPA without credentials specified snapshotLocations: - velero: config: profile: default region: <region> provider: aws # ... 
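For reference, a backup location without an explicit credential name follows the same pattern and also causes OADP to fall back to the cloud-credentials secret. The following is a minimal sketch with a hypothetical bucket name, not a complete backup storage location specification:

backupLocations:
- velero:
    config:
      profile: default
      region: <region>
    provider: aws
    default: true
    objectStorage:
      bucket: <bucket_name>
      prefix: velero
# ...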
Procedure Create an SSE-C encryption key: Generate a random number and save it as a file named sse.key by running the following command: USD dd if=/dev/urandom bs=1 count=32 > sse.key Encode the sse.key by using Base64 and save the result as a file named sse_encoded.key by running the following command: USD cat sse.key | base64 > sse_encoded.key Link the file named sse_encoded.key to a new file named customer-key by running the following command: USD ln -s sse_encoded.key customer-key Create an OpenShift Container Platform secret: If you are initially installing and configuring OADP, create the AWS credential and encryption key secret at the same time by running the following command: USD oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key If you are updating an existing installation, edit the values of the cloud-credential secret block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret # ... Edit the value of the customerKeyEncryptionFile attribute in the backupLocations block of the DataProtectionApplication CR manifest, as in the following example: spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default # ... Warning You must restart the Velero pod to remount the secret credentials properly on an existing installation. The installation is complete, and you can back up and restore OpenShift Container Platform resources. The data saved in AWS S3 storage is encrypted with the new key, and you cannot download it from the AWS S3 console or API without the additional encryption key. Verification To verify that you cannot download the encrypted files without the inclusion of an additional key, create a test file, upload it, and then try to download it. Create a test file by running the following command: USD echo "encrypt me please" > test.txt Upload the test file by running the following command: USD aws s3api put-object \ --bucket <bucket> \ --key test.txt \ --body test.txt \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 Try to download the file. In either the Amazon web console or the terminal, run the following command: USD s3cmd get s3://<bucket>/test.txt test.txt The download fails because the file is encrypted with an additional key. Download the file with the additional encryption key by running the following command: USD aws s3api get-object \ --bucket <bucket> \ --key test.txt \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 \ downloaded.txt Read the file contents by running the following command: USD cat downloaded.txt Example output encrypt me please Additional resources You can also use the additional encryption key to download a file that was backed up with Velero by running a different command. See Downloading a file with an SSE-C encryption key for files backed up by Velero . 4.6.3.3.4.1. Downloading a file with an SSE-C encryption key for files backed up by Velero When you are verifying an SSE-C encryption key, you can also download the file with the additional encryption key for files that were backed up with Velero.
Procedure Download the file with the additional encryption key for files backed up by Velero by running the following command: USD aws s3api get-object \ --bucket <bucket> \ --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 \ --debug \ velero_download.tar.gz 4.6.3.4. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.3.4.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.3.4.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.3.4.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
Procedure To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.3.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials , which contains separate profiles for the backup and snapshot location credentials. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators → Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: "default" s3ForcePathStyle: "true" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: "default" credential: key: cloud name: cloud-credentials 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 9 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 10 Specify whether to force path style URLs for S3 objects (Boolean). Not Required for AWS S3. Required only for S3 compatible storage. 11 Specify the URL of the object store that you are using to store backups. Not required for AWS S3. Required only for S3 compatible storage. 12 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 13 Specify a snapshot location, unless you use CSI snapshots or a File System Backup (FSB) to back up PVs. 14 The snapshot location must be in the same region as the PVs. 15 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the snapshot location. If your backup and snapshot locations use different credentials, create separate profiles in the credentials-velero file. Click Create . 
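Note
If your backup and snapshot locations use different credentials, the credentials-velero file referenced above holds one profile per location. The following layout is a sketch; the profile names backupStorage and volumeSnapshots are illustrative and must match the profile values that you reference in the DPA:
[backupStorage]
aws_access_key_id=<backup_access_key_id>
aws_secret_access_key=<backup_secret_access_key>
[volumeSnapshots]
aws_access_key_id=<snapshot_access_key_id>
aws_secret_access_key=<snapshot_secret_access_key>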
Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Verify that the PHASE is in Available . 4.6.3.5.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.3.6. Configuring the backup storage location with a MD5 checksum algorithm You can configure the Backup Storage Location (BSL) in the Data Protection Application (DPA) to use a MD5 checksum algorithm for both Amazon Simple Storage Service (Amazon S3) and S3-compatible storage providers. The checksum algorithm calculates the checksum for uploading and downloading objects to Amazon S3. You can use one of the following options to set the checksumAlgorithm field in the spec.backupLocations.velero.config.checksumAlgorithm section of the DPA. CRC32 CRC32C SHA1 SHA256 Note You can also set the checksumAlgorithm field to an empty value to skip the MD5 checksum check. If you do not set a value for the checksumAlgorithm field, then the default value is set to CRC32 . Prerequisites You have installed the OADP Operator. You have configured Amazon S3, or S3-compatible object storage as a backup location. Procedure Configure the BSL in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: "" 1 insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi 1 Specify the checksumAlgorithm . In this example, the checksumAlgorithm field is set to an empty value. 
You can select an option from the following list: CRC32 , CRC32C , SHA1 , SHA256 . Important If you are using Noobaa as the object storage provider, and you do not set the spec.backupLocations.velero.config.checksumAlgorithm field in the DPA, an empty value of checksumAlgorithm is added to the BSL configuration. The empty value is only added for BSLs that are created using the DPA. This value is not added if you create the BSL by using any other method. 4.6.3.7. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.3.8. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.3.9. 
Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.3.9.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.3.9.2. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . 4.6.4. 
Configuring the OpenShift API for Data Protection with IBM Cloud You install the OpenShift API for Data Protection (OADP) Operator on an IBM Cloud cluster to back up and restore applications on the cluster. You configure IBM Cloud Object Storage (COS) to store the backups. 4.6.4.1. Configuring the COS instance You create an IBM Cloud Object Storage (COS) instance to store the OADP backup data. After you create the COS instance, configure the HMAC service credentials. Prerequisites You have an IBM Cloud Platform account. You installed the IBM Cloud CLI . You are logged in to IBM Cloud. Procedure Install the IBM Cloud Object Storage (COS) plugin by running the following command: USD ibmcloud plugin install cos -f Set a bucket name by running the following command: USD BUCKET=<bucket_name> Set a bucket region by running the following command: USD REGION=<bucket_region> 1 1 Specify the bucket region, for example, eu-gb . Create a resource group by running the following command: USD ibmcloud resource group-create <resource_group_name> Set the target resource group by running the following command: USD ibmcloud target -g <resource_group_name> Verify that the target resource group is correctly set by running the following command: USD ibmcloud target Example output API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default In the example output, the resource group is set to Default . Set a resource group name by running the following command: USD RESOURCE_GROUP=<resource_group> 1 1 Specify the resource group name, for example, "default" . Create an IBM Cloud service-instance resource by running the following command: USD ibmcloud resource service-instance-create \ <service_instance_name> \ 1 <service_name> \ 2 <service_plan> \ 3 <region_name> 4 1 Specify a name for the service-instance resource. 2 Specify the service name. Alternatively, you can specify a service ID. 3 Specify the service plan for your IBM Cloud account. 4 Specify the region name. Example command USD ibmcloud resource service-instance-create test-service-instance cloud-object-storage \ 1 standard \ global \ -d premium-global-deployment 2 1 The service name is cloud-object-storage . 2 The -d flag specifies the deployment name. Extract the service instance ID by running the following command: USD SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id') Create a COS bucket by running the following command: USD ibmcloud cos bucket-create \ --bucket USDBUCKET \ --ibm-service-instance-id USDSERVICE_INSTANCE_ID \ --region USDREGION Variables such as USDBUCKET , USDSERVICE_INSTANCE_ID , and USDREGION are replaced by the values you set previously. Create HMAC credentials by running the following command: USD ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\"HMAC\":true} Extract the access key ID and the secret access key from the HMAC credentials and save them in the credentials-velero file. You can use the credentials-velero file to create a secret for the backup storage location. Run the following command: USD cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__
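You can confirm that the HMAC values were written before you create the secret; both keys in the file should have non-empty values:
$ cat credentials-velero
4.6.4.2.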
Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.4.3. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.6.4.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
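Note
You can confirm that the Secret referenced by the DPA exists before you create the instance. This optional check assumes the default Secret name, cloud-credentials:
$ oc get secret cloud-credentials -n openshift-adp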
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5 1 The provider is aws when you use IBM Cloud as a backup storage location. 2 Specify the IBM Cloud Object Storage (COS) bucket name. 3 Specify the COS region name, for example, eu-gb . 4 Specify the S3 URL of the COS bucket. For example, http://s3.eu-gb.cloud-object-storage.appdomain.cloud . Here, eu-gb is the region name. Replace the region name according to your bucket region. 5 Defines the name of the secret you created by using the access key and the secret access key from the HMAC credentials. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Verify that the PHASE is in Available . 4.6.4.5. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. 4.6.4.6. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. 
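You can list the labels that are currently applied to your nodes before you choose one:
$ oc get nodes --show-labels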
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.4.7. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.4.8. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. 
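Note
You can inspect the policy that is currently applied to the Velero deployment before you override it. This optional check assumes the default deployment name, velero, with the Velero container first in the pod spec:
$ oc get deployment velero -n openshift-adp -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'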
Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.4.9. Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.4.10. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. 
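For a quick manual toggle, you can also patch the field directly. This sketch assumes a DPA named dpa-sample in the openshift-adp namespace:
$ oc patch dpa dpa-sample -n openshift-adp --type merge -p '{"spec":{"configuration":{"nodeAgent":{"enable":false}}}}'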
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". 4.6.5. Configuring the OpenShift API for Data Protection with Microsoft Azure You install the OpenShift API for Data Protection (OADP) with Microsoft Azure by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure Azure for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.5.1. Configuring Microsoft Azure You configure Microsoft Azure for OpenShift API for Data Protection (OADP). Prerequisites You must have the Azure CLI installed. Tools that use Azure services should always have restricted permissions to make sure that Azure resources are safe. Therefore, instead of having applications sign in as a fully privileged user, Azure offers service principals. An Azure service principal is a name that can be used with applications, hosted services, or automated tools. This identity is used for access to resources. Create a service principal Sign in using a service principal and password Sign in using a service principal and certificate Manage service principal roles Create an Azure resource using a service principal Reset service principal credentials For more details, see Create an Azure service principal with Azure CLI . 4.6.5.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. 
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.5.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-azure . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.5.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-azure . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" provider: azure 1 Backup location Secret with custom name. 4.6.5.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.5.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. 
Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.5.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.5.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create it by re-running the commands from the previous step. You can check whether the /tmp/your-cacert.txt file still exists in the location where you stored it by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.5.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box.
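Note
The cloud-credentials-azure Secret is expected to hold the service principal credentials as environment variables. The following credentials-velero layout is a sketch; replace the placeholder values with your own:
AZURE_SUBSCRIPTION_ID=<subscription_id>
AZURE_TENANT_ID=<tenant_id>
AZURE_CLIENT_ID=<client_id>
AZURE_CLIENT_SECRET=<client_secret>
AZURE_RESOURCE_GROUP=<resource_group>
AZURE_CLOUD_NAME=AzurePublicCloud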
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Specify the Azure resource group. 9 Specify the Azure storage account ID. 10 Specify the Azure subscription ID. 11 If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. 12 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 13 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 14 You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs. 15 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . 
Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Verify that the PHASE is in Available . 4.6.5.5. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.5.6. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.5.6.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. 
Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.5.6.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.5.6.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.6. Configuring the OpenShift API for Data Protection with Google Cloud Platform You install the OpenShift API for Data Protection (OADP) with Google Cloud Platform (GCP) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure GCP for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.6.1. Configuring Google Cloud Platform You configure Google Cloud Platform (GCP) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the gcloud and gsutil CLI tools installed. 
See the Google cloud documentation for details. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to create a Secret object for GCP before you install the Data Protection Application. 4.6.6.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. 
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.6.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-gcp . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.6.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-gcp . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 1 Backup location Secret with custom name. 4.6.6.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.6.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. 
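Note
You can review the resource requests and limits that are currently set on the Velero pod before you edit them. This optional check assumes the default deployment name, velero, with the Velero container first in the pod spec:
$ oc get deployment velero -n openshift-adp -o jsonpath='{.spec.template.spec.containers[0].resources}'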
Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.6.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.6.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step. You can check whether the /tmp/your-cacert.txt file still exists in the location where you stored it by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.6.4. Google workload identity federation cloud authentication Applications running outside Google Cloud use long-lived credentials, such as service account keys, to gain access to Google Cloud resources. These service account keys might become a security risk if they are not properly managed. With Google's workload identity federation, you can use Identity and Access Management (IAM) to offer IAM roles, including the ability to impersonate service accounts, to external identities. This eliminates the maintenance and security risks associated with service account keys. Workload identity federation handles encrypting and decrypting certificates, extracting user attributes, and performing validation. Identity federation externalizes authentication, passing it over to Security Token Services (STS), and reduces the demands on individual developers. Authorization and controlling access to resources remain the responsibility of the application. Note Google workload identity federation is available for OADP 1.3.x and later. When backing up volumes, OADP on GCP with Google workload identity federation authentication only supports CSI snapshots. OADP on GCP with Google workload identity federation authentication does not support Volume Snapshot Locations (VSL) backups. For more details, see Google workload identity federation known issues . If you do not use Google workload identity federation cloud authentication, continue to Installing the Data Protection Application . Prerequisites You have installed a cluster in manual mode with GCP Workload Identity configured . You have access to the Cloud Credential Operator utility ( ccoctl ) and to the associated workload identity pool.
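Note Before you begin the following procedure, you can confirm that the workload identity pool that you plan to reference as <pool_id> exists. A sketch, which assumes that the gcloud CLI is authenticated against the correct project: USD gcloud iam workload-identity-pools list --location=global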
Procedure Create an oadp-credrequest directory by running the following command: USD mkdir -p oadp-credrequest Create a CredentialsRequest.yaml file as follows: echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml Use the ccoctl utility to process the CredentialsRequest objects in the oadp-credrequest directory by running the following command: USD ccoctl gcp create-service-accounts \ --name=<name> \ --project=<gcp_project_id> \ --credentials-requests-dir=oadp-credrequest \ --workload-identity-pool=<pool_id> \ --workload-identity-provider=<provider_id> The manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml file is now available to use in the following steps. Create a namespace by running the following command: USD oc create namespace <OPERATOR_INSTALL_NS> Apply the credentials to the namespace by running the following command: USD oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml 4.6.6.4.1. Google workload identity federation known issues Volume Snapshot Location (VSL) backups finish with a PartiallyFailed phase when GCP workload identity federation is configured. Google workload identity federation authentication does not support VSL backups. 4.6.6.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-gcp . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators → Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box.
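Note If you prefer the CLI to the web console, you can save the manifest shown in the next step to a file and create the DPA with the OpenShift CLI instead. A minimal sketch, in which the dpa.yaml file name is only an example: USD oc create -f dpa.yaml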
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Secret key that contains credentials. For Google workload identity federation cloud authentication use service_account.json . 9 Secret name that contains credentials. If you do not specify this value, the default name, cloud-credentials-gcp , is used. 10 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 11 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 12 Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs. 13 The snapshot location must be in the same region as the PVs. 14 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-gcp , is used. If you specify a custom name, the custom name is used for the backup location. 15 Google workload identity federation supports internal image backup. Set this field to false if you do not want to use image backup. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . 
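Instead of polling the status manually, you can block until the condition is reported. A sketch, assuming the DPA is named dpa-sample: USD oc wait dpa/dpa-sample -n openshift-adp --for=condition=Reconciled --timeout=2m The command returns when the Reconciled condition has the status True .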
Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Verify that the PHASE is in Available . 4.6.6.6. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.6.7. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.6.7.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. 
Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.6.7.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.6.7.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.7. Configuring the OpenShift API for Data Protection with Multicloud Object Gateway You install the OpenShift API for Data Protection (OADP) with Multicloud Object Gateway (MCG) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure Multicloud Object Gateway as a backup location. MCG is a component of OpenShift Data Foundation. You configure MCG as a backup location in the DataProtectionApplication custom resource (CR). Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.6.7.1. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials, which you need to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object when you install the Data Protection Application. 4.6.7.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.7.2.1. 
Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.7.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: profile: "default" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Specify the region, following the naming convention of the documentation of your object storage server. 2 Backup location Secret with custom name. 4.6.7.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.7.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. 
Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.7.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.7.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step. You can check whether the /tmp/your-cacert.txt file still exists in the location where you stored it by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.7.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators → Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box.
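Note For Multicloud Object Gateway, the s3Url value in the following manifest is typically the NooBaa S3 endpoint. If your deployment exposes the default s3 route, you can retrieve the host with a command such as the following; the openshift-storage namespace is an assumption based on a default OpenShift Data Foundation installation: USD oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}'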
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: "default" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws . For Azure and GCP object stores, the azure or gcp plugin is required. 3 The openshift plugin is mandatory. 4 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 5 The administrative agent that routes the administrative requests to servers. 6 Set this value to true if you want to enable nodeAgent and perform File System Backup. 7 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 8 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 9 Specify the region, following the naming convention of the documentation of your object storage server. 10 Specify the URL of the S3 endpoint. 11 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 12 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 13 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Verify that the PHASE is in Available . 4.6.7.5. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. 
After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.7.6. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.7.6.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. 
For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.7.6.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.7.6.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Performance tuning guide for Multicloud Object Gateway . Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.8. Configuring the OpenShift API for Data Protection with OpenShift Data Foundation You install the OpenShift API for Data Protection (OADP) with OpenShift Data Foundation by installing the OADP Operator and configuring a backup location and a snapshot location. Then, you install the Data Protection Application. Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You can configure Multicloud Object Gateway or any AWS S3-compatible object storage as a backup location. Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. 
For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.6.8.1. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. Additional resources Creating an Object Bucket Claim using the OpenShift Web Console . 4.6.8.1.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials , unless your backup storage provider has a default plugin, such as aws , azure , or gcp . In that case, the default name is specified in the provider-specific OADP installation procedure. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.8.1.2. 
Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.6.8.2. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.8.2.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.8.2.1.1. Adjusting Ceph CPU and memory requirements based on collected data The following recommendations are based on observations of performance made in the scale and performance lab. The changes are specifically related to Red Hat OpenShift Data Foundation (ODF). If working with ODF, consult the appropriate tuning guides for official recommendations. 4.6.8.2.1.1.1. CPU and memory requirement for configurations Backup and restore operations require large amounts of CephFS PersistentVolumes (PVs). 
To avoid Ceph MDS pods restarting with an out-of-memory (OOM) error, the following configuration is suggested: for CPU, change the request to 3 and set the maximum limit to 3; for memory, change the request to 8 Gi and set the maximum limit to 128 Gi. 4.6.8.2.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.8.2.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step. You can check whether the /tmp/your-cacert.txt file still exists in the location where you stored it by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.8.3. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator.
You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws . For Azure and GCP object stores, the azure or gcp plugin is required. 3 Optional: The kubevirt plugin is used with OpenShift Virtualization. 4 Specify the csi default plugin if you use CSI snapshots to back up PVs. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 10 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 
14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Verify that the PHASE is in Available . 4.6.8.4. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.8.5. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. 
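Note After the DPA reconciles, you can confirm which pull policy was applied to the Velero deployment. A sketch, assuming the default openshift-adp namespace: USD oc get deployment velero -n openshift-adp -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'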
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example:
Example Data Protection Application
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  backupLocations:
  - name: default
    velero:
      config:
        insecureSkipTLSVerify: "true"
        profile: "default"
        region: <bucket_region>
        s3ForcePathStyle: "true"
        s3Url: <bucket_url>
      credential:
        key: cloud
        name: cloud-credentials
      default: true
      objectStorage:
        bucket: <bucket_name>
        prefix: velero
      provider: aws
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
      - openshift
      - aws
      - kubevirt
      - csi
      imagePullPolicy: Never 1
1 Specify the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.

4.6.8.5.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector that you used for labeling the nodes. For example:
configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""
The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""', are on the node:
configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
        node-role.kubernetes.io/worker: ""

4.6.8.5.2. Creating an Object Bucket Claim for disaster recovery on OpenShift Data Foundation
If you use cluster storage for your Multicloud Object Gateway (MCG) bucket backupStorageLocation on OpenShift Data Foundation, create an Object Bucket Claim (OBC) using the OpenShift Web Console.
Warning
Failure to configure an Object Bucket Claim (OBC) might lead to backups not being available.
Note
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
Procedure
Create an Object Bucket Claim (OBC) using the OpenShift web console as described in Creating an Object Bucket Claim using the OpenShift Web Console.

4.6.8.5.3. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the following example:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
...
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - csi 1
1 Add the csi default plugin.

4.6.8.5.4. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
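One way to confirm that no backups are in progress before you change the node agent setting is to list the backup phases. The following command is a sketch, not part of the documented procedure; any equivalent check works:
$ oc get backups.velero.io -n openshift-adp \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
If any backup reports an InProgress phase, wait for it to complete before disabling the node agent.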
Procedure
To disable the nodeAgent, set the enable flag to false. See the following example:
Example DataProtectionApplication CR
# ...
configuration:
  nodeAgent:
    enable: false 1
    uploaderType: kopia
# ...
1 Disables the node agent.
To enable the nodeAgent, set the enable flag to true. See the following example:
Example DataProtectionApplication CR
# ...
configuration:
  nodeAgent:
    enable: true 1
    uploaderType: kopia
# ...
1 Enables the node agent.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
Additional resources
Installing the Data Protection Application with the kubevirt and openshift plugins
Running tasks in pods using jobs
Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations

4.6.9. Configuring the OpenShift API for Data Protection with OpenShift Virtualization
You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. Then, you can install the Data Protection Application.
Back up and restore virtual machines by using the OpenShift API for Data Protection.
Note
OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options:
- Container Storage Interface (CSI) backups
- Container Storage Interface (CSI) backups with DataMover
The following storage options are excluded:
- File system backup and restore
- Volume snapshot backups and restores
For more information, see Backing up applications with File System Backup: Kopia or Restic.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.

4.6.9.1. Installing and configuring OADP with OpenShift Virtualization
As a cluster administrator, you install OADP by installing the OADP Operator.
The latest version of the OADP Operator installs Velero 1.14.
Prerequisites
Access to the cluster as a user with the cluster-admin role.
Procedure
Install the OADP Operator according to the instructions for your storage provider.
Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins.
Back up virtual machines by creating a Backup custom resource (CR).
Warning
Red Hat support is limited to only the following options:
- CSI backups
- CSI backups with DataMover
You restore the Backup CR by creating a Restore CR.
Additional resources
OADP plugins
Backup custom resource (CR)
Restore CR
Using Operator Lifecycle Manager on restricted networks

4.6.9.2. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
You must install the OADP Operator.
You must configure object storage as a backup location.
If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
Note
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
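If you plan to rely on a default Secret with an empty credentials-velero file, as the preceding note describes, the following sketch shows one way to create it. The file path is illustrative, and the Secret name shown (cloud-credentials) must be adjusted to the default name for your provider plugin, for example, cloud-credentials-gcp for GCP:
$ touch /tmp/credentials-velero
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=/tmp/credentials-velero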
Procedure
Click Operators → Installed Operators and select the OADP Operator.
Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp 1
spec:
  configuration:
    velero:
      defaultPlugins:
        - kubevirt 2
        - gcp 3
        - csi 4
        - openshift 5
      resourceTimeout: 10m 6
    nodeAgent: 7
      enable: true 8
      uploaderType: kopia 9
      podConfig:
        nodeSelector: <node_selector> 10
  backupLocations:
    - velero:
        provider: gcp 11
        default: true
        credential:
          key: cloud
          name: <default_secret> 12
        objectStorage:
          bucket: <bucket_name> 13
          prefix: <prefix> 14

1 The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
2 The kubevirt plugin is mandatory for OpenShift Virtualization.
3 Specify the plugin for the backup provider, for example, gcp, if it exists.
4 The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs. You do not need to configure a snapshot location.
5 The openshift plugin is mandatory.
6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
7 The administrative agent that routes the administrative requests to servers.
8 Set this value to true if you want to enable nodeAgent and perform File System Backup.
9 Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
10 Specify the nodes on which Kopia is available. By default, Kopia runs on all nodes.
11 Specify the backup provider.
12 Specify the correct default name for the Secret, for example, cloud-credentials-gcp, if you use a default plugin for the backup provider. If you specify a custom name, the custom name is used for the backup location. If you do not specify a Secret name, the default name is used.
13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
14 Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
Example output
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
Example output
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify that the type is set to Reconciled.
Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example output
NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
Verify that the PHASE is Available.
Warning
If you run a backup of a Microsoft Windows virtual machine (VM) immediately after the VM reboots, the backup might fail with a PartiallyFailed error.
This failure occurs because, immediately after a VM boots, the Microsoft Windows Volume Shadow Copy Service (VSS) and Guest Agent (GA) service are not yet ready, and a backup taken while they are unready fails. In such a case, retry the backup a few minutes after the VM boots.

4.6.9.3. Backing up a single VM
If you have a namespace with multiple virtual machines (VMs), and want to back up only one of them, you can use the label selector to filter the VM that needs to be included in the backup. You can filter the VM by using the app: vmname label.
Prerequisites
You have installed the OADP Operator.
You have multiple VMs running in a namespace.
You have added the kubevirt plugin in the DataProtectionApplication (DPA) custom resource (CR).
You have configured the BackupStorageLocation CR in the DataProtectionApplication CR and BackupStorageLocation is available.
Procedure
Configure the Backup CR as shown in the following example:
Example Backup CR
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: vmbackupsingle
  namespace: openshift-adp
spec:
  snapshotMoveData: true
  includedNamespaces:
  - <vm_namespace> 1
  labelSelector:
    matchLabels:
      app: <vm_app_name> 2
  storageLocation: <backup_storage_location_name> 3
1 Specify the name of the namespace where you have created the VMs.
2 Specify the VM name that needs to be backed up.
3 Specify the name of the BackupStorageLocation CR.
To create a Backup CR, run the following command:
$ oc apply -f <backup_cr_file_name> 1
1 Specify the name of the Backup CR file.

4.6.9.4. Restoring a single VM
After you have backed up a single virtual machine (VM) by using the label selector in the Backup custom resource (CR), you can create a Restore CR and point it to the backup. This restore operation restores a single VM.
Prerequisites
You have installed the OADP Operator.
You have backed up a single VM by using the label selector.
Procedure
Configure the Restore CR as shown in the following example:
Example Restore CR
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: vmrestoresingle
  namespace: openshift-adp
spec:
  backupName: vmbackupsingle 1
  restorePVs: true
1 Specifies the name of the backup of a single VM.
To restore the single VM, run the following command:
$ oc apply -f <restore_cr_file_name> 1
1 Specify the name of the Restore CR file.

4.6.9.5. Restoring a single VM from a backup of multiple VMs
If you have a backup containing multiple virtual machines (VMs), and you want to restore only one VM, you can use the LabelSelectors section in the Restore CR to select the VM to restore. To ensure that the persistent volume claim (PVC) attached to the VM is correctly restored, and the restored VM is not stuck in a Provisioning status, use both the app: <vm_name> and the kubevirt.io/created-by labels. To match the kubevirt.io/created-by label, use the UID of the DataVolume of the VM.
Prerequisites
You have installed the OADP Operator.
You have labeled the VMs that need to be backed up.
You have a backup of multiple VMs.
Procedure
Before you take a backup of multiple VMs, ensure that the VMs are labeled by running the following command:
$ oc label vm <vm_name> app=<vm_name> -n openshift-adp
Configure the label selectors in the Restore CR as shown in the following example:
Example Restore CR
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: singlevmrestore
  namespace: openshift-adp
spec:
  backupName: multiplevmbackup
  restorePVs: true
  LabelSelectors:
  - matchLabels:
      kubevirt.io/created-by: <datavolume_uid> 1
  - matchLabels:
      app: <vm_name> 2
1 Specify the UID of the DataVolume of the VM that you want to restore. For example, b6... 53a-ddd7-4d9d-9407-a0c... e5.
2 Specify the name of the VM that you want to restore. For example, test-vm.
To restore a VM, run the following command:
$ oc apply -f <restore_cr_file_name> 1
1 Specify the name of the Restore CR file.

4.6.9.6. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example:
Example Data Protection Application
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  backupLocations:
  - name: default
    velero:
      config:
        insecureSkipTLSVerify: "true"
        profile: "default"
        region: <bucket_region>
        s3ForcePathStyle: "true"
        s3Url: <bucket_url>
      credential:
        key: cloud
        name: cloud-credentials
      default: true
      objectStorage:
        bucket: <bucket_name>
        prefix: velero
      provider: aws
  configuration:
    nodeAgent:
      enable: true
      uploaderType: restic
    velero:
      client-burst: 500 1
      client-qps: 300 2
      defaultPlugins:
      - openshift
      - aws
      - kubevirt
1 Specify the client-burst value. In this example, the client-burst field is set to 500.
2 Specify the client-qps value. In this example, the client-qps field is set to 300.

4.6.9.7. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example:
Example Data Protection Application
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  backupLocations:
  - name: default
    velero:
      config:
        insecureSkipTLSVerify: "true"
        profile: "default"
        region: <bucket_region>
        s3ForcePathStyle: "true"
        s3Url: <bucket_url>
      credential:
        key: cloud
        name: cloud-credentials
      default: true
      objectStorage:
        bucket: <bucket_name>
        prefix: velero
      provider: aws
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
      - openshift
      - aws
      - kubevirt
      - csi
      imagePullPolicy: Never 1
1 Specify the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.

4.6.9.7.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector that you used for labeling the nodes. For example:
configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""
The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""', are on the node:
configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
        node-role.kubernetes.io/worker: ""

4.6.9.8. About incremental back up support
OADP supports incremental backups of block and Filesystem persistent volumes for both containerized and OpenShift Virtualization workloads. The following tables summarize the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover:

Table 4.4. OADP backup support matrix for containerized workloads
Volume mode | FSB - Restic | FSB - Kopia  | CSI   | CSI Data Mover
Filesystem  | S [1], I [2] | S [1], I [2] | S [1] | S [1], I [2]
Block       | N [3]        | N [3]        | S [1] | S [1], I [2]

Table 4.5. OADP backup support matrix for OpenShift Virtualization workloads
Volume mode | FSB - Restic | FSB - Kopia | CSI   | CSI Data Mover
Filesystem  | N [3]        | N [3]       | S [1] | S [1], I [2]
Block       | N [3]        | N [3]       | S [1] | S [1], I [2]

1. Backup supported
2. Incremental backup supported
3. Not supported
Note
The CSI Data Mover backups use Kopia regardless of uploaderType.
Important
Red Hat only supports the combination of OADP versions 1.3.0 and later, and OpenShift Virtualization versions 4.14 and later. OADP versions before 1.3.0 are not supported for back up and restore of OpenShift Virtualization.

4.6.10. Configuring the OpenShift API for Data Protection (OADP) with more than one Backup Storage Location
You can configure one or more backup storage locations (BSLs) in the Data Protection Application (DPA). You can also select the location to store the backup in when you create the backup. With this configuration, you can store your backups in the following ways:
- To different regions
- To a different storage provider
OADP supports multiple credentials for configuring more than one BSL, so that you can specify the credentials to use with any BSL.
4.6.10.1. Configuring the DPA with more than one BSL
You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider.
Prerequisites
You must install the OADP Operator.
You must create the secrets by using the credentials provided by the cloud provider.
Procedure
Configure the DPA with more than one BSL. See the following example:
Example DPA
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
#...
backupLocations:
- name: aws 1
  velero:
    provider: aws
    default: true 2
    objectStorage:
      bucket: <bucket_name> 3
      prefix: <prefix> 4
    config:
      region: <region_name> 5
      profile: "default"
    credential:
      key: cloud
      name: cloud-credentials 6
- name: odf 7
  velero:
    provider: aws
    default: false
    objectStorage:
      bucket: <bucket_name>
      prefix: <prefix>
    config:
      profile: "default"
      region: <region_name>
      s3Url: <url> 8
      insecureSkipTLSVerify: "true"
      s3ForcePathStyle: "true"
    credential:
      key: cloud
      name: <custom_secret_name_odf> 9
#...
1 Specify a name for the first BSL.
2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR, the default BSL is used. You can set only one BSL as the default.
3 Specify the bucket name.
4 Specify a prefix for Velero backups; for example, velero.
5 Specify the AWS region for the bucket.
6 Specify the name of the default Secret object that you created.
7 Specify a name for the second BSL.
8 Specify the URL of the S3 endpoint.
9 Specify the correct name for the Secret; for example, custom_secret_name_odf. If you do not specify a Secret name, the default name is used.
Specify the BSL to be used in the backup CR. See the following example:
Example backup CR
apiVersion: velero.io/v1
kind: Backup
# ...
spec:
  includedNamespaces:
  - <namespace> 1
  storageLocation: <backup_storage_location> 2
  defaultVolumesToFsBackup: true
1 Specify the namespace to back up.
2 Specify the storage location.

4.6.10.2. OADP use case for two BSLs
In this use case, you configure the DPA with two storage locations by using two cloud credentials. You back up an application with a database by using the default BSL. OADP stores the backup resources in the default BSL. You then back up the application again by using the second BSL.
Prerequisites
You must install the OADP Operator.
You must configure two backup storage locations: AWS S3 and Multicloud Object Gateway (MCG).
You must have an application with a database deployed on a Red Hat OpenShift cluster.
Procedure
Create the first Secret for the AWS S3 storage provider with the default name by running the following command:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1
1 Specify the name of the cloud credentials file for AWS S3.
Create the second Secret for MCG with a custom name by running the following command:
$ oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1
1 Specify the name of the cloud credentials file for MCG.
Note the name of the mcg-secret custom secret.
Configure the DPA with the two BSLs as shown in the following example.
Example DPA
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: two-bsl-dpa
  namespace: openshift-adp
spec:
  backupLocations:
  - name: aws
    velero:
      config:
        profile: default
        region: <region_name> 1
      credential:
        key: cloud
        name: cloud-credentials
      default: true
      objectStorage:
        bucket: <bucket_name> 2
        prefix: velero
      provider: aws
  - name: mcg
    velero:
      config:
        insecureSkipTLSVerify: "true"
        profile: noobaa
        region: <region_name> 3
        s3ForcePathStyle: "true"
        s3Url: <s3_url> 4
      credential:
        key: cloud
        name: mcg-secret 5
      objectStorage:
        bucket: <bucket_name_mcg> 6
        prefix: velero
      provider: aws
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
      - openshift
      - aws
1 Specify the AWS region for the bucket.
2 Specify the AWS S3 bucket name.
3 Specify the region, following the naming convention of the documentation of MCG.
4 Specify the URL of the S3 endpoint for MCG.
5 Specify the name of the custom secret for MCG storage.
6 Specify the MCG bucket name.
Create the DPA by running the following command:
$ oc create -f <dpa_file_name> 1
1 Specify the file name of the DPA you configured.
Verify that the DPA has reconciled by running the following command:
$ oc get dpa -o yaml
Verify that the BSLs are available by running the following command:
$ oc get bsl
Example output
NAME   PHASE       LAST VALIDATED   AGE     DEFAULT
aws    Available   5s               3m28s   true
mcg    Available   5s               3m28s
Create a backup CR with the default BSL.
Note
In the following example, the storageLocation field is not specified in the backup CR.
Example backup CR
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup1
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <mysql_namespace> 1
  defaultVolumesToFsBackup: true
1 Specify the namespace for the application installed in the cluster.
Create a backup by running the following command:
$ oc apply -f <backup_file_name> 1
1 Specify the name of the backup CR file.
Verify that the backup completed with the default BSL by running the following command:
$ oc get backups.velero.io <backup_name> -o yaml 1
1 Specify the name of the backup.
Create a backup CR by using MCG as the BSL. In the following example, note that the second storageLocation value is specified at the time of backup CR creation.
Example backup CR
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup1
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <mysql_namespace> 1
  storageLocation: mcg 2
  defaultVolumesToFsBackup: true
1 Specify the namespace for the application installed in the cluster.
2 Specify the second storage location.
Create a second backup by running the following command:
$ oc apply -f <backup_file_name> 1
1 Specify the name of the backup CR file.
Verify that the backup completed with the storage location as MCG by running the following command:
$ oc get backups.velero.io <backup_name> -o yaml 1
1 Specify the name of the backup.
Additional resources
Creating profiles for different credentials

4.6.11. Configuring the OpenShift API for Data Protection (OADP) with more than one Volume Snapshot Location
You can configure one or more Volume Snapshot Locations (VSLs) to store the snapshots in different cloud provider regions.

4.6.11.1. Configuring the DPA with more than one VSL
You configure the DPA with more than one VSL and specify the credentials provided by the cloud provider. Make sure that you configure the snapshot location in the same region as the persistent volumes. See the following example.
Example DPA
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
#...
snapshotLocations:
  - velero:
      config:
        profile: default
        region: <region> 1
      credential:
        key: cloud
        name: cloud-credentials
      provider: aws
  - velero:
      config:
        profile: default
        region: <region>
      credential:
        key: cloud
        name: <custom_credential> 2
      provider: aws
#...
1 Specify the region. The snapshot location must be in the same region as the persistent volumes.
2 Specify the custom credential name.

4.7. Uninstalling OADP

4.7.1. Uninstalling the OpenShift API for Data Protection
You uninstall the OpenShift API for Data Protection (OADP) by deleting the OADP Operator. See Deleting Operators from a cluster for details.

4.8. OADP backing up

4.8.1. Backing up applications
Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if using non-local backups, for example, S3 buckets. Because every backup that is taken remains until it expires, also check the time to live (TTL) setting of the schedule. (A quick way to check when a backup expires is sketched at the end of this section.)
You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR.
The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage.
If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots.
For more information about CSI volume snapshots, see CSI volume snapshots.
Important
The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Note
The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation.
The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage.
If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Kopia or Restic. See Backing up applications with File System Backup: Kopia or Restic.
PodVolumeRestore fails with a ... /.snapshot: read-only file system error
The ... /.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory.
Do not give Velero write access to the .snapshot directory, and disable client access to this directory.
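As a quick way to see when a given backup expires and will be removed by TTL processing, you can read the expiration timestamp from the Backup CR status. This is a sketch that assumes the default openshift-adp namespace:
$ oc get backups.velero.io <backup_name> -n openshift-adp -o jsonpath='{.status.expiration}'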
Additional resources
Enable or disable client access to Snapshot copy directory by editing a share
Prerequisites for backup and restore with FlashBlade
Important
The OpenShift API for Data Protection (OADP) does not support backing up volume snapshots that were created by other software.

4.8.1.1. Previewing resources before running backup and restore
OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations.
Prerequisites
You have installed the OADP Operator.
Procedure
To preview the resources included in the backup before running the actual backup, run the following command:
$ velero backup create <backup-name> --snapshot-volumes false 1
1 Specify the value of the --snapshot-volumes parameter as false.
To know more details about the backup resources, run the following command:
$ velero describe backup <backup_name> --details 1
1 Specify the name of the backup.
To preview the resources included in the restore before running the actual restore, run the following command:
$ velero restore create --from-backup <backup-name> 1
1 Specify the name of the backup created to review the backup resources.
Important
The velero restore create command creates restore resources in the cluster. You must delete the resources created as part of the restore, after you review the resources.
To know more details about the restore resources, run the following command:
$ velero describe restore <restore_name> --details 1
1 Specify the name of the restore.
You can create backup hooks to run commands before or after the backup operation. See Creating backup hooks.
You can schedule backups by creating a Schedule CR instead of a Backup CR. See Scheduling backups using Schedule CR.

4.8.1.2. Known issues
OpenShift Container Platform 4.14 enforces a pod security admission (PSA) policy that can hinder the readiness of pods during a Restic restore process.
This issue has been resolved in the OADP 1.1.6 and OADP 1.2.2 releases, therefore it is recommended that users upgrade to these releases.
For more information, see Restic restore partially failing on OCP 4.14 due to changed PSA policy.
Additional resources
Installing Operators on clusters for administrators
Installing Operators in namespaces for non-administrators

4.8.2. Creating a Backup CR
You back up Kubernetes images, internal images, and persistent volumes (PVs) by creating a Backup custom resource (CR).
Prerequisites
You must install the OpenShift API for Data Protection (OADP) Operator.
The DataProtectionApplication CR must be in a Ready state.
Backup location prerequisites:
- You must have S3 object storage configured for Velero.
- You must have a backup location configured in the DataProtectionApplication CR.
Snapshot location prerequisites:
- Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots.
- For CSI snapshots, you must create a VolumeSnapshotClass CR to register the CSI driver.
- You must have a volume location configured in the DataProtectionApplication CR.
Procedure
Retrieve the backupStorageLocations CRs by entering the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example output
NAMESPACE       NAME              PHASE       LAST VALIDATED   AGE   DEFAULT
openshift-adp   velero-sample-1   Available   11s              31m
Create a Backup CR, as in the following example:
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup>
  labels:
    velero.io/storage-location: default
  namespace: openshift-adp
spec:
  hooks: {}
  includedNamespaces:
  - <namespace> 1
  includedResources: [] 2
  excludedResources: [] 3
  storageLocation: <velero-sample-1> 4
  ttl: 720h0m0s 5
  labelSelector: 6
    matchLabels:
      app: <label_1>
      app: <label_2>
      app: <label_3>
  orLabelSelectors: 7
  - matchLabels:
      app: <label_1>
      app: <label_2>
      app: <label_3>
1 Specify an array of namespaces to back up.
2 Optional: Specify an array of resources to include in the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. If unspecified, all resources are included.
3 Optional: Specify an array of resources to exclude from the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified.
4 Specify the name of the backupStorageLocations CR.
5 The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. But storing this data consumes more space in the target backup locations. Additional storage is consumed with frequent backups, which are created even before other unexpired completed backups might have timed out.
6 Map of {key,value} pairs of backup resources that have all the specified labels.
7 Map of {key,value} pairs of backup resources that have one or more of the specified labels.
Verify that the status of the Backup CR is Completed:
$ oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'

4.8.3. Backing up persistent volumes with CSI snapshots
You back up persistent volumes with Container Storage Interface (CSI) snapshots by editing the VolumeSnapshotClass custom resource (CR) of the cloud storage before you create the Backup CR. For more information, see CSI volume snapshots and Creating a Backup CR.
Prerequisites
The cloud provider must support CSI snapshots.
You must enable CSI in the DataProtectionApplication CR.
Procedure
Add the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR:
Example configuration file
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: <volume_snapshot_class_name>
  labels:
    velero.io/csi-volumesnapshot-class: "true" 1
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: true 2
driver: <csi_driver>
deletionPolicy: <deletion_policy_type> 3
1 Must be set to true.
2 If you are restoring this volume in another cluster with the same driver, make sure that you set the snapshot.storage.kubernetes.io/is-default-class parameter to false instead of setting it to true. Otherwise, the restore will partially fail.
3 OADP supports the Retain and Delete deletion policy types for CSI and Data Mover backup and restore.
Next steps
You can now create a Backup CR.

4.8.4. Backing up applications with File System Backup: Kopia or Restic
You can use OADP to back up and restore Kubernetes volumes attached to pods from the file system of the volumes. This process is called File System Backup (FSB) or Pod Volume Backup (PVB).
It is accomplished by using modules from the open source backup tools Restic or Kopia.
If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using FSB.
Note
Restic is installed by the OADP Operator by default. If you prefer, you can install Kopia instead.
FSB integration with OADP provides a solution for backing up and restoring almost any type of Kubernetes volumes. This integration is an additional capability of OADP and is not a replacement for existing functionality.
You back up Kubernetes resources, internal images, and persistent volumes with Kopia or Restic by editing the Backup custom resource (CR).
You do not need to specify a snapshot location in the DataProtectionApplication CR.
Note
In OADP version 1.3 and later, you can use either Kopia or Restic for backing up applications. For the Built-in DataMover, you must use Kopia. In OADP version 1.2 and earlier, you can only use Restic for backing up applications.
Important
FSB does not support backing up hostPath volumes. For more information, see FSB limitations.
PodVolumeRestore fails with a ... /.snapshot: read-only file system error
The ... /.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory.
Do not give Velero write access to the .snapshot directory, and disable client access to this directory.
Additional resources
Enable or disable client access to Snapshot copy directory by editing a share
Prerequisites for backup and restore with FlashBlade
Prerequisites
You must install the OpenShift API for Data Protection (OADP) Operator.
You must not disable the default nodeAgent installation by setting spec.configuration.nodeAgent.enable to false in the DataProtectionApplication CR.
You must select Kopia or Restic as the uploader by setting spec.configuration.nodeAgent.uploaderType to kopia or restic in the DataProtectionApplication CR.
The DataProtectionApplication CR must be in a Ready state.
Procedure
Create the Backup CR, as in the following example:
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup>
  labels:
    velero.io/storage-location: default
  namespace: openshift-adp
spec:
  defaultVolumesToFsBackup: true 1
...
1 In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true setting within the spec block. In OADP version 1.1, add defaultVolumesToRestic: true.

4.8.5. Creating backup hooks
When performing a backup, it is possible to specify one or more commands to execute in a container within a pod, based on the pod being backed up.
The commands can be configured to run before any custom action processing (Pre hooks), or after all custom actions have been completed and any additional items specified by the custom action have been backed up (Post hooks).
You create backup hooks to run commands in a container in a pod by editing the Backup custom resource (CR).
Procedure
Add a hook to the spec.hooks block of the Backup CR, as in the following example:
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup>
  namespace: openshift-adp
spec:
  hooks:
    resources:
      - name: <hook_name>
        includedNamespaces:
        - <namespace> 1
        excludedNamespaces: 2
        - <namespace>
        includedResources:
        - pods 3
        excludedResources: [] 4
        labelSelector: 5
          matchLabels:
            app: velero
            component: server
        pre: 6
          - exec:
              container: <container> 7
              command:
              - /bin/uname 8
              - -a
              onError: Fail 9
              timeout: 30s 10
        post: 11
...
1 Optional: You can specify namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces.
2 Optional: You can specify namespaces to which the hook does not apply.
3 Currently, pods are the only supported resource that hooks can apply to.
4 Optional: You can specify resources to which the hook does not apply.
5 Optional: This hook only applies to objects matching the label. If this value is not specified, the hook applies to all objects.
6 Array of hooks to run before the backup.
7 Optional: If the container is not specified, the command runs in the first container in the pod.
8 The command that the hook runs; in this example, /bin/uname.
9 Allowed values for error handling are Fail and Continue. The default is Fail.
10 Optional: How long to wait for the commands to run. The default is 30s.
11 This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks.

4.8.6. Scheduling backups using Schedule CR
The schedule operation allows you to create a backup of your data at a particular time, specified by a Cron expression.
You schedule backups by creating a Schedule custom resource (CR) instead of a Backup CR.
Warning
Leave enough time in your backup schedule for a backup to finish before another backup is created.
For example, if a backup of a namespace typically takes 10 minutes, do not schedule backups more frequently than every 15 minutes.
Prerequisites
You must install the OpenShift API for Data Protection (OADP) Operator.
The DataProtectionApplication CR must be in a Ready state.
Procedure
Retrieve the backupStorageLocations CRs:
$ oc get backupStorageLocations -n openshift-adp
Example output
NAMESPACE       NAME              PHASE       LAST VALIDATED   AGE   DEFAULT
openshift-adp   velero-sample-1   Available   11s              31m
Create a Schedule CR, as in the following example:
$ cat << EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: <schedule>
  namespace: openshift-adp
spec:
  schedule: 0 7 * * * 1
  template:
    hooks: {}
    includedNamespaces:
    - <namespace> 2
    storageLocation: <velero-sample-1> 3
    defaultVolumesToFsBackup: true 4
    ttl: 720h0m0s 5
EOF
Note
To schedule a backup at specific intervals, enter the <duration_in_minutes> in the following format:
schedule: "*/10 * * * *"
Enter the minutes value between quotation marks (" ").
1 cron expression to schedule the backup, for example, 0 7 * * * to perform a backup every day at 7:00.
2 Array of namespaces to back up.
3 Name of the backupStorageLocations CR.
4 Optional: In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true key-value pair to your configuration when performing backups of volumes with Restic. In OADP version 1.1, add the defaultVolumesToRestic: true key-value pair when you back up volumes with Restic.
5 The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. But storing this data consumes more space in the target backup locations. Additional storage is consumed with frequent backups, which are created even before other unexpired completed backups might have timed out.
Verify that the status of the Schedule CR is Completed after the scheduled backup runs:
$ oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'
4.8.7. Deleting backups
You can delete a backup by creating the DeleteBackupRequest custom resource (CR) or by running the velero backup delete command as explained in the following procedures.
The volume backup artifacts are deleted at different times depending on the backup method:
- Restic: The artifacts are deleted in the full maintenance cycle, after the backup is deleted.
- Container Storage Interface (CSI): The artifacts are deleted immediately when the backup is deleted.
- Kopia: The artifacts are deleted after three full maintenance cycles of the Kopia repository, after the backup is deleted.

4.8.7.1. Deleting a backup by creating a DeleteBackupRequest CR
You can delete a backup by creating a DeleteBackupRequest custom resource (CR).
Prerequisites
You have run a backup of your application.
Procedure
Create a DeleteBackupRequest CR manifest file:
apiVersion: velero.io/v1
kind: DeleteBackupRequest
metadata:
  name: deletebackuprequest
  namespace: openshift-adp
spec:
  backupName: <backup_name> 1
1 Specify the name of the backup.
Apply the DeleteBackupRequest CR to delete the backup:
$ oc apply -f <deletebackuprequest_cr_filename>

4.8.7.2. Deleting a backup by using the Velero CLI
You can delete a backup by using the Velero CLI.
Prerequisites
You have run a backup of your application.
You downloaded the Velero CLI and can access the Velero binary in your cluster.
Procedure
To delete the backup, run the following Velero command:
$ velero backup delete <backup_name> -n openshift-adp 1
1 Specify the name of the backup.

4.8.7.3. About Kopia repository maintenance
There are two types of Kopia repository maintenance:
Quick maintenance
- Runs every hour to keep the number of index blobs (n) low. A high number of indexes negatively affects the performance of Kopia operations.
- Does not delete any metadata from the repository without ensuring that another copy of the same metadata exists.
Full maintenance
- Runs every 24 hours to perform garbage collection of repository contents that are no longer needed.
- snapshot-gc, a full maintenance task, finds all files and directory listings that are no longer accessible from snapshot manifests and marks them as deleted.
- A full maintenance is a resource-costly operation, as it requires scanning all directories in all snapshots that are active in the cluster.

4.8.7.3.1. Kopia maintenance in OADP
The repo-maintain-job jobs are executed in the namespace where OADP is installed, as shown in the following example:
pod/repo-maintain-job-173...2527-2nbls   0/1   Completed   0   168m
pod/repo-maintain-job-173....536-fl9tm   0/1   Completed   0   108m
pod/repo-maintain-job-173...2545-55ggx   0/1   Completed   0   48m
You can check the logs of the repo-maintain-job for more details about the cleanup and the removal of artifacts in the backup object storage. You can find a note, as shown in the following example, in the repo-maintain-job when the full cycle maintenance is due:
not due for full maintenance cycle until 2024-00-00 18:29:4
Important
Three successful executions of a full maintenance cycle are required for the objects to be deleted from the backup object storage. This means you can expect up to 72 hours for all the artifacts in the backup object storage to be deleted.

4.8.7.4. Deleting a backup repository
After you delete the backup, and after the Kopia repository maintenance cycles to delete the related artifacts are complete, the backup is no longer referenced by any metadata or manifest objects.
You can then delete the backuprepository custom resource (CR) to complete the backup deletion process.
Prerequisites
You have deleted the backup of your application.
You have waited up to 72 hours after the backup is deleted. This time frame allows Kopia to run the repository maintenance cycles.
Procedure
To get the name of the backup repository CR for a backup, run the following command:
$ oc get backuprepositories.velero.io -n openshift-adp
To delete the backup repository CR, run the following command:
$ oc delete backuprepository <backup_repository_name> -n openshift-adp 1
1 Specify the name of the backup repository from the earlier step.

4.8.8. About Kopia
Kopia is a fast and secure open-source backup and restore tool that allows you to create encrypted snapshots of your data and save the snapshots to remote or cloud storage of your choice.
Kopia supports network and local storage locations, and many cloud or remote storage locations, including:
- Amazon S3 and any cloud storage that is compatible with S3
- Azure Blob Storage
- Google Cloud Storage platform
Kopia uses content-addressable storage for snapshots:
- Snapshots are always incremental; data that is already included in snapshots is not re-uploaded to the repository. A file is only uploaded to the repository again if it is modified.
- Stored data is deduplicated; if multiple copies of the same file exist, only one of them is stored.
- If files are moved or renamed, Kopia can recognize that they have the same content and does not upload them again.

4.8.8.1. OADP integration with Kopia
OADP 1.3 supports Kopia as the backup mechanism for pod volume backup in addition to Restic. You must choose one or the other at installation by setting the uploaderType field in the DataProtectionApplication custom resource (CR). The possible values are restic or kopia. If you do not specify an uploaderType, OADP 1.3 defaults to using Kopia as the backup mechanism. The data is written to and read from a unified repository.
The following example shows a DataProtectionApplication CR configured for using Kopia:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
# ...

4.9. OADP restoring

4.9.1. Restoring applications
You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR.
You can create restore hooks to run commands in a container in a pod by editing the Restore CR. See Creating restore hooks.

4.9.1.1. Previewing resources before running backup and restore
OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations.
Prerequisites
You have installed the OADP Operator.
Procedure
To preview the resources included in the backup before running the actual backup, run the following command:
$ velero backup create <backup-name> --snapshot-volumes false 1
1 Specify the value of the --snapshot-volumes parameter as false.
To know more details about the backup resources, run the following command:
$ velero describe backup <backup_name> --details 1
1 Specify the name of the backup.
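When a dry-run backup of a whole namespace is too broad, you can narrow the preview with a label selector. This is a sketch; the label shown is an example, and the selector flag is standard Velero CLI behavior:
$ velero backup create <backup-name> --snapshot-volumes false --selector app=<label>
The describe step is the same as shown above.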
To preview the resources included in the restore before running the actual restore, run the following command:
$ velero restore create --from-backup <backup-name> 1
1 Specify the name of the backup created to review the backup resources.
Important
The velero restore create command creates restore resources in the cluster. You must delete the resources created as part of the restore, after you review the resources.
To know more details about the restore resources, run the following command:
$ velero describe restore <restore_name> --details 1
1 Specify the name of the restore.

4.9.1.2. Creating a Restore CR
You restore a Backup custom resource (CR) by creating a Restore CR.
Prerequisites
You must install the OpenShift API for Data Protection (OADP) Operator.
The DataProtectionApplication CR must be in a Ready state.
You must have a Velero Backup CR.
The persistent volume (PV) capacity must match the requested size at backup time. Adjust the requested size if needed.
Procedure
Create a Restore CR, as in the following example:
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore>
  namespace: openshift-adp
spec:
  backupName: <backup> 1
  includedResources: [] 2
  excludedResources:
  - nodes
  - events
  - events.events.k8s.io
  - backups.velero.io
  - restores.velero.io
  - resticrepositories.velero.io
  restorePVs: true 3
1 Name of the Backup CR.
2 Optional: Specify an array of resources to include in the restore process. Resources might be shortcuts (for example, po for pods) or fully-qualified. If unspecified, all resources are included.
3 Optional: The restorePVs parameter can be set to false to turn off restore of PersistentVolumes from VolumeSnapshot of Container Storage Interface (CSI) snapshots or from native snapshots when VolumeSnapshotLocation is configured.
Verify that the status of the Restore CR is Completed by entering the following command:
$ oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'
Verify that the backup resources have been restored by entering the following command:
$ oc get all -n <namespace> 1
1 Namespace that you backed up.
If you restore DeploymentConfig with volumes or if you use post-restore hooks, run the dc-post-restore.sh cleanup script by entering the following command:
$ bash dc-post-restore.sh <restore_name>
Note
During the restore process, the OADP Velero plug-ins scale down the DeploymentConfig objects and restore the pods as standalone pods. This is done to prevent the cluster from deleting the restored DeploymentConfig pods immediately on restore and to allow the restore and post-restore hooks to complete their actions on the restored pods. The cleanup script shown below removes these disconnected pods and scales any DeploymentConfig objects back up to the appropriate number of replicas.
Example 4.1. dc-post-restore.sh cleanup script
#!/bin/bash
set -e

# if sha256sum exists, use it to check the integrity of the file
if command -v sha256sum >/dev/null 2>&1; then
  CHECKSUM_CMD="sha256sum"
else
  CHECKSUM_CMD="shasum -a 256"
fi

label_name () {
    if [ "${#1}" -le "63" ]; then
        echo $1
        return
    fi
    sha=$(echo -n $1|$CHECKSUM_CMD)
    echo "${1:0:57}${sha:0:6}"
}

if [[ $# -ne 1 ]]; then
    echo "usage: ${BASH_SOURCE} restore-name"
    exit 1
fi

echo "restore: $1"
label=$(label_name $1)
echo "label: $label"

echo Deleting disconnected restore pods
oc delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=$label

for dc in $(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=$label \
    -o jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.metadata.annotations.oadp\.openshift\.io/original-replicas}{","}{.metadata.annotations.oadp\.openshift\.io/original-paused}{"\n"}')
do
    IFS=',' read -ra dc_arr <<< "$dc"
    if [ ${#dc_arr[0]} -gt 0 ]; then
        echo Found deployment ${dc_arr[0]}/${dc_arr[1]}, setting replicas: ${dc_arr[2]}, paused: ${dc_arr[3]}
        cat <<EOF | oc patch dc -n ${dc_arr[0]} ${dc_arr[1]} --patch-file /dev/stdin
spec:
  replicas: ${dc_arr[2]}
  paused: ${dc_arr[3]}
EOF
    fi
done

4.9.1.3. Creating restore hooks
You create restore hooks to run commands in a container in a pod by editing the Restore custom resource (CR).
You can create two types of restore hooks:
- An init hook adds an init container to a pod to perform setup tasks before the application container starts. If you restore a Restic backup, the restic-wait init container is added before the restore hook init container.
- An exec hook runs commands or scripts in a container of a restored pod.
Procedure
Add a hook to the spec.hooks block of the Restore CR, as in the following example:
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore>
  namespace: openshift-adp
spec:
  hooks:
    resources:
      - name: <hook_name>
        includedNamespaces:
        - <namespace> 1
        excludedNamespaces:
        - <namespace>
        includedResources:
        - pods 2
        excludedResources: []
        labelSelector: 3
          matchLabels:
            app: velero
            component: server
        postHooks:
        - init:
            initContainers:
            - name: restore-hook-init
              image: alpine:latest
              volumeMounts:
              - mountPath: /restores/pvc1-vm
                name: pvc1-vm
              command:
              - /bin/ash
              - -c
            timeout: 4
        - exec:
            container: <container> 5
            command:
            - /bin/bash 6
            - -c
            - "psql < /backup/backup.sql"
            waitTimeout: 5m 7
            execTimeout: 1m 8
            onError: Continue 9
1 Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces.
2 Currently, pods are the only supported resource that hooks can apply to.
3 Optional: This hook only applies to objects matching the label selector.
4 Optional: Timeout specifies the maximum length of time Velero waits for initContainers to complete.
5 Optional: If the container is not specified, the command runs in the first container in the pod.
6 The command that the exec hook runs; in this example, /bin/bash.
7 Optional: How long to wait for a container to become ready. This should be long enough for the container to start and for any preceding hooks in the same container to complete. If not set, the restore process waits indefinitely.
8 Optional: How long to wait for the commands to run. The default is 30s.
9 Allowed values for error handling are Fail and Continue:
- Continue: Only command failures are logged.
- Fail: No more restore hooks run in any container in any pod.
The status of the Restore CR will be PartiallyFailed . Important During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB and the postHook is terminated prematurely. This happens because, during the restore operation, the OpenShift controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which velero runs the FSB and the post restore hook. For more information about image stream triggers, see "Triggering updates on image stream changes". The workaround for this behavior is a two-step restore process: First, perform a restore excluding the Deployment resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --exclude-resources=deployment.apps After the first restore is successful, perform a second restore by including these resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --include-resources=deployment.apps Additional resources Triggering updates on image stream changes 4.10. OADP and ROSA 4.10.1. Backing up applications on ROSA clusters using OADP You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to back up and restore application data. ROSA is a fully-managed, turnkey application platform that allows you to deliver value to your customers by building and deploying applications. ROSA provides seamless integration with a wide range of Amazon Web Services (AWS) compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivery of differentiating experiences to your customers. You can subscribe to the service directly from your AWS account. After you create your clusters, you can operate your clusters with the OpenShift Container Platform web console or through Red Hat OpenShift Cluster Manager . You can also use ROSA with OpenShift APIs and command-line interface (CLI) tools. For additional information about ROSA installation, see Installing Red Hat OpenShift Service on AWS (ROSA) interactive walkthrough . Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API. This process is performed in the following two stages: Prepare AWS credentials Install the OADP Operator and give it an IAM role 4.10.1.1. Preparing AWS credentials for OADP An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation. Procedure Create the following environment variables by running the following commands: Important Change the cluster name to match your ROSA cluster, and ensure you are logged into the cluster as an administrator. Ensure that all fields are output correctly before continuing.
USD export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME="USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials" export SCRATCH="/tmp/USD{CLUSTER_NAME}/oadp" mkdir -p USD{SCRATCH} echo "Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}" 1 Replace my-cluster with your ROSA cluster name. On the AWS account, create an IAM policy to allow access to AWS S3: Check to see if the policy exists by running the following command: USD POLICY_ARN=USD(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text) 1 1 Replace RosaOadpVer1 with your policy name. Enter the following command to create the policy JSON file and then create the policy in ROSA: Note If the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the if statement intentionally skips the policy creation. USD if [[ -z "USD{POLICY_ARN}" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts", "ec2:DescribeSnapshots", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name "RosaOadpVer1" \ --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn \ --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp \ --output text) fi 1 SCRATCH is a name for a temporary directory created for storing the files.
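Note As an optional sanity check, you can confirm that the policy contains the expected permission statements before attaching it. Both of the following commands are standard AWS CLI calls; the --version-id v1 argument assumes that the policy was just created, so its default version is v1 : USD aws iam get-policy --policy-arn USD{POLICY_ARN} USD aws iam get-policy-version --policy-arn USD{POLICY_ARN} --version-id v1 The first command confirms that the policy exists and reports its DefaultVersionId . The second command prints the statement list so that you can compare it against the JSON file above.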
View the policy ARN by running the following command: USD echo USD{POLICY_ARN} Create an IAM role trust policy for the cluster: Create the trust policy file by running the following command: USD cat <<EOF > USD{SCRATCH}/trust-policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_ENDPOINT}:sub": [ "system:serviceaccount:openshift-adp:openshift-adp-controller-manager", "system:serviceaccount:openshift-adp:velero"] } } }] } EOF Create the role by running the following command: USD ROLE_ARN=USD(aws iam create-role --role-name \ "USD{ROLE_NAME}" \ --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json \ --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp \ --query Role.Arn --output text) View the role ARN by running the following command: USD echo USD{ROLE_ARN} Attach the IAM policy to the IAM role by running the following command: USD aws iam attach-role-policy --role-name "USD{ROLE_NAME}" \ --policy-arn USD{POLICY_ARN} 4.10.1.2. Installing the OADP Operator and providing the IAM role AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. Red Hat OpenShift Service on AWS (ROSA) with STS is the recommended credential mode for ROSA clusters. This document describes how to install OpenShift API for Data Protection (OADP) on ROSA with AWS STS. Important Restic is unsupported. Kopia file system backup (FSB) is supported when backing up file systems that do not have Container Storage Interface (CSI) snapshotting support. Example file systems include the following: Amazon Elastic File System (EFS) Network File System (NFS) emptyDir volumes Local volumes For backing up volumes, OADP on ROSA with AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots. In an Amazon ROSA cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported. The Data Mover feature is not currently supported in ROSA clusters. You can use native AWS S3 tools for moving data. Prerequisites An OpenShift Container Platform ROSA cluster with the required access and tokens. For instructions, see the procedure Preparing AWS credentials for OADP . If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN , for each cluster. Procedure Create an OpenShift Container Platform secret from your AWS token file by entering the following commands: Create the credentials file: USD cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF 1 Replace <aws_region> with the AWS region to use for the STS endpoint. Create a namespace for OADP: USD oc create namespace openshift-adp Create the OpenShift Container Platform secret: USD oc -n openshift-adp create secret generic cloud-credentials \ --from-file=USD{SCRATCH}/credentials Note In OpenShift Container Platform versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO).
In this workflow, you do not need to create the above secret, you only need to supply the role ARN during the installation of OLM-managed operators using the OpenShift Container Platform web console, for more information see Installing from OperatorHub using the web console . The preceding secret is created automatically by CCO. Install the OADP Operator: In the OpenShift Container Platform web console, browse to Operators OperatorHub . Search for the OADP Operator . In the role_ARN field, paste the role_arn that you created previously and click Install . Create AWS cloud storage using your AWS credentials by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF Check your application's storage default storage class by entering the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h Get the storage class by running the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h Note The following storage classes will work: gp3-csi gp2-csi gp3 gp2 If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration. Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored: If you are using only CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF 1 ROSA supports internal image backup. Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The type of uploader. The possible values are restic or kopia . The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field. 
If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: "true" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF 1 ROSA supports internal image backup. Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The credentialsFile field is the mounted location of the bucket credential on the pod. 4 The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket. 5 Use the profile name set in the AWS credentials file. 6 Specify region as your AWS region. This must be the same as the cluster region. You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications . Important The enable parameter of restic is set to false in this configuration, because OADP does not support Restic in ROSA environments. If you use OADP 1.2, replace this configuration: nodeAgent: enable: false uploaderType: restic with the following configuration: restic: enable: false If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration. 4.10.1.3. Updating the IAM role ARN in the OADP Operator subscription While installing the OADP Operator on a ROSA Security Token Service (STS) cluster, if you provide an incorrect IAM role Amazon Resource Name (ARN), the openshift-adp-controller pod gives an error. The credential requests that are generated contain the wrong IAM role ARN. To update the credential requests object with the correct IAM role ARN, you can edit the OADP Operator subscription and patch the IAM role ARN with the correct value. By editing the OADP Operator subscription, you do not have to uninstall and reinstall OADP to update the IAM role ARN. Prerequisites You have a Red Hat OpenShift Service on AWS STS cluster with the required access and tokens. You have installed OADP on the ROSA STS cluster. Procedure To verify that the OADP subscription has the wrong IAM role ARN environment variable set, run the following command: USD oc get sub -o yaml redhat-oadp-operator Example subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: "2025-01-15T07:18:31Z" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: "" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: "77363" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2 1 Verify the value of ROLEARN you want to update. 
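Note If you only need the current ROLEARN value rather than the full manifest, a JSONPath filter expression keeps the output to a single line. This is an optional convenience that relies only on standard oc JSONPath support, not a required step: USD oc get sub redhat-oadp-operator -n openshift-adp -o jsonpath='{.spec.config.env[?(@.name=="ROLEARN")].value}'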
Update the ROLEARN field of the subscription with the correct role ARN by running the following command: USD oc patch subscription redhat-oadp-operator -p '{"spec": {"config": {"env": [{"name": "ROLEARN", "value": "<role_arn>"}]}}}' --type='merge' where: <role_arn> Specifies the IAM role ARN to be updated. For example, arn:aws:iam::160... ..6956:role/oadprosa... ..8wlf . Verify that the secret object is updated with correct role ARN value by running the following command: USD oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d Example output [default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token Configure the DataProtectionApplication custom resource (CR) manifest file as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift 1 Specify the CloudStorage CR. Create the DataProtectionApplication CR by running the following command: USD oc create -f <dpa_manifest_file> Verify that the DataProtectionApplication CR is reconciled and the status is set to "True" by running the following command: USD oc get dpa -n openshift-adp -o yaml Example DataProtectionApplication apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... status: conditions: - lastTransitionTime: "2023-07-31T04:48:12Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled Verify that the BackupStorageLocation CR is in an available state by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example BackupStorageLocation NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true Additional resources Installing from OperatorHub using the web console . Backing up applications 4.10.1.4. Example: Backing up workload on OADP ROSA STS, with an optional cleanup 4.10.1.4.1. Performing a backup with OADP and ROSA STS The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) STS. Either Data Protection Application (DPA) configuration will work. Create a workload to back up by running the following commands: USD oc create namespace hello-world USD oc new-app -n hello-world --image=docker.io/openshift/hello-openshift Expose the route by running the following command: USD oc expose service/hello-openshift -n hello-world Check that the application is working by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! 
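Note If the curl command does not return a response immediately, the pod or the route might still be initializing. The following optional checks, shown here as a sketch that uses only standard oc queries, can help you confirm readiness: USD oc get pods -n hello-world USD oc get route hello-openshift -n hello-world -o jsonpath='{.status.ingress[0].conditions[0].status}' The pod should report 1/1 Running , and an admitted route reports True . Retry the curl command after both conditions are met.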
Back up the workload by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF Wait until the backup is completed and then run the following command: USD watch "oc -n openshift-adp get backup hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:20:44Z", "expiration": "2022-10-07T22:20:22Z", "formatVersion": "1.1.0", "phase": "Completed", "progress": { "itemsBackedUp": 58, "totalItems": 58 }, "startTimestamp": "2022-09-07T22:20:22Z", "version": 1 } Delete the demo workload by running the following command: USD oc delete ns hello-world Restore the workload from the backup by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF Wait for the Restore to finish by running the following command: USD watch "oc -n openshift-adp get restore hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:25:47Z", "phase": "Completed", "progress": { "itemsRestored": 38, "totalItems": 38 }, "startTimestamp": "2022-09-07T22:25:28Z", "warnings": 9 } Check that the workload is restored by running the following command: USD oc -n hello-world get pods Example output NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s Check the JSONPath by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Note For troubleshooting tips, see the OADP team's troubleshooting documentation . 4.10.1.4.2. Cleaning up a cluster after a backup with OADP and ROSA STS If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions. 
Procedure Delete the workload by running the following command: USD oc delete ns hello-world Delete the Data Protection Application (DPA) by running the following command: USD oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa Delete the cloud storage by running the following command: USD oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp Warning If this command hangs, you might need to delete the finalizer by running the following command: USD oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge If the Operator is no longer required, remove it by running the following command: USD oc -n openshift-adp delete subscription oadp-operator Remove the namespace from the Operator: USD oc delete ns openshift-adp If the backup and restore resources are no longer required, remove them from the cluster by running the following command: USD oc delete backups.velero.io hello-world To delete backup, restore and remote objects in AWS S3 run the following command: USD velero backup delete hello-world If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command: USD for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done Delete the AWS S3 bucket by running the following commands: USD aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive USD aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp Detach the policy from the role by running the following command: USD aws iam detach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn "USD{POLICY_ARN}" Delete the role by running the following command: USD aws iam delete-role --role-name "USD{ROLE_NAME}" 4.11. OADP and AWS STS 4.11.1. Backing up applications on AWS STS using OADP You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure AWS for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. You can install OADP on an AWS Security Token Service (STS) (AWS STS) cluster manually. Amazon AWS provides AWS STS as a web service that enables you to request temporary, limited-privilege credentials for users. You use STS to provide trusted users with temporary access to resources via API calls, your AWS console, or the AWS command line interface (CLI). Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API. This process is performed in the following two stages: Prepare AWS credentials. Install the OADP Operator and give it an IAM role. 4.11.1.1. Preparing AWS STS credentials for OADP An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation. Prepare the AWS credentials by using the following procedure. 
Procedure Define the cluster_name environment variable by running the following command: USD export CLUSTER_NAME= <AWS_cluster_name> 1 1 The variable can be set to any value. Retrieve all of the details of the cluster such as the AWS_ACCOUNT_ID, OIDC_ENDPOINT by running the following command: USD export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME="USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials" Create a temporary directory to store all of the files by running the following command: USD export SCRATCH="/tmp/USD{CLUSTER_NAME}/oadp" mkdir -p USD{SCRATCH} Display all of the gathered details by running the following command: USD echo "Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}" On the AWS account, create an IAM policy to allow access to AWS S3: Check to see if the policy exists by running the following commands: USD export POLICY_NAME="OadpVer1" 1 1 The variable can be set to any value. USD POLICY_ARN=USD(aws iam list-policies --query "Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}" --output text) Enter the following command to create the policy JSON file and then create the policy: Note If the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the if statement intentionally skips the policy creation. USD if [[ -z "USD{POLICY_ARN}" ]]; then cat << EOF > USD{SCRATCH}/policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts", "ec2:DescribeSnapshots", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME \ --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn \ --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp \ --output text) 1 fi 1 SCRATCH is a name for a temporary directory created for storing the files. 
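Note Because the policy document is generated from a heredoc, a stray edit can silently break the JSON and cause the aws iam create-policy call to fail. As an optional syntax check, you can validate the file with jq , which this document already uses elsewhere: USD jq . USD{SCRATCH}/policy.json > /dev/null && echo "policy.json is valid JSON" If jq reports a parse error, correct the file and rerun the creation step.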
View the policy ARN by running the following command: USD echo USD{POLICY_ARN} Create an IAM role trust policy for the cluster: Create the trust policy file by running the following command: USD cat <<EOF > USD{SCRATCH}/trust-policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_ENDPOINT}:sub": [ "system:serviceaccount:openshift-adp:openshift-adp-controller-manager", "system:serviceaccount:openshift-adp:velero"] } } }] } EOF Create an IAM role trust policy for the cluster by running the following command: USD ROLE_ARN=USD(aws iam create-role --role-name \ "USD{ROLE_NAME}" \ --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json \ --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text) View the role ARN by running the following command: USD echo USD{ROLE_ARN} Attach the IAM policy to the IAM role by running the following command: USD aws iam attach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn USD{POLICY_ARN} 4.11.1.1.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. 4.11.1.2. Installing the OADP Operator and providing the IAM role AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. This document describes how to install OpenShift API for Data Protection (OADP) on an AWS STS cluster manually. Important Restic and Kopia are not supported in the OADP AWS STS environment. Verify that the Restic and Kopia node agent is disabled. For backing up volumes, OADP on AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots. In an AWS cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported. The Data Mover feature is not currently supported in AWS STS clusters. You can use native AWS S3 tools for moving data. Prerequisites An OpenShift Container Platform AWS STS cluster with the required access and tokens. For instructions, see the procedure Preparing AWS credentials for OADP . 
If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN , for each cluster. Procedure Create an OpenShift Container Platform secret from your AWS token file by entering the following commands: Create the credentials file: USD cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF Create a namespace for OADP: USD oc create namespace openshift-adp Create the OpenShift Container Platform secret: USD oc -n openshift-adp create secret generic cloud-credentials \ --from-file=USD{SCRATCH}/credentials Note In OpenShift Container Platform versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO). In this workflow, you do not need to create the above secret, you only need to supply the role ARN during the installation of OLM-managed operators using the OpenShift Container Platform web console, for more information see Installing from OperatorHub using the web console . The preceding secret is created automatically by CCO. Install the OADP Operator: In the OpenShift Container Platform web console, browse to Operators OperatorHub . Search for the OADP Operator . In the role_ARN field, paste the role_arn that you created previously and click Install . Create AWS cloud storage using your AWS credentials by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF Check your application's storage default storage class by entering the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h Get the storage class by running the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h Note The following storage classes will work: gp3-csi gp2-csi gp3 gp2 If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration. 
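Note If none of the listed classes is annotated as the cluster default, persistent volume claims that omit storageClassName can remain in a Pending state during a restore. The following is a hedged example of marking gp3-csi as the default by using the standard Kubernetes annotation; substitute the class that fits your cluster: USD oc patch storageclass gp3-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Remove the same annotation from any previously default class so that only one default class exists.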
Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored: If you are using only CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF 1 Set this field to false if you do not want to use image backup. If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: "true" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF 1 Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The credentialsFile field is the mounted location of the bucket credential on the pod. 4 The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket. 5 Use the profile name set in the AWS credentials file. 6 Specify region as your AWS region. This must be the same as the cluster region. You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications . Important If you use OADP 1.2, replace this configuration: nodeAgent: enable: false uploaderType: restic with the following configuration: restic: enable: false If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration. Additional resources Installing from OperatorHub using the web console Backing up applications 4.11.1.3. Backing up workload on OADP AWS STS, with an optional cleanup 4.11.1.3.1. Performing a backup with OADP and AWS STS The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) (AWS STS). Either Data Protection Application (DPA) configuration will work. 
Create a workload to back up by running the following commands: USD oc create namespace hello-world USD oc new-app -n hello-world --image=docker.io/openshift/hello-openshift Expose the route by running the following command: USD oc expose service/hello-openshift -n hello-world Check that the application is working by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Back up the workload by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF Wait until the backup has completed and then run the following command: USD watch "oc -n openshift-adp get backup hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:20:44Z", "expiration": "2022-10-07T22:20:22Z", "formatVersion": "1.1.0", "phase": "Completed", "progress": { "itemsBackedUp": 58, "totalItems": 58 }, "startTimestamp": "2022-09-07T22:20:22Z", "version": 1 } Delete the demo workload by running the following command: USD oc delete ns hello-world Restore the workload from the backup by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF Wait for the Restore to finish by running the following command: USD watch "oc -n openshift-adp get restore hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:25:47Z", "phase": "Completed", "progress": { "itemsRestored": 38, "totalItems": 38 }, "startTimestamp": "2022-09-07T22:25:28Z", "warnings": 9 } Check that the workload is restored by running the following command: USD oc -n hello-world get pods Example output NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s Check the JSONPath by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Note For troubleshooting tips, see the OADP team's troubleshooting documentation . 4.11.1.3.2. Cleaning up a cluster after a backup with OADP and AWS STS If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions. 
Procedure Delete the workload by running the following command: USD oc delete ns hello-world Delete the Data Protection Application (DPA) by running the following command: USD oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa Delete the cloud storage by running the following command: USD oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp Important If this command hangs, you might need to delete the finalizer by running the following command: USD oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge If the Operator is no longer required, remove it by running the following command: USD oc -n openshift-adp delete subscription oadp-operator Remove the namespace from the Operator by running the following command: USD oc delete ns openshift-adp If the backup and restore resources are no longer required, remove them from the cluster by running the following command: USD oc delete backups.velero.io hello-world To delete backup, restore and remote objects in AWS S3, run the following command: USD velero backup delete hello-world If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command: USD for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done Delete the AWS S3 bucket by running the following commands: USD aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive USD aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp Detach the policy from the role by running the following command: USD aws iam detach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn "USD{POLICY_ARN}" Delete the role by running the following command: USD aws iam delete-role --role-name "USD{ROLE_NAME}" 4.12. OADP and 3scale 4.12.1. Backing up and restoring 3scale by using OADP With Red Hat 3scale API Management (APIM), you can manage your APIs for internal or external users. Share, secure, distribute, control, and monetize your APIs on an infrastructure platform built with performance, customer control, and future growth in mind. You can deploy 3scale components on-premise, in the cloud, as a managed service, or in any combination based on your requirement. Note In this example, the non-service affecting approach is used to back up and restore 3scale on-cluster storage by using the OpenShift API for Data Protection (OADP) Operator. Additionally, ensure that you are restoring 3scale on the same cluster where it was backed up from. If you want to restore 3scale on a different cluster, ensure that both clusters are using the same custom domain. Prerequisites You installed and configured Red Hat 3scale. For more information, see Red Hat 3scale API Management . 4.12.1.1. Creating the Data Protection Application You can create a Data Protection Application (DPA) custom resource (CR) for 3scale. For more information on DPA, see "Installing the Data Protection Application". 
Procedure Create a YAML file with the following configuration: Example dpa.yaml file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa_sample namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift - aws - csi resourceTimeout: 10m nodeAgent: enable: true uploaderType: kopia backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 1 prefix: <prefix> 2 config: region: <region> 3 profile: "default" s3ForcePathStyle: "true" s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 1 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 2 Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes. 3 Specify a region for backup storage location. 4 Specify the URL of the object store that you are using to store backups. Create the DPA CR by running the following command: USD oc create -f dpa.yaml steps Back up the 3scale Operator. Additional resources Installing the Data Protection Application 4.12.1.2. Backing up the 3scale Operator You can back up the Operator resources, and Secret and APIManager custom resources (CR). For more information, see "Creating a Backup CR". Prerequisites You created the Data Protection Application (DPA). Procedure Back up the Operator resources, such as operatorgroup , namespaces , and subscriptions , by creating a YAML file with the following configuration: Example backup.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: operator-install-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale 1 includedResources: - operatorgroups - subscriptions - namespaces itemOperationTimeout: 1h0m0s snapshotMoveData: false ttl: 720h0m0s 1 Namespace where the 3scale Operator is installed. Note You can also back up and restore ReplicationControllers , Deployment , and Pod objects to ensure that all manually set environments are backed up and restored. This does not affect the flow of restoration. Create a backup CR by running the following command: USD oc create -f backup.yaml Back up the Secret CR by creating a YAML file with the following configuration: Example backup-secret.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-secrets namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - secrets itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s Create the backup CR for the Secret resources by running the following command: USD oc create -f backup-secret.yaml Back up the APIManager CR by creating a YAML file with the following configuration: Example backup-apimanager.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-apim namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - apimanagers itemOperationTimeout: 1h0m0s snapshotMoveData: false snapshotVolumes: false storageLocation: ts-dpa-1 ttl: 720h0m0s volumeSnapshotLocations: - ts-dpa-1 Create the backup CR for the APIManager resource by running the following command: USD oc create -f backup-apimanager.yaml steps Back up the mysql database. Additional resources Creating a Backup CR 4.12.1.3.
Backing up the mysql database You can back up the mysql database by creating and attaching a persistent volume claim (PVC) to include the dumped data in the specified path. Prerequisites You have backed up the 3scale operator. Procedure Create a YAML file with the following configuration for adding an additional PVC: Example ts_pvc.yaml file kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-claim namespace: threescale spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3-csi volumeMode: Filesystem Create the additional PVC by running the following command: USD oc create -f ts_pvc.yaml Attach the PVC to the system database pod by editing the system database deployment to use the mysql dump: USD oc edit deployment system-mysql -n threescale volumeMounts: - name: example-claim mountPath: /var/lib/mysqldump/data - name: mysql-storage mountPath: /var/lib/mysql/data - name: mysql-extra-conf mountPath: /etc/my-extra.d - name: mysql-main-conf mountPath: /etc/my-extra ... serviceAccount: amp volumes: - name: example-claim persistentVolumeClaim: claimName: example-claim 1 ... 1 The PVC that contains the dumped data. Create a YAML file with the following configuration to back up the mysql database: Example mysql.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: mysql-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true hooks: resources: - name: dumpdb pre: - exec: command: - /bin/sh - -c - mysqldump -u USDMYSQL_USER --password=USDMYSQL_PASSWORD system --no-tablespaces > /var/lib/mysqldump/data/dump.sql 1 container: system-mysql onError: Fail timeout: 5m includedNamespaces: 2 - threescale includedResources: - deployment - pods - replicationControllers - persistentvolumeclaims - persistentvolumes itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component_element: mysql snapshotMoveData: false ttl: 720h0m0s 1 A directory where the data is backed up. 2 Resources to back up. Back up the mysql database by running the following command: USD oc create -f mysql.yaml Verification Verify that the mysql backup is completed by running the following command: USD oc get backups.velero.io mysql-backup Example output NAME STATUS CREATED NAMESPACE POD VOLUME UPLOADER TYPE STORAGE LOCATION AGE mysql-backup-4g7qn Completed 30s threescale system-mysql-2-9pr44 example-claim kopia ts-dpa-1 30s mysql-backup-smh85 Completed 23s threescale system-mysql-2-9pr44 mysql-storage kopia ts-dpa-1 30s steps Back up the back-end Redis database. 4.12.1.4. Backing up the back-end Redis database You can back up the Redis database by adding the required annotations and by listing which resources to back up using the includedResources parameter. Prerequisites You backed up the 3scale Operator. You backed up the mysql database. The Redis queues have been drained before performing the backup.
Procedure Edit the annotations on the backend-redis deployment by running the following command: USD oc edit deployment backend-redis -n threescale Add the following annotations: annotations: post.hook.backup.velero.io/command: >- ["/bin/bash", "-c", "redis-cli CONFIG SET auto-aof-rewrite-percentage 100"] pre.hook.backup.velero.io/command: >- ["/bin/bash", "-c", "redis-cli CONFIG SET auto-aof-rewrite-percentage 0"] Create a YAML file with the following configuration to back up the Redis database: Example redis-backup.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: redis-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true includedNamespaces: - threescale includedResources: - deployment - pods - replicationcontrollers - persistentvolumes - persistentvolumeclaims itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component: backend threescale_component_element: redis snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s Back up the Redis database by running the following command: USD oc create -f redis-backup.yaml Verification Verify that the Redis backup is completed by running the following command: USD oc get backups.velero.io redis-backup -o yaml steps Restore the Secrets and APIManager CRs. 4.12.1.5. Restoring the secrets and APIManager You can restore the Secrets and APIManager by using the following procedure. Prerequisites You backed up the 3scale Operator. You backed up the mysql and Redis databases. You are restoring the database on the same cluster where it was backed up. If it is on a different cluster, install and configure OADP with nodeAgent enabled on the destination cluster as it was on the source cluster. Procedure Delete the 3scale Operator custom resource definitions (CRDs) along with the threescale namespace by running the following command: USD oc delete project threescale Example output "threescale" project deleted successfully Create a YAML file with the following configuration to restore the 3scale Operator: Example restore.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: operator-installation-restore namespace: openshift-adp spec: backupName: operator-install-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s Restore the 3scale Operator by running the following command: USD oc create -f restore.yaml Manually create the s3-credentials Secret object by running the following command: USD oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: s3-credentials namespace: threescale stringData: AWS_ACCESS_KEY_ID: <ID_123456> 1 AWS_SECRET_ACCESS_KEY: <ID_98765544> 2 AWS_BUCKET: <mybucket.example.com> 3 AWS_REGION: <us-east-1> 4 type: Opaque EOF 1 Replace <ID_123456> with your AWS credentials ID. 2 Replace <ID_98765544> with your AWS credentials KEY. 3 Replace <mybucket.example.com> with your target bucket name. 4 Replace <us-east-1> with the AWS region of your bucket.
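Note Optionally, you can confirm that the s3-credentials secret exists in the threescale namespace with the expected keys before continuing. This check is a convenience and is not part of the restore flow itself: USD oc get secret s3-credentials -n threescale -o jsonpath='{.data.AWS_BUCKET}' | base64 -d The command prints the bucket name that you supplied. An empty result means that the key is missing or misspelled.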
Scale down the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale Create a YAML file with the following configuration to restore the Secrets: Example restore-secret.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-secrets namespace: openshift-adp spec: backupName: operator-resources-secrets excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s Restore the Secrets by running the following command: USD oc create -f restore-secret.yaml Create a YAML file with the following configuration to restore APIManager: Example restore-apimanager.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-apim namespace: openshift-adp spec: backupName: operator-resources-apim excludedResources: 1 - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s 1 The resources that you do not want to restore. Restore the APIManager by running the following command: USD oc create -f restore-apimanager.yaml Scale up the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale steps Restore the mysql database. 4.12.1.6. Restoring the mysql database Restoring the mysql database re-creates the following resources: The Pod , ReplicationController , and Deployment objects. The additional persistent volumes (PVs) and associated persistent volume claims (PVCs). The mysql dump, which the example-claim PVC contains. Warning Do not delete the default PV and PVC associated with the database. If you do, your backups are deleted. Prerequisites You restored the Secret and APIManager custom resources (CR).
Procedure Scale down the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale Example output: deployment.apps/threescale-operator-controller-manager-v2 scaled Create the following script to scale down the 3scale Operator: USD vi ./scaledowndeployment.sh Example scaledowndeployment.sh script: for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do oc scale deployment/USDdeployment --replicas=0 -n threescale done Scale down all the 3scale deployment components by running the following script: USD ./scaledowndeployment.sh Example output: deployment.apps.openshift.io/apicast-production scaled deployment.apps.openshift.io/apicast-staging scaled deployment.apps.openshift.io/backend-cron scaled deployment.apps.openshift.io/backend-listener scaled deployment.apps.openshift.io/backend-redis scaled deployment.apps.openshift.io/backend-worker scaled deployment.apps.openshift.io/system-app scaled deployment.apps.openshift.io/system-memcache scaled deployment.apps.openshift.io/system-mysql scaled deployment.apps.openshift.io/system-redis scaled deployment.apps.openshift.io/system-searchd scaled deployment.apps.openshift.io/system-sidekiq scaled deployment.apps.openshift.io/zync scaled deployment.apps.openshift.io/zync-database scaled deployment.apps.openshift.io/zync-que scaled Delete the system-mysql Deployment object by running the following command: USD oc delete deployment system-mysql -n threescale Example output: Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io "system-mysql" deleted Create the following YAML file to restore the mysql database: Example restore-mysql.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: restore-mysql namespace: openshift-adp spec: backupName: mysql-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io - resticrepositories.velero.io hooks: resources: - name: restoreDB postHooks: - exec: command: - /bin/sh - '-c' - > sleep 30; mysql -h 127.0.0.1 -D system -u root --password=USDMYSQL_ROOT_PASSWORD < /var/lib/mysqldump/data/dump.sql 1 container: system-mysql execTimeout: 80s onError: Fail waitTimeout: 5m itemOperationTimeout: 1h0m0s restorePVs: true 1 A path where the data is restored from.
Restore the mysql database by running the following command: USD oc create -f restore-mysql.yaml Verification Verify that the PodVolumeRestore restore is completed by running the following command: USD oc get podvolumerestores.velero.io -n openshift-adp Example output: NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m Verify that the additional PVC has been restored by running the following command: USD oc get pvc -n threescale Example output: NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m steps Restore the back-end Redis database. 4.12.1.7. Restoring the back-end Redis database You can restore the back-end Redis database by deleting the deployment and specifying which resources you do not want to restore. Prerequisites You restored the Secret and APIManager custom resources. You restored the mysql database. Procedure Delete the backend-redis deployment by running the following command: USD oc delete deployment backend-redis -n threescale Example output: Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io "backend-redis" deleted Create a YAML file with the following configuration to restore the Redis database: Example restore-backend.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: restore-backend namespace: openshift-adp spec: backupName: redis-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 1h0m0s restorePVs: true Restore the Redis database by running the following command: USD oc create -f restore-backend.yaml Verification Verify that the PodVolumeRestore restore is completed by running the following command: USD oc get podvolumerestores.velero.io -n openshift-adp Example output: NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m steps Scale the 3scale Operator and deployment. 4.12.1.8. Scaling up the 3scale Operator and deployment You can scale up the 3scale Operator and any deployment that was manually scaled down. After a few minutes, 3scale installation should be fully functional, and its state should match the backed-up state. Prerequisites Ensure that there are no scaled up deployments or no extra pods running. There might be some system-mysql or backend-redis pods running detached from deployments after restoration, which can be removed after the restoration is successful. 
Procedure

Scale up the 3scale Operator by running the following command:

$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale

Ensure that the 3scale Operator was deployed by running the following command:

$ oc get deployment -n threescale

Scale up the deployments by executing the following script:

$ ./scaledeployment.sh

Get the 3scale-admin route to log in to the 3scale UI by running the following command:

$ oc get routes -n threescale

Example output

NAME                         HOST/PORT                                                               PATH   SERVICES             PORT      TERMINATION     WILDCARD
backend                      backend-3scale.apps.custom-cluster-name.openshift.com                         backend-listener     http      edge/Allow      None
zync-3scale-api-b4l4d        api-3scale-apicast-production.apps.custom-cluster-name.openshift.com          apicast-production   gateway   edge/Redirect   None
zync-3scale-api-b6sns        api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com             apicast-staging      gateway   edge/Redirect   None
zync-3scale-master-7sc4j     master.apps.custom-cluster-name.openshift.com                                 system-master        http      edge/Redirect   None
zync-3scale-provider-7r2nm   3scale-admin.apps.custom-cluster-name.openshift.com                           system-provider      http      edge/Redirect   None
zync-3scale-provider-mjxlb   3scale.apps.custom-cluster-name.openshift.com                                 system-developer     http      edge/Redirect   None

In this example, 3scale-admin.apps.custom-cluster-name.openshift.com is the 3scale-admin URL.

Use the URL from this output to log in to the 3scale admin portal as an administrator. You can verify that the existing data is available before trying to create a backup.

4.13. OADP Data Mover

4.13.1. About the OADP Data Mover

OpenShift API for Data Protection (OADP) includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and write to the unified repository.

OADP supports CSI snapshots on the following:

Red Hat OpenShift Data Foundation
Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API

4.13.1.1. Data Mover support

The OADP built-in Data Mover, which was introduced in OADP 1.3 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads.

Supported

The Data Mover backups taken with OADP 1.3 can be restored using OADP 1.3, 1.4, and later. This is supported.

Not supported

Backups taken with OADP 1.1 or OADP 1.2 using the Data Mover feature cannot be restored using OADP 1.3 and later. Therefore, it is not supported. OADP 1.1 and OADP 1.2 are no longer supported. The Data Mover feature in OADP 1.1 or OADP 1.2 was a Technology Preview and was never supported. Data Mover backups taken with OADP 1.1 or OADP 1.2 cannot be restored on later versions of OADP.

4.13.1.2. Enabling the built-in Data Mover

To enable the built-in Data Mover, you must include the CSI plugin and enable the node agent in the DataProtectionApplication custom resource (CR). The node agent is a Kubernetes daemonset that hosts data movement modules. These include the Data Mover controller, uploader, and the repository.
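After you apply a manifest like the example that follows, you can confirm that the node agent rolled out on every node. This is a minimal sketch; the node-agent daemonset name and label, and the default openshift-adp install namespace, are assumptions based on OADP 1.3 defaults:

$ oc get daemonset node-agent -n openshift-adp
$ oc get pods -n openshift-adp -l name=node-agent

Each schedulable node should run one node-agent pod.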
Example DataProtectionApplication manifest

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
spec:
  configuration:
    nodeAgent:
      enable: true 1
      uploaderType: kopia 2
    velero:
      defaultPlugins:
        - openshift
        - aws
        - csi 3
      defaultSnapshotMoveData: true
      defaultVolumesToFSBackup: 4
      featureFlags:
        - EnableCSI
# ...

1 The flag to enable the node agent.
2 The type of uploader. The possible values are restic or kopia . The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field.
3 The CSI plugin included in the list of default plugins.
4 In OADP 1.3.1 and later, set to true if you use Data Mover only for volumes that opt out of fs-backup . Set to false if you use Data Mover by default for volumes.

4.13.1.3. Built-in Data Mover controller and custom resource definitions (CRDs)

The built-in Data Mover feature introduces three new API objects defined as CRDs for managing backup and restore:

DataDownload : Represents a data download of a volume snapshot. The CSI plugin creates one DataDownload object per volume to be restored. The DataDownload CR includes information about the target volume, the specified Data Mover, the progress of the current data download, the specified backup repository, and the result of the current data download after the process is complete.

DataUpload : Represents a data upload of a volume snapshot. The CSI plugin creates one DataUpload object per CSI snapshot. The DataUpload CR includes information about the specified snapshot, the specified Data Mover, the specified backup repository, the progress of the current data upload, and the result of the current data upload after the process is complete.

BackupRepository : Represents and manages the lifecycle of the backup repositories. OADP creates a backup repository per namespace when the first CSI snapshot backup or restore for a namespace is requested.

4.13.1.4. About incremental backup support

OADP supports incremental backups of block and Filesystem persistent volumes for both containerized and OpenShift Virtualization workloads. The following tables summarize the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover:

Table 4.6. OADP backup support matrix for containerized workloads

Volume mode   FSB - Restic   FSB - Kopia    CSI     CSI Data Mover
Filesystem    S [1], I [2]   S [1], I [2]   S [1]   S [1], I [2]
Block         N [3]          N [3]          S [1]   S [1], I [2]

Table 4.7. OADP backup support matrix for OpenShift Virtualization workloads

Volume mode   FSB - Restic   FSB - Kopia    CSI     CSI Data Mover
Filesystem    N [3]          N [3]          S [1]   S [1], I [2]
Block         N [3]          N [3]          S [1]   S [1], I [2]

1. Backup supported
2. Incremental backup supported
3. Not supported

Note

The CSI Data Mover backups use Kopia regardless of uploaderType .

4.13.2. Backing up and restoring CSI snapshots data movement

You can back up and restore persistent volumes by using the OADP 1.3 Data Mover.

4.13.2.1. Backing up persistent volumes with CSI snapshots

You can use the OADP Data Mover to back up Container Storage Interface (CSI) volume snapshots to a remote object store.

Prerequisites

You have access to the cluster with the cluster-admin role.
You have installed the OADP Operator.
You have included the CSI plugin and enabled the node agent in the DataProtectionApplication custom resource (CR).
You have an application with persistent volumes running in a separate namespace.
You have added the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR.

Procedure

Create a YAML file for the Backup object, as in the following example:

Example Backup CR

kind: Backup
apiVersion: velero.io/v1
metadata:
  name: backup
  namespace: openshift-adp
spec:
  csiSnapshotTimeout: 10m0s
  defaultVolumesToFsBackup: 1
  includedNamespaces:
    - mysql-persistent
  itemOperationTimeout: 4h0m0s
  snapshotMoveData: true 2
  storageLocation: default
  ttl: 720h0m0s 3
  volumeSnapshotLocations:
    - dpa-sample-1
# ...

1 Set to true if you use Data Mover only for volumes that opt out of fs-backup . Set to false if you use Data Mover by default for volumes.
2 Set to true to enable movement of CSI snapshots to remote object storage.
3 The ttl field defines the retention time of the created backup and the backed-up data. For example, if you are using Restic as the backup tool, the backed-up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. However, storing this data consumes more space in the target backup locations, and additional storage is consumed by frequent backups, which are created even before other completed backups have expired.

Note

If you format the volume by using the XFS filesystem and the volume is at 100% capacity, the backup fails with a no space left on device error. For example:

Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ \
kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ \
pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: \
no space left on device

In this scenario, consider resizing the volume or using a different filesystem type, for example, ext4 , so that the backup completes successfully.

Apply the manifest:

$ oc create -f backup.yaml

A DataUpload CR is created after the snapshot creation is complete.

Verification

Verify that the snapshot data is successfully transferred to the remote object store by monitoring the status.phase field of the DataUpload CR. Possible values are In Progress , Completed , Failed , or Canceled . The object store is configured in the backupLocations stanza of the DataProtectionApplication CR.
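While an upload is running, you can watch the phase transitions live instead of polling with the commands that follow; a minimal sketch, assuming the default openshift-adp namespace:

$ oc get datauploads -n openshift-adp -w

Press Ctrl+C to stop watching once the phase reaches Completed.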
Run the following command to get a list of all DataUpload objects:

$ oc get datauploads -A

Example output

NAMESPACE       NAME                  STATUS      STARTED   BYTES DONE   TOTAL BYTES   STORAGE LOCATION   AGE     NODE
openshift-adp   backup-test-1-sw76b   Completed   9m47s     108104082    108104082     dpa-sample-1       9m47s   ip-10-0-150-57.us-west-2.compute.internal
openshift-adp   mongo-block-7dtpf     Completed   14m       1073741824   1073741824    dpa-sample-1       14m     ip-10-0-150-57.us-west-2.compute.internal

Check the value of the status.phase field of the specific DataUpload object by running the following command:

$ oc get datauploads <dataupload_name> -o yaml

Example output

apiVersion: velero.io/v2alpha1
kind: DataUpload
metadata:
  name: backup-test-1-sw76b
  namespace: openshift-adp
spec:
  backupStorageLocation: dpa-sample-1
  csiSnapshot:
    snapshotClass: ""
    storageClass: gp3-csi
    volumeSnapshot: velero-mysql-fq8sl
  operationTimeout: 10m0s
  snapshotType: CSI
  sourceNamespace: mysql-persistent
  sourcePVC: mysql
status:
  completionTimestamp: "2023-11-02T16:57:02Z"
  node: ip-10-0-150-57.us-west-2.compute.internal
  path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount
  phase: Completed 1
  progress:
    bytesDone: 108104082
    totalBytes: 108104082
  snapshotID: 8da1c5febf25225f4577ada2aeb9f899
  startTimestamp: "2023-11-02T16:56:22Z"

1 Indicates that snapshot data is successfully transferred to the remote object store.

4.13.2.2. Restoring CSI volume snapshots

You can restore a volume snapshot by creating a Restore CR.

Note

You cannot restore Volsync backups from OADP 1.2 with the OADP 1.3 built-in Data Mover. It is recommended to do a file system backup of all of your workloads with Restic prior to upgrading to OADP 1.3.

Prerequisites

You have access to the cluster with the cluster-admin role.
You have an OADP Backup CR from which to restore the data.

Procedure

Create a YAML file for the Restore CR, as in the following example:

Example Restore CR

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore
  namespace: openshift-adp
spec:
  backupName: <backup>
# ...

Apply the manifest:

$ oc create -f restore.yaml

A DataDownload CR is created when the restore starts.

Verification

You can monitor the status of the restore process by checking the status.phase field of the DataDownload CR. Possible values are In Progress , Completed , Failed , or Canceled .

To get a list of all DataDownload objects, run the following command:

$ oc get datadownloads -A

Example output

NAMESPACE       NAME                   STATUS      STARTED   BYTES DONE   TOTAL BYTES   STORAGE LOCATION   AGE     NODE
openshift-adp   restore-test-1-sk7lg   Completed   7m11s     108104082    108104082     dpa-sample-1       7m11s   ip-10-0-150-57.us-west-2.compute.internal

Enter the following command to check the value of the status.phase field of the specific DataDownload object:

$ oc get datadownloads <datadownload_name> -o yaml

Example output

apiVersion: velero.io/v2alpha1
kind: DataDownload
metadata:
  name: restore-test-1-sk7lg
  namespace: openshift-adp
spec:
  backupStorageLocation: dpa-sample-1
  operationTimeout: 10m0s
  snapshotID: 8da1c5febf25225f4577ada2aeb9f899
  sourceNamespace: mysql-persistent
  targetVolume:
    namespace: mysql-persistent
    pv: ""
    pvc: mysql
status:
  completionTimestamp: "2023-11-02T17:01:24Z"
  node: ip-10-0-150-57.us-west-2.compute.internal
  phase: Completed 1
  progress:
    bytesDone: 108104082
    totalBytes: 108104082
  startTimestamp: "2023-11-02T17:00:52Z"

1 Indicates that the CSI snapshot data is successfully restored.

4.13.2.3.
Deletion policy for OADP 1.3 The deletion policy determines rules for removing data from a system, specifying when and how deletion occurs based on factors such as retention periods, data sensitivity, and compliance requirements. It manages data removal effectively while meeting regulations and preserving valuable information. 4.13.2.3.1. Deletion policy guidelines for OADP 1.3 Review the following deletion policy guidelines for the OADP 1.3: In OADP 1.3.x, when using any type of backup and restore methods, you can set the deletionPolicy field to Retain or Delete in the VolumeSnapshotClass custom resource (CR). 4.13.3. Overriding Kopia hashing, encryption, and splitter algorithms You can override the default values of Kopia hashing, encryption, and splitter algorithms by using specific environment variables in the Data Protection Application (DPA). 4.13.3.1. Configuring the DPA to override Kopia hashing, encryption, and splitter algorithms You can use an OpenShift API for Data Protection (OADP) option to override the default Kopia algorithms for hashing, encryption, and splitter to improve Kopia performance or to compare performance metrics. You can set the following environment variables in the spec.configuration.velero.podConfig.env section of the DPA: KOPIA_HASHING_ALGORITHM KOPIA_ENCRYPTION_ALGORITHM KOPIA_SPLITTER_ALGORITHM Prerequisites You have installed the OADP Operator. You have created the secret by using the credentials provided by the cloud provider. Note The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) apply only during the initial Kopia repository creation, and cannot be changed later. To use different Kopia algorithms, ensure that the object storage does not contain any Kopia repositories of backups. Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration. Procedure Configure the DPA with the environment variables for hashing, encryption, and splitter as shown in the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6 1 Enable the nodeAgent . 2 Specify the uploaderType as kopia . 3 Include the csi plugin. 4 Specify a hashing algorithm. For example, BLAKE3-256 . 5 Specify an encryption algorithm. For example, CHACHA20-POLY1305-HMAC-SHA256 . 6 Specify a splitter algorithm. For example, DYNAMIC-8M-RABINKARP . 4.13.3.2. Use case for overriding Kopia hashing, encryption, and splitter algorithms The use case example demonstrates taking a backup of an application by using Kopia environment variables for hashing, encryption, and splitter. You store the backup in an AWS S3 bucket. You then verify the environment variables by connecting to the Kopia repository. Prerequisites You have installed the OADP Operator. You have an AWS S3 bucket configured as the backup storage location. You have created the secret by using the credentials provided by the cloud provider. You have installed the Kopia client. You have an application with persistent volumes running in a separate namespace. 
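As the configuration note earlier in this section explains, the Kopia algorithms apply only when the repository is first created, so the target prefix in the bucket must not already contain a Kopia repository. Before running the procedure below, you can confirm that the prefix is empty; a minimal sketch, assuming the AWS CLI is installed and credentialed for the bucket:

$ aws s3 ls s3://<bucket_name>/velero/kopia/ --recursive

If the command returns any objects, configure a new object storage location or a unique prefix in the BSL instead.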
Procedure Configure the Data Protection Application (DPA) as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8 1 Specify a name for the DPA. 2 Specify the region for the backup storage location. 3 Specify the name of the default Secret object. 4 Specify the AWS S3 bucket name. 5 Include the csi plugin. 6 Specify the hashing algorithm as BLAKE3-256 . 7 Specify the encryption algorithm as CHACHA20-POLY1305-HMAC-SHA256 . 8 Specify the splitter algorithm as DYNAMIC-8M-RABINKARP . Create the DPA by running the following command: USD oc create -f <dpa_file_name> 1 1 Specify the file name of the DPA you configured. Verify that the DPA has reconciled by running the following command: USD oc get dpa -o yaml Create a backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. Create a backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. Verify that the backup completed by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Verification Connect to the Kopia repository by running the following command: USD kopia repository connect s3 \ --bucket=<bucket_name> \ 1 --prefix=velero/kopia/<application_namespace> \ 2 --password=static-passw0rd \ 3 --access-key="<aws_s3_access_key>" \ 4 --secret-access-key="<aws_s3_secret_access_key>" \ 5 1 Specify the AWS S3 bucket name. 2 Specify the namespace for the application. 3 This is the Kopia password to connect to the repository. 4 Specify the AWS S3 access key. 5 Specify the AWS S3 storage provider secret access key. Note If you are using a storage provider other than AWS S3, you will need to add --endpoint , the bucket endpoint URL parameter, to the command. Verify that Kopia uses the environment variables that are configured in the DPA for the backup by running the following command: USD kopia repository status Example output Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> # ... Storage type: s3 Storage capacity: unbounded Storage config: { "bucket": <bucket_name>, "prefix": "velero/kopia/<application_namespace>/", "endpoint": "s3.amazonaws.com", "accessKeyID": <access_key>, "secretAccessKey": "****************************************", "sessionToken": "" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3 # ... 4.13.3.3. Benchmarking Kopia hashing, encryption, and splitter algorithms You can run Kopia commands to benchmark the hashing, encryption, and splitter algorithms. 
Based on the benchmarking results, you can select the most suitable algorithm for your workload. In this procedure, you run the Kopia benchmarking commands from a pod on the cluster. The benchmarking results can vary depending on CPU speed, available RAM, disk speed, current I/O load, and so on.

Prerequisites

You have installed the OADP Operator.
You have an application with persistent volumes running in a separate namespace.
You have run a backup of the application with Container Storage Interface (CSI) snapshots.

Note

The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) apply only during the initial Kopia repository creation, and cannot be changed later. To use different Kopia algorithms, ensure that the object storage does not contain any Kopia repositories of backups. Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration.

Procedure

Configure the must-gather pod as shown in the following example. Make sure you are using the oadp-mustgather image for OADP version 1.3 and later.

Example pod configuration

apiVersion: v1
kind: Pod
metadata:
  name: oadp-mustgather-pod
  labels:
    purpose: user-interaction
spec:
  containers:
    - name: oadp-mustgather-container
      image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3
      command: ["sleep"]
      args: ["infinity"]

Note

The Kopia client is available in the oadp-mustgather image.

Create the pod by running the following command:

$ oc apply -f <pod_config_file_name> 1

1 Specify the name of the YAML file for the pod configuration.

Verify that the Security Context Constraints (SCC) on the pod is anyuid , so that Kopia can connect to the repository:

$ oc describe pod/oadp-mustgather-pod | grep scc

Example output

openshift.io/scc: anyuid

Open a remote shell session to the pod by running the following command:

$ oc -n openshift-adp rsh pod/oadp-mustgather-pod

Connect to the Kopia repository by running the following command:

sh-5.1# kopia repository connect s3 \
  --bucket=<bucket_name> \ 1
  --prefix=velero/kopia/<application_namespace> \ 2
  --password=static-passw0rd \ 3
  --access-key="<access_key>" \ 4
  --secret-access-key="<secret_access_key>" \ 5
  --endpoint=<bucket_endpoint> \ 6

1 Specify the object storage provider bucket name.
2 Specify the namespace for the application.
3 This is the Kopia password to connect to the repository.
4 Specify the object storage provider access key.
5 Specify the object storage provider secret access key.
6 Specify the bucket endpoint. You do not need to specify the bucket endpoint if you are using AWS S3 as the storage provider.

Note

This is an example command. The command can vary based on the object storage provider.
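The steps that follow benchmark hashing, encryption, and splitting, the three algorithm families that OADP lets you override. Kopia can also benchmark its compression algorithms in the same way, although OADP exposes no DPA environment variable for compression, so this is informational only. A minimal sketch, assuming any readable sample file on the pod; the command is standard Kopia CLI rather than part of the documented OADP procedure:

sh-5.1# kopia benchmark compression --data-file=/etc/os-release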
To benchmark the hashing algorithm, run the following command: sh-5.1# kopia benchmark hashing Example output Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256 To benchmark the encryption algorithm, run the following command: sh-5.1# kopia benchmark encryption Example output Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256 To benchmark the splitter algorithm, run the following command: sh-5.1# kopia benchmark splitter Example output splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 # ... FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # ... 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 4.14. Troubleshooting You can debug Velero custom resources (CRs) by using the OpenShift CLI tool or the Velero CLI tool . The Velero CLI tool provides more detailed logs and information. You can check installation issues , backup and restore CR issues , and Restic issues . 
You can collect logs and CR information by using the must-gather tool .

You can obtain the Velero CLI tool by:

Downloading the Velero CLI tool
Accessing the Velero binary in the Velero deployment in the cluster

4.14.1. Downloading the Velero CLI tool

You can download and install the Velero CLI tool by following the instructions on the Velero documentation page .

The page includes instructions for:

macOS by using Homebrew
GitHub
Windows by using Chocolatey

Prerequisites

You have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
You have installed kubectl locally.

Procedure

Open a browser and navigate to "Install the CLI" on the Velero website .
Follow the appropriate procedure for macOS, GitHub, or Windows.
Download the Velero version appropriate for your version of OADP and OpenShift Container Platform.

4.14.1.1. OADP-Velero-OpenShift Container Platform version relationship

OADP version   Velero version   OpenShift Container Platform version
1.1.0          1.9              4.9 and later
1.1.1          1.9              4.9 and later
1.1.2          1.9              4.9 and later
1.1.3          1.9              4.9 and later
1.1.4          1.9              4.9 and later
1.1.5          1.9              4.9 and later
1.1.6          1.9              4.11 and later
1.1.7          1.9              4.11 and later
1.2.0          1.11             4.11 and later
1.2.1          1.11             4.11 and later
1.2.2          1.11             4.11 and later
1.2.3          1.11             4.11 and later
1.3.0          1.12             4.10-4.15
1.3.1          1.12             4.10-4.15
1.3.2          1.12             4.10-4.15
1.4.0          1.14             4.14-4.18
1.4.1          1.14             4.14-4.18
1.4.2          1.14             4.14-4.18

4.14.2. Accessing the Velero binary in the Velero deployment in the cluster

You can use a shell command to access the Velero binary in the Velero deployment in the cluster.

Prerequisites

Your DataProtectionApplication custom resource has a status of Reconcile complete .

Procedure

Enter the following command to set the needed alias:

$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'

4.14.3. Debugging Velero resources with the OpenShift CLI tool

You can debug a failed backup or restore by checking Velero custom resources (CRs) and the Velero pod log with the OpenShift CLI tool.

Velero CRs

Use the oc describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR:

$ oc describe <velero_cr> <cr_name>

Velero pod logs

Use the oc logs command to retrieve the Velero pod logs:

$ oc logs pod/<velero>

Velero pod debug logs

You can specify the Velero log level in the DataProtectionApplication resource as shown in the following example.

Note

This option is available starting from OADP 1.0.3.

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero-sample
spec:
  configuration:
    velero:
      logLevel: warning

The following logLevel values are available:

trace
debug
info
warning
error
fatal
panic

It is recommended to use the info logLevel value for most logs.

4.14.4. Debugging Velero resources with the Velero CLI tool

You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool.
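If you created the velero alias shown in the earlier "Accessing the Velero binary" section, it can stand in for the full oc exec prefix in each of the commands that follow; for example:

$ velero backup describe <backup_name> --details
$ velero restore logs <restore_name>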
Syntax

Use the oc exec command to run a Velero CLI command:

$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  <backup_restore_cr> <command> <cr_name>

Example

$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql

Help option

Use the velero --help option to list all Velero CLI commands:

$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  --help

Describe command

Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR:

$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  <backup_restore_cr> describe <cr_name>

Example

$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql

The following types of restore errors and warnings are shown in the output of a velero describe request:

Velero : A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on
Cluster : A list of messages related to backing up or restoring cluster-scoped resources
Namespaces : A list of messages related to backing up or restoring resources stored in namespaces

One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed . Warnings do not lead to a change in the completion status.

Important

For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster.

If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads . If none of these are failed or still running, then volume data might have been fully restored.

Logs command

Use the velero logs command to retrieve the logs of a Backup or Restore CR:

$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  <backup_restore_cr> logs <cr_name>

Example

$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf

4.14.5. Pods crash or restart due to lack of memory or CPU

If a Velero or Restic pod crashes due to a lack of memory or CPU, you can set specific resource requests for either of those resources.

Additional resources

CPU and memory requirements

4.14.5.1. Setting resource requests for a Velero pod

You can use the configuration.velero.podConfig.resourceAllocations specification field in the oadp_v1alpha1_dpa.yaml file to set specific resource requests for a Velero pod.

Procedure

Set the cpu and memory resource requests in the YAML file:

Example Velero file

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
...
  configuration:
    velero:
      podConfig:
        resourceAllocations: 1
          requests:
            cpu: 200m
            memory: 256Mi

1 The resourceAllocations listed are for average usage.

4.14.5.2.
Setting resource requests for a Restic pod

You can use the configuration.restic.podConfig.resourceAllocations specification field to set specific resource requests for a Restic pod.

Procedure

Set the cpu and memory resource requests in the YAML file:

Example Restic file

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
...
  configuration:
    restic:
      podConfig:
        resourceAllocations: 1
          requests:
            cpu: 1000m
            memory: 16Gi

1 The resourceAllocations listed are for average usage.

Important

The values for the resource request fields must follow the same format as Kubernetes resource requirements. Also, if you do not specify configuration.velero.podConfig.resourceAllocations or configuration.restic.podConfig.resourceAllocations , the default resources specification for a Velero pod or a Restic pod is as follows:

requests:
  cpu: 500m
  memory: 128Mi

4.14.6. PodVolumeRestore fails to complete when StorageClass is NFS

The restore operation fails when there is more than one volume during an NFS restore by using Restic or Kopia . PodVolumeRestore either fails with the following error or keeps trying to restore before finally failing.

Error message

Velero: pod volume restore failed: data path restore failed: \
Failed to run kopia restore: Failed to copy snapshot data to the target: \
restore error: copy file: error creating file: \
open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: \
no such file or directory

Cause

The NFS mount path is not unique for the two volumes to restore. As a result, the velero lock files use the same file on the NFS server during the restore, causing the PodVolumeRestore to fail.

Solution

You can resolve this issue by setting up a unique pathPattern for each volume, while defining the StorageClass for nfs-subdir-external-provisioner in the deploy/class.yaml file. Use the following nfs-subdir-external-provisioner StorageClass example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" 1
  onDelete: delete

1 Specifies a template for creating a directory path by using PVC metadata such as labels, annotations, name, or namespace. To specify metadata, use ${.PVC.<metadata>} . For example, to name a folder <pvc-namespace>-<pvc-name> , use ${.PVC.namespace}-${.PVC.name} as pathPattern .

4.14.7. Issues with Velero and admission webhooks

Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plugin or make changes to how you restore the workload.

Typically, workloads with admission webhooks require you to create a resource of a specific kind first. This is especially true if your workload has child resources because admission webhooks typically block child resources.

For example, creating or restoring a top-level object such as service.serving.knative.dev typically creates child resources automatically. If you do this first, you will not need to use Velero to create and restore these resources. This avoids the problem of child resources being blocked by an admission webhook that Velero might use.

4.14.7.1. Restoring workarounds for Velero backups that use admission webhooks

This section describes the additional steps required to restore resources for several types of Velero backups that use admission webhooks.

4.14.7.1.1.
Restoring Knative resources

You might encounter problems using Velero to back up Knative resources that use admission webhooks. You can avoid such problems by restoring the top level Service resource first whenever you back up and restore Knative resources that use admission webhooks.

Procedure

Restore the top level service.serving.knative.dev Service resource:

$ velero restore <restore_name> \
  --from-backup=<backup_name> --include-resources \
  service.serving.knative.dev

4.14.7.1.2. Restoring IBM AppConnect resources

If you experience issues when you use Velero to restore an IBM(R) AppConnect resource that has an admission webhook, you can run the checks in this procedure.

Procedure

Check if you have any mutating admission plugins of kind: MutatingWebhookConfiguration in the cluster:

$ oc get mutatingwebhookconfigurations

Examine the YAML file of each kind: MutatingWebhookConfiguration to ensure that none of its rules block creation of the objects that are experiencing issues. For more information, see the official Kubernetes documentation .

Check that any spec.version in type: Configuration.appconnect.ibm.com/v1beta1 used at backup time is supported by the installed Operator.

4.14.7.2. OADP plugins known issues

The following section describes known issues in OpenShift API for Data Protection (OADP) plugins:

4.14.7.2.1. Velero plugin panics during imagestream backups due to a missing secret

When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation, does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret .

When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error:

2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...

4.14.7.2.1.1. Workaround to avoid the panic error

To avoid the Velero plugin panic error, perform the following steps:

Label the custom BSL with the relevant label:

$ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl

After the BSL is labeled, wait until the DPA reconciles.

Note

You can force the reconciliation by making any minor change to the DPA itself.

When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it:

$ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'

4.14.7.2.2. OpenShift ADP Controller segmentation fault

If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault.

You can have either velero or cloudstorage defined, because they are mutually exclusive fields.

If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails.
If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails.

For more information about this issue, see OADP-1054 .

4.14.7.2.2.1.
OpenShift ADP Controller segmentation fault workaround

You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault.

4.14.7.3. Velero plugins returning "received EOF, stopping recv loop" message

Note

Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred.

Additional resources

Admission plugins
Webhook admission plugins
Types of webhook admission plugins

4.14.8. Installation issues

You might encounter issues caused by using invalid directories or incorrect credentials when you install the Data Protection Application.

4.14.8.1. Backup storage contains invalid directories

The Velero pod log displays the error message, Backup storage contains invalid top-level directories .

Cause

The object storage contains top-level directories that are not Velero directories.

Solution

If the object storage is not dedicated to Velero, you must specify a prefix for the bucket by setting the spec.backupLocations.velero.objectStorage.prefix parameter in the DataProtectionApplication manifest.

4.14.8.2. Incorrect AWS credentials

The oadp-aws-registry pod log displays the error message, InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.

The Velero pod log displays the error message, NoCredentialProviders: no valid providers in chain .

Cause

The credentials-velero file used to create the Secret object is incorrectly formatted.

Solution

Ensure that the credentials-velero file is correctly formatted, as in the following example:

Example credentials-velero file

[default] 1
aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2
aws_secret_access_key=wJalrXUtnFIMK7MDENG/bPxRfiCYEXAMPLEKEY

1 AWS default profile.
2 Do not enclose the values with quotation marks ( " , ' ).

4.14.9. OADP Operator issues

The OpenShift API for Data Protection (OADP) Operator might encounter issues caused by problems it is not able to resolve.

4.14.9.1. OADP Operator fails silently

The S3 buckets of an OADP Operator might be empty, but when you run the command oc get po -n <OADP_Operator_namespace> , you see that the Operator has a status of Running . In such a case, the Operator is said to have failed silently because it incorrectly reports that it is running.

Cause

The problem is caused when cloud credentials provide insufficient permissions.

Solution

Retrieve a list of backup storage locations (BSLs) and check the manifest of each BSL for credential issues.

Procedure

Run one of the following commands to retrieve a list of BSLs:

Using the OpenShift CLI:

$ oc get backupstoragelocations.velero.io -A

Using the Velero CLI:

$ velero backup-location get -n <OADP_Operator_namespace>

Using the list of BSLs, run the following command to display the manifest of each BSL, and examine each manifest for an error.
USD oc get backupstoragelocations.velero.io -n <namespace> -o yaml Example result apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: "2023-11-03T19:49:04Z" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: "24273698" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: "true" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: "2023-11-10T22:06:46Z" message: "BackupStorageLocation \"example-dpa-1\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54" phase: Unavailable kind: List metadata: resourceVersion: "" 4.14.10. OADP timeouts Extending a timeout allows complex or resource-intensive processes to complete successfully without premature termination. This configuration can reduce the likelihood of errors, retries, or failures. Ensure that you balance timeout extensions in a logical manner so that you do not configure excessively long timeouts that might hide underlying issues in the process. Carefully consider and monitor an appropriate timeout value that meets the needs of the process and the overall system performance. The following are various OADP timeouts, with instructions of how and when to implement these parameters: 4.14.10.1. Restic timeout The spec.configuration.nodeAgent.timeout parameter defines the Restic timeout. The default value is 1h . Use the Restic timeout parameter in the nodeAgent section for the following scenarios: For Restic backups with total PV data usage that is greater than 500GB. If backups are timing out with the following error: level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" Procedure Edit the values in the spec.configuration.nodeAgent.timeout block of the DataProtectionApplication custom resource (CR) manifest, as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h # ... 4.14.10.2. Velero resource timeout resourceTimeout defines how long to wait for several Velero resources before timeout occurs, such as Velero custom resource definition (CRD) availability, volumeSnapshot deletion, and repository availability. The default is 10m . Use the resourceTimeout for the following scenarios: For backups with total PV data usage that is greater than 1TB. This parameter is used as a timeout value when Velero tries to clean up or delete the Container Storage Interface (CSI) snapshots, before marking the backup as complete. A sub-task of this cleanup tries to patch VSC and this timeout can be used for that task. To create or ensure a backup repository is ready for filesystem based backups for Restic or Kopia. To check if the Velero CRD is available in the cluster before restoring the custom resource (CR) or resource from the backup. 
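As an alternative to editing the manifest as in the procedure below, the value can be set with a single patch; a minimal sketch, assuming the DPA name and a 30-minute timeout chosen purely for illustration:

$ oc -n openshift-adp patch dpa <dpa_name> --type merge \
    -p '{"spec":{"configuration":{"velero":{"resourceTimeout":"30m"}}}}'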
Procedure

Edit the values in the spec.configuration.velero.resourceTimeout block of the DataProtectionApplication CR manifest, as in the following example:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    velero:
      resourceTimeout: 10m
# ...

4.14.10.3. Data Mover timeout

timeout is a user-supplied timeout to complete VolumeSnapshotBackup and VolumeSnapshotRestore . The default value is 10m .

Use the Data Mover timeout for the following scenarios:

If creation of VolumeSnapshotBackups (VSBs) and VolumeSnapshotRestores (VSRs) times out after 10 minutes.
For large scale environments with total PV data usage that is greater than 500GB. Set the timeout for 1h .
With the VolumeSnapshotMover (VSM) plugin.
Only with OADP 1.1.x.

Procedure

Edit the values in the spec.features.dataMover.timeout block of the DataProtectionApplication CR manifest, as in the following example:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  features:
    dataMover:
      timeout: 10m
# ...

4.14.10.4. CSI snapshot timeout

CSISnapshotTimeout specifies the time during creation to wait until the CSI VolumeSnapshot status becomes ReadyToUse , before returning error as timeout. The default value is 10m .

Use the CSISnapshotTimeout for the following scenarios:

With the CSI plugin.
For very large storage volumes that may take longer than 10 minutes to snapshot. Adjust this timeout if timeouts are found in the logs.

Note

Typically, the default value for CSISnapshotTimeout does not require adjustment, because the default setting can accommodate large storage volumes.

Procedure

Edit the values in the spec.csiSnapshotTimeout block of the Backup CR manifest, as in the following example:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup_name>
spec:
  csiSnapshotTimeout: 10m
# ...

4.14.10.5. Velero default item operation timeout

defaultItemOperationTimeout defines how long to wait on asynchronous BackupItemActions and RestoreItemActions to complete before timing out. The default value is 1h .

Use the defaultItemOperationTimeout for the following scenarios:

Only with Data Mover 1.2.x.
To specify the amount of time a particular backup or restore should wait for the Asynchronous actions to complete. In the context of OADP features, this value is used for the Asynchronous actions involved in the Container Storage Interface (CSI) Data Mover feature.

When defaultItemOperationTimeout is defined in the Data Protection Application (DPA), it applies to both backup and restore operations. You can use itemOperationTimeout to define only the backup or only the restore of those CRs, as described in the following "Item operation timeout - restore" and "Item operation timeout - backup" sections.

Procedure

Edit the values in the spec.configuration.velero.defaultItemOperationTimeout block of the DataProtectionApplication CR manifest, as in the following example:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    velero:
      defaultItemOperationTimeout: 1h
# ...

4.14.10.6. Item operation timeout - restore

ItemOperationTimeout specifies the time that is used to wait for RestoreItemAction operations. The default value is 1h .

Use the restore ItemOperationTimeout for the following scenarios:

Only with Data Mover 1.2.x.
For Data Mover uploads and downloads to or from the BackupStorageLocation .
If the restore action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Restore.spec.itemOperationTimeout block of the Restore CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h # ... 4.14.10.7. Item operation timeout - backup ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations. The default value is 1h . Use the backup ItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. For Data Mover uploads and downloads to or from the BackupStorageLocation . If the backup action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Backup.spec.itemOperationTimeout block of the Backup CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h # ... 4.14.11. Backup and Restore CR issues You might encounter these common issues with Backup and Restore custom resources (CRs). 4.14.11.1. Backup CR cannot retrieve volume The Backup CR displays the error message, InvalidVolume.NotFound: The volume 'vol-xxxx' does not exist . Cause The persistent volume (PV) and the snapshot locations are in different regions. Solution Edit the value of the spec.snapshotLocations.velero.config.region key in the DataProtectionApplication manifest so that the snapshot location is in the same region as the PV. Create a new Backup CR. 4.14.11.2. Backup CR status remains in progress The status of a Backup CR remains in the InProgress phase and does not complete. Cause If a backup is interrupted, it cannot be resumed. Solution Retrieve the details of the Backup CR: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ backup describe <backup> Delete the Backup CR: USD oc delete backups.velero.io <backup> -n openshift-adp You do not need to clean up the backup location because a Backup CR in progress has not uploaded files to object storage. Create a new Backup CR. View the Velero backup details USD velero backup describe <backup-name> --details 4.14.11.3. Backup CR status remains in PartiallyFailed The status of a Backup CR without Restic in use remains in the PartiallyFailed phase and does not complete. A snapshot of the affiliated PVC is not created. Cause If the backup is created based on the CSI snapshot class, but the label is missing, CSI snapshot plugin fails to create a snapshot. 
As a result, the Velero pod logs an error similar to the following: time="2023-02-17T16:33:13Z" level=error msg="Error backing up item" backup=openshift-adp/user1-backup-check5 error="error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label" logSource="/remote-source/velero/app/pkg/backup/backup.go:417" name=busybox-79799557b5-vprq Solution Delete the Backup CR: USD oc delete backups.velero.io <backup> -n openshift-adp If required, clean up the stored data on the BackupStorageLocation to free up space. Apply label velero.io/csi-volumesnapshot-class=true to the VolumeSnapshotClass object: USD oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true Create a new Backup CR. 4.14.12. Restic issues You might encounter these issues when you back up applications with Restic. 4.14.12.1. Restic permission error for NFS data volumes with root_squash enabled The Restic pod log displays the error message: controller=pod-volume-backup error="fork/exec/usr/bin/restic: permission denied" . Cause If your NFS data volumes have root_squash enabled, Restic maps to nfsnobody and does not have permission to create backups. Solution You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the DataProtectionApplication manifest: Create a supplemental group for Restic on the NFS data volume. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the spec.configuration.nodeAgent.supplementalGroups parameter and the group ID to the DataProtectionApplication manifest, as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # ... spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1 # ... 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 4.14.12.2. Restic Backup CR cannot be recreated after bucket is emptied If you create a Restic Backup CR for a namespace, empty the object storage bucket, and then recreate the Backup CR for the same namespace, the recreated Backup CR fails. The velero pod log displays the following error message: stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location? . Cause Velero does not recreate or update the Restic repository from the ResticRepository manifest if the Restic directories are deleted from object storage. See Velero issue 4421 for more information. Solution Remove the related Restic repository from the namespace by running the following command: USD oc delete resticrepository openshift-adp <name_of_the_restic_repository> In the following error log, mysql-persistent is the problematic Restic repository. The name of the repository appears in italics for clarity. 
time="2021-12-29T18:29:14Z" level=info msg="1 errors encountered backup up item" backup=velero/backup65 logSource="pkg/backup/backup.go:431" name=mysql-7d99fc949-qbkds time="2021-12-29T18:29:14Z" level=error msg="Error backing up item" backup=velero/backup65 error="pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location?\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \n: exit status 1" error.file="/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184" error.function="github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes" logSource="pkg/backup/backup.go:435" name=mysql-7d99fc949-qbkds 4.14.12.3. Restic restore partially failing on OCP 4.14 due to changed PSA policy OpenShift Container Platform 4.14 enforces a Pod Security Admission (PSA) policy that can hinder the readiness of pods during a Restic restore process. If a SecurityContextConstraints (SCC) resource is not found when a pod is created, and the PSA policy on the pod is not set up to meet the required standards, pod admission is denied. This issue arises due to the resource restore order of Velero. Sample error \"level=error\" in line#2273: time=\"2023-06-12T06:50:04Z\" level=error msg=\"error restoring mysql-869f9f44f6-tp5lv: pods\\\ "mysql-869f9f44f6-tp5lv\\\" is forbidden: violates PodSecurity\\\ "restricted:v1.24\\\": privil eged (container \\\"mysql\\\ " must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), seccompProfile (pod or containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\ "RuntimeDefault\\\" or \\\"Localhost\\\")\" logSource=\"/remote-source/velero/app/pkg/restore/restore.go:1388\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\n velero container contains \"level=error\" in line#2447: time=\"2023-06-12T06:50:05Z\" level=error msg=\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\ "mysql-869f9f44f6-tp5lv\\\" is forbidden: violates PodSecurity \\\"restricted:v1.24\\\": privileged (container \\\ "mysql\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\ "restic-wait\\\",\\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), seccompProfile (pod or containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\ "RuntimeDefault\\\" or \\\"Localhost\\\")\" logSource=\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\n]", Solution In your DPA custom resource (CR), check or set the restore-resource-priorities field on the Velero server to ensure that securitycontextconstraints is listed in order before pods in the list of resources: USD oc get dpa -o yaml Example DPA CR # ... 
configuration: restic: enable: true velero: args: restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' 1 defaultPlugins: - gcp - openshift 1 If you have an existing restore resource priority list, ensure you combine that existing list with the complete list. Ensure that the security standards for the application pods are aligned, as provided in Fixing PodSecurity Admission warnings for deployments , to prevent deployment warnings. If the application is not aligned with security standards, an error can occur regardless of the SCC. Note This solution is temporary, and ongoing discussions are in progress to address it. Additional resources Fixing PodSecurity Admission warnings for deployments 4.14.13. Using the must-gather tool You can collect logs, metrics, and information about OADP custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can run the must-gather tool with the following data collection options: Full must-gather data collection collects Prometheus metrics, pod logs, and Velero CR information for all namespaces where the OADP Operator is installed. Essential must-gather data collection collects pod logs and Velero CR information for a specific duration of time, for example, one hour or 24 hours. Prometheus metrics and duplicate logs are not included. must-gather data collection with timeout. Data collection can take a long time if there are many failed Backup CRs. You can improve performance by setting a timeout value. Prometheus metrics data dump downloads an archive file containing the metrics data collected by Prometheus. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. You must use Red Hat Enterprise Linux (RHEL) 9.0 with OADP 1.3 and OADP 1.4. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: Full must-gather data collection, including Prometheus metrics: For OADP 1.3, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 For OADP 1.4, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 The data is saved as must-gather/must-gather.tar.gz . You can upload this file to a support case on the Red Hat Customer Portal . Essential must-gather data collection, without Prometheus metrics, for a specific time duration: For OADP 1.3, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 \ -- /usr/bin/gather_<time>_essential 1 1 Specify the time in hours. Allowed values are 1h , 6h , 24h , 72h , or all , for example, gather_1h_essential or gather_all_essential . For OADP 1.4, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 \ -- /usr/bin/gather_<time>_essential 1 1 Specify the time in hours. 
Allowed values are 1h , 6h , 24h , 72h , or all , for example, gather_1h_essential or gather_all_essential .
must-gather data collection with timeout:
For OADP 1.3, run the following command:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 \ -- /usr/bin/gather_with_timeout <timeout> 1
1 Specify a timeout value in seconds.
For OADP 1.4, run the following command:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 \ -- /usr/bin/gather_with_timeout <timeout> 1
1 Specify a timeout value in seconds.
Prometheus metrics data dump:
For OADP 1.3, run the following command:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_metrics_dump
For OADP 1.4, run the following command:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_metrics_dump
This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz .
Additional resources
Gathering cluster data

4.14.13.1. Using must-gather with insecure TLS connections
If a custom CA certificate is used, the must-gather pod fails to grab the output for velero logs/describe . To use the must-gather tool with insecure TLS connections, you can pass the gather_without_tls flag to the must-gather command.
Procedure
Pass the gather_without_tls flag, with its value set to true , to the must-gather tool by using the following command:
For OADP 1.3:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls <true/false>
For OADP 1.4:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls <true/false>
By default, the flag value is set to false . Set the value to true to allow insecure TLS connections.

4.14.13.2. Combining options when using the must-gather tool
Currently, it is not possible to combine must-gather scripts, for example specifying a timeout threshold while permitting insecure TLS connections. In some situations, you can work around this limitation by setting up internal variables on the must-gather command line, such as in the following examples:
For OADP 1.3:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>
For OADP 1.4:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>
In these examples, set the skip_tls variable before running the gather_with_timeout script. The result is a combination of gather_with_timeout and gather_without_tls .
The only other variables that you can specify this way are the following:
logs_since , with a default value of 72h
request_timeout , with a default value of 0s
If the DataProtectionApplication custom resource (CR) is configured with s3Url and insecureSkipTLS: true , the must-gather tool does not collect the necessary logs because of a missing CA certificate. To collect those logs, run the must-gather command with the following option:
For OADP 1.3:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls true
For OADP 1.4:
USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls true
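As an illustration of the internal-variable pattern shown above, the following invocation is a hypothetical sketch, not a documented command, that combines a 24-hour log window with a 600-second timeout by setting the logs_since variable before running the timeout script; the specific values are assumptions:

USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 \
  -- logs_since=24h /usr/bin/gather_with_timeout 600

Because the variables are evaluated before the script runs, this follows the same mechanism as the skip_tls examples.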
4.14.14. OADP Monitoring
OpenShift Container Platform provides a monitoring stack that allows users and administrators to effectively monitor and manage their clusters, to analyze the workload performance of user applications and services running on the clusters, and to receive alerts if an event occurs.
Additional resources
About OpenShift Container Platform monitoring

4.14.14.1. OADP monitoring setup
The OADP Operator uses the User Workload Monitoring feature provided by the OpenShift monitoring stack to retrieve metrics from the Velero service endpoint. The monitoring stack allows you to create user-defined alerting rules and to query metrics by using the OpenShift metrics query front end.
With User Workload Monitoring enabled, you can configure and use any Prometheus-compatible third-party UI, such as Grafana, to visualize Velero metrics.
Monitoring metrics requires enabling monitoring for user-defined projects and creating a ServiceMonitor resource to scrape those metrics from the already enabled OADP service endpoint that resides in the openshift-adp namespace.
Prerequisites
You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
You have created a cluster monitoring config map.
Procedure
Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace:
USD oc edit configmap cluster-monitoring-config -n openshift-monitoring
Add or enable the enableUserWorkload option in the data section's config.yaml field:
apiVersion: v1
data:
  config.yaml: |
    enableUserWorkload: true 1
kind: ConfigMap
metadata:
# ...
1 Add this option or set it to true .
Wait a short period of time, and then verify that the User Workload Monitoring setup is complete by checking that the following components are up and running in the openshift-user-workload-monitoring namespace:
USD oc get pods -n openshift-user-workload-monitoring
Example output
NAME READY STATUS RESTARTS AGE
prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s
prometheus-user-workload-0 5/5 Running 0 32s
prometheus-user-workload-1 5/5 Running 0 32s
thanos-ruler-user-workload-0 3/3 Running 0 32s
thanos-ruler-user-workload-1 3/3 Running 0 32s
Verify the existence of the user-workload-monitoring-config ConfigMap in the openshift-user-workload-monitoring namespace. If it exists, skip the remaining steps in this procedure.
USD oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring
Example output
Error from server (NotFound): configmaps "user-workload-monitoring-config" not found
Create a user-workload-monitoring-config ConfigMap object for the User Workload Monitoring, and save it under the 2_configure_user_workload_monitoring.yaml file name:
Example
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
Apply the 2_configure_user_workload_monitoring.yaml file:
USD oc apply -f 2_configure_user_workload_monitoring.yaml
configmap/user-workload-monitoring-config created

4.14.14.2. Creating OADP service monitor
OADP provides an openshift-adp-velero-metrics-svc service, which is created when the DataProtectionApplication (DPA) is configured. The service monitor used by the user workload monitoring must point to the defined service.
Get details about the service by running the following commands:
Procedure
Ensure the openshift-adp-velero-metrics-svc service exists. It should contain the app.kubernetes.io/name=velero label, which is used as the selector for the ServiceMonitor object.
USD oc get svc -n openshift-adp -l app.kubernetes.io/name=velero
Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h
Create a ServiceMonitor YAML file that matches the existing service label, and save the file as 3_create_oadp_service_monitor.yaml . The service monitor is created in the openshift-adp namespace, where the openshift-adp-velero-metrics-svc service resides.
Example ServiceMonitor object
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: oadp-service-monitor
  name: oadp-service-monitor
  namespace: openshift-adp
spec:
  endpoints:
  - interval: 30s
    path: /metrics
    targetPort: 8085
    scheme: http
  selector:
    matchLabels:
      app.kubernetes.io/name: "velero"
Apply the 3_create_oadp_service_monitor.yaml file:
USD oc apply -f 3_create_oadp_service_monitor.yaml
Example output
servicemonitor.monitoring.coreos.com/oadp-service-monitor created
Verification
Confirm that the new service monitor is in an Up state by using the Administrator perspective of the OpenShift Container Platform web console:
Navigate to the Observe Targets page.
Ensure the Filter is unselected, or that the User source is selected, and type openshift-adp in the Text search field.
Verify that the Status of the service monitor is Up .
Figure 4.1. OADP metrics targets

4.14.14.3. Creating an alerting rule
The OpenShift Container Platform monitoring stack allows you to receive alerts configured by using alerting rules. To create an alerting rule for the OADP project, use one of the metrics that are scraped by the user workload monitoring.
Procedure
Create a PrometheusRule YAML file with the sample OADPBackupFailing alert and save it as 4_create_oadp_alert_rule.yaml .
Sample OADPBackupFailing alert
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: sample-oadp-alert
  namespace: openshift-adp
spec:
  groups:
  - name: sample-oadp-backup-alert
    rules:
    - alert: OADPBackupFailing
      annotations:
        description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.'
        summary: OADP has issues creating backups
      expr: |
        increase(velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"}[2h]) > 0
      for: 5m
      labels:
        severity: warning
In this sample, the alert fires under the following condition: the number of new failing backups over the last 2 hours is greater than 0, and that state persists for at least 5 minutes. If the first increase has lasted less than 5 minutes, the alert is in a Pending state, after which it turns into a Firing state.
Apply the 4_create_oadp_alert_rule.yaml file, which creates the PrometheusRule object in the openshift-adp namespace:
USD oc apply -f 4_create_oadp_alert_rule.yaml
Example output
prometheusrule.monitoring.coreos.com/sample-oadp-alert created
Verification
After the alert is triggered, you can view it in the following ways:
In the Developer perspective, select the Observe menu.
In the Administrator perspective, under the Observe Alerting menu, select User in the Filter box. Otherwise, by default only the Platform alerts are displayed.
Figure 4.2. OADP backup failing alert
Additional resources
Managing alerts as an Administrator

4.14.14.4. List of available metrics
The following table lists the metrics provided by OADP, together with their types.
Metric name | Description | Type
kopia_content_cache_hit_bytes | Number of bytes retrieved from the cache | Counter
kopia_content_cache_hit_count | Number of times content was retrieved from the cache | Counter
kopia_content_cache_malformed | Number of times malformed content was read from the cache | Counter
kopia_content_cache_miss_count | Number of times content was not found in the cache and fetched | Counter
kopia_content_cache_missed_bytes | Number of bytes retrieved from the underlying storage | Counter
kopia_content_cache_miss_error_count | Number of times content could not be found in the underlying storage | Counter
kopia_content_cache_store_error_count | Number of times content could not be saved in the cache | Counter
kopia_content_get_bytes | Number of bytes retrieved using GetContent() | Counter
kopia_content_get_count | Number of times GetContent() was called | Counter
kopia_content_get_error_count | Number of times GetContent() was called and the result was an error | Counter
kopia_content_get_not_found_count | Number of times GetContent() was called and the result was not found | Counter
kopia_content_write_bytes | Number of bytes passed to WriteContent() | Counter
kopia_content_write_count | Number of times WriteContent() was called | Counter
velero_backup_attempt_total | Total number of attempted backups | Counter
velero_backup_deletion_attempt_total | Total number of attempted backup deletions | Counter
velero_backup_deletion_failure_total | Total number of failed backup deletions | Counter
velero_backup_deletion_success_total | Total number of successful backup deletions | Counter
velero_backup_duration_seconds | Time taken to complete backup, in seconds | Histogram
velero_backup_failure_total | Total number of failed backups | Counter
velero_backup_items_errors | Total number of errors encountered during backup | Gauge
velero_backup_items_total | Total number of items backed up | Gauge
velero_backup_last_status | Last status of the backup. A value of 1 is success, 0 is failure | Gauge
velero_backup_last_successful_timestamp | Last time a backup ran successfully, Unix timestamp in seconds | Gauge
velero_backup_partial_failure_total | Total number of partially failed backups | Counter
velero_backup_success_total | Total number of successful backups | Counter
velero_backup_tarball_size_bytes | Size, in bytes, of a backup | Gauge
velero_backup_total | Current number of existent backups | Gauge
velero_backup_validation_failure_total | Total number of validation failed backups | Counter
velero_backup_warning_total | Total number of warned backups | Counter
velero_csi_snapshot_attempt_total | Total number of CSI attempted volume snapshots | Counter
velero_csi_snapshot_failure_total | Total number of CSI failed volume snapshots | Counter
velero_csi_snapshot_success_total | Total number of CSI successful volume snapshots | Counter
velero_restore_attempt_total | Total number of attempted restores | Counter
velero_restore_failed_total | Total number of failed restores | Counter
velero_restore_partial_failure_total | Total number of partially failed restores | Counter
velero_restore_success_total | Total number of successful restores | Counter
velero_restore_total | Current number of existent restores | Gauge
velero_restore_validation_failed_total | Total number of failed restores failing validations | Counter
velero_volume_snapshot_attempt_total | Total number of attempted volume snapshots | Counter
velero_volume_snapshot_failure_total | Total number of failed volume snapshots | Counter
velero_volume_snapshot_success_total | Total number of successful volume snapshots | Counter
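As an illustration of how these metrics can be used, the following PromQL queries are sketches you can adapt, not part of the documented procedures; they reuse the job label of the openshift-adp-velero-metrics-svc service and the expression of the sample alert shown earlier:

# Count of new backup failures over the trailing two hours.
increase(velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"}[2h])

# Backups whose most recent run did not succeed (1 is success, 0 is failure).
velero_backup_last_status{job="openshift-adp-velero-metrics-svc"} == 0

You can run these queries in the metrics UI described in the next section, or use them as the expr of additional PrometheusRule objects.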
4.14.14.5. Viewing metrics using the Observe UI
You can view metrics in the OpenShift Container Platform web console from the Administrator or Developer perspective, which must have access to the openshift-adp project.
Procedure
Navigate to the Observe Metrics page:
If you are using the Developer perspective, follow these steps:
Select Custom query , or click the Show PromQL link.
Type the query and click Enter .
If you are using the Administrator perspective, type the expression in the text field and select Run Queries .
Figure 4.3. OADP metrics query

4.15. APIs used with OADP
This document provides information about the following APIs that you can use with OADP:
Velero API
OADP API

4.15.1. Velero API
Velero API documentation is maintained by Velero, not by Red Hat. It can be found at Velero API types .

4.15.2. OADP API
The following tables provide the structure of the OADP API:
Table 4.8. DataProtectionApplicationSpec
Property | Type | Description
backupLocations | [] BackupLocation | Defines the list of configurations to use for BackupStorageLocations .
snapshotLocations | [] SnapshotLocation | Defines the list of configurations to use for VolumeSnapshotLocations .
unsupportedOverrides | map [ UnsupportedImageKey ] string | Can be used to override the deployed dependent images for development. Options are veleroImageFqin , awsPluginImageFqin , openshiftPluginImageFqin , azurePluginImageFqin , gcpPluginImageFqin , csiPluginImageFqin , dataMoverImageFqin , resticRestoreImageFqin , kubevirtPluginImageFqin , and operator-type .
podAnnotations | map [ string ] string | Used to add annotations to pods deployed by Operators.
podDnsPolicy | DNSPolicy | Defines the configuration of the DNS of a pod.
podDnsConfig | PodDNSConfig | Defines the DNS parameters of a pod in addition to those generated from DNSPolicy .
backupImages | * bool | Used to specify whether or not you want to deploy a registry for enabling backup and restore of images.
configuration | * ApplicationConfig | Used to define the data protection application's server configuration.
features | * Features | Defines the configuration for the DPA to enable the Technology Preview features.
Complete schema definitions for the OADP API .
Table 4.9. BackupLocation
Property | Type | Description
velero | * velero.BackupStorageLocationSpec | Location to store volume snapshots, as described in Backup Storage Location .
bucket | * CloudStorageLocation | [Technology Preview] Automates creation of a bucket at some cloud storage providers for use as a backup storage location.
Important
The bucket parameter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
Complete schema definitions for the type BackupLocation .
Table 4.10. SnapshotLocation
Property | Type | Description
velero | * VolumeSnapshotLocationSpec | Location to store volume snapshots, as described in Volume Snapshot Location .
Complete schema definitions for the type SnapshotLocation .
Table 4.11. ApplicationConfig
Property | Type | Description
velero | * VeleroConfig | Defines the configuration for the Velero server.
restic | * ResticConfig | Defines the configuration for the Restic server.
Complete schema definitions for the type ApplicationConfig .
Table 4.12. VeleroConfig
Property | Type | Description
featureFlags | [] string | Defines the list of features to enable for the Velero instance.
defaultPlugins | [] string | The following types of default Velero plugins can be installed: aws , azure , csi , gcp , kubevirt , and openshift .
customPlugins | [] CustomPlugin | Used for installation of custom Velero plugins. Default and custom plugins are described in OADP plugins .
restoreResourcesVersionPriority | string | Represents a config map that is created if defined for use in conjunction with the EnableAPIGroupVersions feature flag. Defining this field automatically adds EnableAPIGroupVersions to the Velero server feature flag.
noDefaultBackupLocation | bool | To install Velero without a default backup storage location, you must set the noDefaultBackupLocation flag in order to confirm installation.
podConfig | * PodConfig | Defines the configuration of the Velero pod.
logLevel | string | Velero server's log level (use debug for the most granular logging, leave unset for the Velero default). Valid options are trace , debug , info , warning , error , fatal , and panic .
Complete schema definitions for the type VeleroConfig .
Table 4.13. CustomPlugin
Property | Type | Description
name | string | Name of the custom plugin.
image | string | Image of the custom plugin.
Complete schema definitions for the type CustomPlugin .
Table 4.14. ResticConfig
Property | Type | Description
enable | * bool | If set to true , enables backup and restore using Restic. If set to false , snapshots are needed.
supplementalGroups | [] int64 | Defines the Linux groups to be applied to the Restic pod.
timeout | string | A user-supplied duration string that defines the Restic timeout. Default value is 1hr (1 hour). A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h , or 2h45m . Valid time units are ns , us (or µs ), ms , s , m , and h .
podConfig | * PodConfig | Defines the configuration of the Restic pod.
Complete schema definitions for the type ResticConfig .
Table 4.15. PodConfig
Property | Type | Description
nodeSelector | map [ string ] string | Defines the nodeSelector to be supplied to a Velero podSpec or a Restic podSpec . For more details, see Configuring node agents and node labels .
tolerations | [] Toleration | Defines the list of tolerations to be applied to a Velero deployment or a Restic daemonset .
resourceAllocations | ResourceRequirements | Set specific resource limits and requests for a Velero pod or a Restic pod, as described in Setting Velero CPU and memory resource allocations .
labels | map [ string ] string | Labels to add to pods.

4.15.2.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node.
The correct way to run the node agent on any node you choose is to label the nodes with a custom label:
USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in DPA.spec.configuration.nodeAgent.podConfig.nodeSelector that you used for labeling the nodes.
For example:
configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""
The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node:
configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
        node-role.kubernetes.io/worker: ""
Complete schema definitions for the type PodConfig .
Table 4.16. Features
Property | Type | Description
dataMover | * DataMover | Defines the configuration of the Data Mover.
Complete schema definitions for the type Features .
Table 4.17. DataMover
Property | Type | Description
enable | bool | If set to true , deploys the volume snapshot mover controller and a modified CSI Data Mover plugin. If set to false , these are not deployed.
credentialName | string | User-supplied Restic Secret name for Data Mover.
timeout | string | A user-supplied duration string for VolumeSnapshotBackup and VolumeSnapshotRestore to complete. Default is 10m (10 minutes). A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h , or 2h45m . Valid time units are ns , us (or µs ), ms , s , m , and h .
The OADP API is more fully detailed in OADP Operator .

4.16. Advanced OADP features and functionalities
This document provides information about advanced features and functionalities of OpenShift API for Data Protection (OADP).

4.16.1. Working with different Kubernetes API versions on the same cluster
4.16.1.1. Listing the Kubernetes API group versions on a cluster
A source cluster might offer multiple versions of an API, where one of these versions is the preferred API version. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups.
If you use Velero to back up and restore such a source cluster, Velero backs up only the version of that resource that uses the preferred version of its Kubernetes API.
To return to the above example, if example.com/v1 is the preferred API, then Velero only backs up the version of a resource that uses example.com/v1 . Moreover, the target cluster needs to have example.com/v1 registered in its set of available API resources in order for Velero to restore the resource on the target cluster.
Therefore, you need to generate a list of the Kubernetes API group versions on your target cluster to be sure the preferred API version is registered in its set of available API resources.
Procedure
Enter the following command:
USD oc api-resources

4.16.1.2. About Enable API Group Versions
By default, Velero only backs up resources that use the preferred version of the Kubernetes API. However, Velero also includes a feature, Enable API Group Versions , that overcomes this limitation. When enabled on the source cluster, this feature causes Velero to back up all Kubernetes API group versions that are supported on the cluster, not only the preferred one. After the versions are stored in the backup .tar file, they are available to be restored on the destination cluster.
For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups, with example.com/v1 being the preferred API.
Without the Enable API Group Versions feature enabled, Velero backs up only the preferred API group version for Example , which is example.com/v1 .
With the feature enabled, Velero also backs up example.com/v1beta2 .
When the Enable API Group Versions feature is enabled on the destination cluster, Velero selects the version to restore based on the order of priority of API group versions.
Note
Enable API Group Versions is still in beta.
Velero uses the following algorithm to assign priorities to API versions, with 1 as the top priority:
1. Preferred version of the destination cluster
2. Preferred version of the source cluster
3. Common non-preferred supported version with the highest Kubernetes version priority
Additional resources
Enable API Group Versions Feature

4.16.1.3. Using Enable API Group Versions
You can use Velero's Enable API Group Versions feature to back up all Kubernetes API group versions that are supported on a cluster, not only the preferred one.
Note
Enable API Group Versions is still in beta.
Procedure
Configure the EnableAPIGroupVersions feature flag:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
...
spec:
  configuration:
    velero:
      featureFlags:
      - EnableAPIGroupVersions
Additional resources
Enable API Group Versions Feature

4.16.2. Backing up data from one cluster and restoring it to another cluster
4.16.2.1. About backing up data from one cluster and restoring it on another cluster
OpenShift API for Data Protection (OADP) is designed to back up and restore application data in the same OpenShift Container Platform cluster. Migration Toolkit for Containers (MTC) is designed to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster.
You can use OADP to back up application data from one OpenShift Container Platform cluster and restore it on another cluster. However, doing so is more complicated than using MTC or using OADP to back up and restore on the same cluster.
To successfully use OADP to back up data from one cluster and restore it to another cluster, you must take into account the following factors, in addition to the prerequisites and procedures that apply to using OADP to back up and restore data on the same cluster:
Operators
Use of Velero
UID and GID ranges

4.16.2.1.1. Operators
You must exclude Operators from the backup of an application for backup and restore to succeed. A sketch of such an exclusion follows at the end of this section.

4.16.2.1.2. Use of Velero
Velero, which OADP is built upon, does not natively support migrating persistent volume snapshots across cloud providers. To migrate volume snapshot data between cloud platforms, you must either enable the Velero Restic file system backup option, which backs up volume contents at the file system level, or use the OADP Data Mover for CSI snapshots.
Note
In OADP 1.1 and earlier, the Velero Restic file system backup option is called restic . In OADP 1.2 and later, the Velero Restic file system backup option is called file-system-backup .
You must also use Velero's File System Backup to migrate data between AWS regions or between Microsoft Azure regions.
Velero does not support restoring data to a cluster with an earlier Kubernetes version than the source cluster. It is theoretically possible to migrate workloads to a destination with a later Kubernetes version than the source, but you must consider the compatibility of API groups between clusters for each custom resource. If a Kubernetes version upgrade breaks the compatibility of core or native API groups, you must first update the impacted custom resources.
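The following Backup CR is a minimal sketch of the Operator exclusion noted in the Operators section above. Which resource types to exclude depends on how the application's Operators are installed; the Operator Lifecycle Manager resource types and the <application_namespace> placeholder shown here are illustrative assumptions, not a documented manifest:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: app-backup-without-operators
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <application_namespace>
  # Exclude common Operator Lifecycle Manager resources so that
  # Operators are not captured in the application backup.
  excludedResources:
  - subscriptions.operators.coreos.com
  - clusterserviceversions.operators.coreos.com
  - installplans.operators.coreos.com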
4.16.2.2. About determining which pod volumes to back up
Before you start a backup operation by using File System Backup (FSB), you must specify which pods contain a volume that you want to back up. Velero refers to this process as "discovering" the appropriate pod volumes.
Velero supports two approaches for determining pod volumes. Use the opt-in or the opt-out approach to allow Velero to decide between an FSB, a volume snapshot, or a Data Mover backup.
Opt-in approach : With the opt-in approach, volumes are backed up using snapshot or Data Mover by default. FSB is used on specific volumes that are opted-in by annotations.
Opt-out approach : With the opt-out approach, volumes are backed up using FSB by default. Snapshots or Data Mover are used on specific volumes that are opted-out by annotations.

4.16.2.2.1. Limitations
FSB does not support backing up and restoring hostpath volumes. However, FSB does support backing up and restoring local volumes.
Velero uses a static, common encryption key for all backup repositories it creates. This static key means that anyone who can access your backup storage can also decrypt your backup data . It is essential that you limit access to backup storage.
For PVCs, every incremental backup chain is maintained across pod reschedules. For pod volumes that are not PVCs, such as emptyDir volumes, if a pod is deleted or recreated, for example, by a ReplicaSet or a deployment, the backup of those volumes will be a full backup and not an incremental backup. It is assumed that the lifecycle of a pod volume is defined by its pod.
Even though backup data can be kept incrementally, backing up large files, such as a database, can take a long time. This is because FSB uses deduplication to find the difference that needs to be backed up.
FSB reads and writes data from volumes by accessing the file system of the node on which the pod is running. For this reason, FSB can only back up volumes that are mounted from a pod and not directly from a PVC. Some Velero users have overcome this limitation by running a staging pod, such as a BusyBox or Alpine container with an infinite sleep, to mount these PVC and PV pairs before performing a Velero backup.
FSB expects volumes to be mounted under <hostPath>/<pod UID> , with <hostPath> being configurable. Some Kubernetes systems, for example, vCluster, do not mount volumes under the <pod UID> subdirectory, and FSB does not work with them as expected.

4.16.2.2.2. Backing up pod volumes by using the opt-in method
You can use the opt-in method to specify which volumes need to be backed up by File System Backup (FSB). You can do this by using the backup.velero.io/backup-volumes annotation.
Procedure
On each pod that contains one or more volumes that you want to back up, enter the following command:
USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n>
where:
<your_volume_name_x> specifies the name of the xth volume in the pod specification.
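As an optional check that is not part of the documented procedure, you can read the annotation back with a jsonpath expression, escaping the dots in the annotation key:

USD oc -n <your_pod_namespace> get pod <your_pod_name> \
  -o jsonpath='{.metadata.annotations.backup\.velero\.io/backup-volumes}'

The command prints the comma-separated list of volume names that Velero will back up with FSB for that pod.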
4.16.2.2.3. Backing up pod volumes by using the opt-out method
When using the opt-out approach, all pod volumes are backed up by using File System Backup (FSB), although there are some exceptions:
Volumes that mount the default service account token, secrets, and configuration maps.
hostPath volumes
You can use the opt-out method to specify which volumes not to back up. You can do this by using the backup.velero.io/backup-volumes-excludes annotation.
Procedure
On each pod that contains one or more volumes that you do not want to back up, run the following command:
USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n>
where:
<your_volume_name_x> specifies the name of the xth volume in the pod specification.
Note
You can enable this behavior for all Velero backups by running the velero install command with the --default-volumes-to-fs-backup flag.

4.16.2.3. UID and GID ranges
If you back up data from one cluster and restore it to another cluster, problems might occur with UID (User ID) and GID (Group ID) ranges. The following section explains these potential issues and mitigations:
Summary of the issues
The namespace UID and GID ranges might change depending on the destination cluster. OADP does not back up and restore OpenShift UID range metadata. If the backed up application requires a specific UID, ensure the range is available upon restore. For more information about OpenShift's UID and GID ranges, see A Guide to OpenShift and UIDs .
Detailed description of the issues
When you create a namespace in OpenShift Container Platform by using the shell command oc create namespace , OpenShift Container Platform assigns the namespace a unique User ID (UID) range from its available pool of UIDs, a Supplemental Group (GID) range, and unique SELinux MCS labels. This information is stored in the metadata.annotations field of the namespace. This information is part of the Security Context Constraints (SCC) annotations, which consist of the following components:
openshift.io/sa.scc.mcs
openshift.io/sa.scc.supplemental-groups
openshift.io/sa.scc.uid-range
When you use OADP to restore the namespace, it automatically uses the information in metadata.annotations without resetting it for the destination cluster. As a result, the workload might not have access to the backed up data if any of the following is true:
There is an existing namespace with other SCC annotations, for example, on another cluster. In this case, OADP uses the existing namespace during the backup instead of the namespace you want to restore.
A label selector was used during the backup, but the namespace in which the workloads are executed does not have the label. In this case, OADP does not back up the namespace, but creates a new namespace during the restore that does not contain the annotations of the backed up namespace. This results in a new UID range being assigned to the namespace.
This can be an issue for customer workloads if OpenShift Container Platform assigns a securityContext UID to a pod based on namespace annotations that have changed since the persistent volume data was backed up.
The UID of the container no longer matches the UID of the file owner.
An error occurs because OpenShift Container Platform has not changed the UID range of the destination cluster to match the backup cluster data. As a result, the backup cluster has a different UID than the destination cluster, which means that the application cannot read or write data on the destination cluster.
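As a diagnostic sketch that is not part of the documented mitigations, you can compare the SCC annotations of the source and destination namespaces by reading them directly, again escaping the dots in the annotation keys:

USD oc get namespace <namespace> \
  -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}{"\n"}'
USD oc get namespace <namespace> \
  -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}{"\n"}'

If the printed ranges differ between the clusters, the restored workload can hit the file-ownership mismatch described above.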
Mitigations
You can use one or more of the following mitigations to resolve the UID and GID range issues:
Simple mitigations:
If you use a label selector in the Backup CR to filter the objects to include in the backup, be sure to add this label selector to the namespace that contains the workload.
Remove any pre-existing version of a namespace on the destination cluster before attempting to restore a namespace with the same name.
Advanced mitigations:
Fix UID ranges after migration by Resolving overlapping UID ranges in OpenShift namespaces after migration . Step 1 is optional.
For an in-depth discussion of UID and GID ranges in OpenShift Container Platform with an emphasis on overcoming issues in backing up data on one cluster and restoring it on another, see A Guide to OpenShift and UIDs .

4.16.2.4. Backing up data from one cluster and restoring it to another cluster
In general, you back up data from one OpenShift Container Platform cluster and restore it on another OpenShift Container Platform cluster in the same way that you back up and restore data to the same cluster. However, there are some additional prerequisites and differences in the procedure when backing up data from one OpenShift Container Platform cluster and restoring it on another.
Prerequisites
All relevant prerequisites for backing up and restoring on your platform (for example, AWS, Microsoft Azure, GCP, and so on), especially the prerequisites for the Data Protection Application (DPA), are described in the relevant sections of this guide.
Procedure
Make the following additions to the procedures given for your platform:
Ensure that the backup storage location (BSL) and volume snapshot location have the same names and paths to restore resources to another cluster.
Share the same object storage location credentials across the clusters.
For best results, use OADP to create the namespace on the destination cluster.
If you use the Velero file-system-backup option, enable the --default-volumes-to-fs-backup flag for use during backup by running the following command:
USD velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>
Note
In OADP 1.2 and later, the Velero Restic option is called file-system-backup .
Important
Before restoring a CSI backup, edit the VolumeSnapshotClass custom resource (CR), and set the snapshot.storage.kubernetes.io/is-default-class parameter to false. Otherwise, the restore will partially fail due to the same value in the VolumeSnapshotClass in the target cluster for the same driver.
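One way to make that change, a sketch using oc patch rather than an interactive edit, is to set the annotation directly on the target cluster's VolumeSnapshotClass:

USD oc patch volumesnapshotclass <snapclass_name> --type merge \
  -p '{"metadata":{"annotations":{"snapshot.storage.kubernetes.io/is-default-class":"false"}}}'

Replace <snapclass_name> with the VolumeSnapshotClass on the target cluster that shares a driver with the class being restored.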
4.16.3. OADP storage class mapping
4.16.3.1. Storage class mapping
Storage class mapping allows you to define rules or policies specifying which storage class should be applied to different types of data. This feature automates the process of determining storage classes based on access frequency, data importance, and cost considerations. It optimizes storage efficiency and cost-effectiveness by ensuring that data is stored in the most suitable storage class for its characteristics and usage patterns.
You can use the change-storage-class-config field to change the storage class of your data objects, which lets you optimize costs and performance by moving data between different storage tiers, such as from standard to archival storage, based on your needs and access patterns.

4.16.3.1.1. Storage class mapping with Migration Toolkit for Containers
You can use the Migration Toolkit for Containers (MTC) to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster, and for storage class mapping and conversion. You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the MTC web console.

4.16.3.1.2. Mapping storage classes with OADP
You can use OpenShift API for Data Protection (OADP) with the Velero plugin v1.1.0 and later to change the storage class of a persistent volume (PV) during restores, by configuring a storage class mapping in a config map in the Velero namespace.
To deploy the ConfigMap with OADP, use the change-storage-class-config field. You must change the storage class mapping based on your cloud provider.
Procedure
Define the storage class mapping in a file named change-storageclass.yaml , and review it by running the following command:
USD cat change-storageclass.yaml
Create a config map in the Velero namespace as shown in the following example:
Example
apiVersion: v1
kind: ConfigMap
metadata:
  name: change-storage-class-config
  namespace: openshift-adp
  labels:
    velero.io/plugin-config: ""
    velero.io/change-storage-class: RestoreItemAction
data:
  standard-csi: ssd-csi
In this example, PVs that used the standard-csi storage class at backup time are restored with the ssd-csi storage class.
Save your storage class mapping preferences by running the following command:
USD oc create -f change-storageclass.yaml

4.16.4. Additional resources
Working with different Kubernetes API versions on the same cluster .
Using Data Mover for CSI snapshots .
Backing up applications with File System Backup: Kopia or Restic .
Migration converting storage classes .
"Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key.",
"found a podvolumebackup with status \"InProgress\" during the server starting, mark it as \"Failed\".",
"data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found",
"The generated label name is too long.",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"spec: configuration: nodeAgent: enable: true uploaderType: restic",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"spec: configuration: features: dataMover: enable: true credentialName: dm-credentials velero: defaultPlugins: - vsm - csi - openshift",
"spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - csi - openshift",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: example-backup namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - mysql-persistent storageLocation: dpa-sample-1 ttl: 720h0m0s",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name> 1",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"oc get route s3 -n openshift-storage",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3",
"oc apply -f <restore_cr_filename>",
"oc describe restores.velero.io <restore_name> -n openshift-adp",
"oc project test-restore-application",
"oc get pvc,svc,deployment,secret,configmap",
"NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name>",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\\.crt}' | base64 -w0; echo",
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0 ....gpwOHMwaG9CRmk5a3....FLS0tLS0K",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"false\" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - legacy-aws 1 - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backups.velero.io test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"resources: mds: limits: cpu: \"3\" memory: 128Gi requests: cpu: \"3\" memory: 8Gi",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: \"true\" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: \"50..c-4da1-419f-a16e-ei...49f\" 12 customerKeyEncryptionFile: \"/credentials/customer-key\" 13 signatureVersion: \"1\" 14 profile: \"default\" 15 insecureSkipTLSVerify: \"true\" 16 enableSharedConfig: \"true\" 17 tagging: \"\" 18 checksumAlgorithm: \"CRC32\" 19",
"snapshotLocations: - velero: config: profile: default region: <region> provider: aws",
"dd if=/dev/urandom bs=1 count=32 > sse.key",
"cat sse.key | base64 > sse_encoded.key",
"ln -s sse_encoded.key customer-key",
"oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key",
"apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret",
"spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default",
"echo \"encrypt me please\" > test.txt",
"aws s3api put-object --bucket <bucket> --key test.txt --body test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256",
"s3cmd get s3://<bucket>/test.txt test.txt",
"aws s3api get-object --bucket <bucket> --key test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 downloaded.txt",
"cat downloaded.txt",
"encrypt me please",
"aws s3api get-object --bucket <bucket> --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 --debug velero_download.tar.gz",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: \"default\" s3ForcePathStyle: \"true\" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: \"default\" credential: key: cloud name: cloud-credentials 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: \"\" 1 insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"ibmcloud plugin install cos -f",
"BUCKET=<bucket_name>",
"REGION=<bucket_region> 1",
"ibmcloud resource group-create <resource_group_name>",
"ibmcloud target -g <resource_group_name>",
"ibmcloud target",
"API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default",
"RESOURCE_GROUP=<resource_group> 1",
"ibmcloud resource service-instance-create <service_instance_name> \\ 1 <service_name> \\ 2 <service_plan> \\ 3 <region_name> 4",
"ibmcloud resource service-instance-create test-service-instance cloud-object-storage \\ 1 standard global -d premium-global-deployment 2",
"SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')",
"ibmcloud cos bucket-create \\// --bucket USDBUCKET \\// --ibm-service-instance-id USDSERVICE_INSTANCE_ID \\// --region USDREGION",
"ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\\\"HMAC\\\":true}",
"cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" provider: azure",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"mkdir -p oadp-credrequest",
"echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml",
"ccoctl gcp create-service-accounts --name=<name> --project=<gcp_project_id> --credentials-requests-dir=oadp-credrequest --workload-identity-pool=<pool_id> --workload-identity-provider=<provider_id>",
"oc create namespace <OPERATOR_INSTALL_NS>",
"oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: profile: \"default\" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: \"default\" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3",
"oc apply -f <backup_cr_file_name> 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true",
"oc apply -f <restore_cr_file_name> 1",
"oc label vm <vm_name> app=<vm_name> -n openshift-adp",
"apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2",
"oc apply -f <restore_cr_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1",
"oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: \"true\" profile: noobaa region: <region_name> 3 s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"oc get bsl",
"NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s 5 labelSelector: 6 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 7 - matchLabels: app: <label_1> app: <label_2> app: <label_3>",
"oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11",
"oc get backupStorageLocations -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s 5 EOF",
"schedule: \"*/10 * * * *\"",
"oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'",
"apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1",
"oc apply -f <deletebackuprequest_cr_filename>",
"velero backup delete <backup_name> -n openshift-adp 1",
"pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m",
"not due for full maintenance cycle until 2024-00-00 18:29:4",
"oc get backuprepositories.velero.io -n openshift-adp",
"oc delete backuprepository <backup_repository_name> -n openshift-adp 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3",
"oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'",
"oc get all -n <namespace> 1",
"bash dc-restic-post-restore.sh -> dc-post-restore.sh",
"#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo \"restore: USD1\" label=USD(label_name USD1) echo \"label: USDlabel\" echo Deleting disconnected restore pods delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\" export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH} echo \"Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}\" --output text) 1",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUploads\", \"s3:ListMultipartUploadParts\", \"s3:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name \"RosaOadpVer1\" --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp --output text) fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\":2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc get sub -o yaml redhat-oadp-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: \"2025-01-15T07:18:31Z\" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: \"\" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: \"77363\" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2",
"oc patch subscription redhat-oadp-operator -p '{\"spec\": {\"config\": {\"env\": [{\"name\": \"ROLEARN\", \"value\": \"<role_arn>\"}]}}}' --type='merge'",
"oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift",
"oc create -f <dpa_manifest_file>",
"oc get dpa -n openshift-adp -o yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication status: conditions: - lastTransitionTime: \"2023-07-31T04:48:12Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"export CLUSTER_NAME= <AWS_cluster_name> 1",
"export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{\"\\n\"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\"",
"export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH}",
"echo \"Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"export POLICY_NAME=\"OadpVer1\" 1",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}\" --output text)",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --output text) 1 fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa_sample namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift - aws - csi resourceTimeout: 10m nodeAgent: enable: true uploaderType: kopia backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 1 prefix: <prefix> 2 config: region: <region> 3 profile: \"default\" s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials",
"oc create -f dpa.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-install-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale 1 includedResources: - operatorgroups - subscriptions - namespaces itemOperationTimeout: 1h0m0s snapshotMoveData: false ttl: 720h0m0s",
"oc create -f backup.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-secrets namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - secrets itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc create -f backup-secret.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-apim namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - apimanagers itemOperationTimeout: 1h0m0s snapshotMoveData: false snapshotVolumes: false storageLocation: ts-dpa-1 ttl: 720h0m0s volumeSnapshotLocations: - ts-dpa-1",
"oc create -f backup-apimanager.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-claim namespace: threescale spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3-csi volumeMode: Filesystem",
"oc create -f ts_pvc.yml",
"oc edit deployment system-mysql -n threescale",
"volumeMounts: - name: example-claim mountPath: /var/lib/mysqldump/data - name: mysql-storage mountPath: /var/lib/mysql/data - name: mysql-extra-conf mountPath: /etc/my-extra.d - name: mysql-main-conf mountPath: /etc/my-extra serviceAccount: amp volumes: - name: example-claim persistentVolumeClaim: claimName: example-claim 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: mysql-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true hooks: resources: - name: dumpdb pre: - exec: command: - /bin/sh - -c - mysqldump -u USDMYSQL_USER --password=USDMYSQL_PASSWORD system --no-tablespaces > /var/lib/mysqldump/data/dump.sql 1 container: system-mysql onError: Fail timeout: 5m includedNamespaces: 2 - threescale includedResources: - deployment - pods - replicationControllers - persistentvolumeclaims - persistentvolumes itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component_element: mysql snapshotMoveData: false ttl: 720h0m0s",
"oc create -f mysql.yaml",
"oc get backups.velero.io mysql-backup",
"NAME STATUS CREATED NAMESPACE POD VOLUME UPLOADER TYPE STORAGE LOCATION AGE mysql-backup-4g7qn Completed 30s threescale system-mysql-2-9pr44 example-claim kopia ts-dpa-1 30s mysql-backup-smh85 Completed 23s threescale system-mysql-2-9pr44 mysql-storage kopia ts-dpa-1 30s",
"oc edit deployment backend-redis -n threescale",
"annotations: post.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 100\"] pre.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 0\"]",
"apiVersion: velero.io/v1 kind: Backup metadata: name: redis-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true includedNamespaces: - threescale includedResources: - deployment - pods - replicationcontrollers - persistentvolumes - persistentvolumeclaims itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component: backend threescale_component_element: redis snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc get backups.velero.io redis-backup -o yaml",
"oc get backups.velero.io",
"oc delete project threescale",
"\"threescale\" project deleted successfully",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-installation-restore namespace: openshift-adp spec: backupName: operator-install-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore.yaml",
"oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: s3-credentials namespace: threescale stringData: AWS_ACCESS_KEY_ID: <ID_123456> 1 AWS_SECRET_ACCESS_KEY: <ID_98765544> 2 AWS_BUCKET: <mybucket.example.com> 3 AWS_REGION: <us-east-1> 4 type: Opaque EOF",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-secrets namespace: openshift-adp spec: backupName: operator-resources-secrets excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-secrets.yaml",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-apim namespace: openshift-adp spec: backupName: operator-resources-apim excludedResources: 1 - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-apimanager.yaml",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"deployment.apps/threescale-operator-controller-manager-v2 scaled",
"vi ./scaledowndeployment.sh",
"for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do oc scale deployment/USDdeployment --replicas=0 -n threescale done",
"./scaledowndeployment.sh",
"deployment.apps.openshift.io/apicast-production scaled deployment.apps.openshift.io/apicast-staging scaled deployment.apps.openshift.io/backend-cron scaled deployment.apps.openshift.io/backend-listener scaled deployment.apps.openshift.io/backend-redis scaled deployment.apps.openshift.io/backend-worker scaled deployment.apps.openshift.io/system-app scaled deployment.apps.openshift.io/system-memcache scaled deployment.apps.openshift.io/system-mysql scaled deployment.apps.openshift.io/system-redis scaled deployment.apps.openshift.io/system-searchd scaled deployment.apps.openshift.io/system-sidekiq scaled deployment.apps.openshift.io/zync scaled deployment.apps.openshift.io/zync-database scaled deployment.apps.openshift.io/zync-que scaled",
"oc delete deployment system-mysql -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"system-mysql\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-mysql namespace: openshift-adp spec: backupName: mysql-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io - resticrepositories.velero.io hooks: resources: - name: restoreDB postHooks: - exec: command: - /bin/sh - '-c' - > sleep 30 mysql -h 127.0.0.1 -D system -u root --password=USDMYSQL_ROOT_PASSWORD < /var/lib/mysqldump/data/dump.sql 1 container: system-mysql execTimeout: 80s onError: Fail waitTimeout: 5m itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-mysql.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m",
"oc get pvc -n threescale",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m",
"oc delete deployment backend-redis -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"backend-redis\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-backend namespace: openshift-adp spec: backupName: redis-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-backend.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc get deployment -n threescale",
"./scaledeployment.sh",
"oc get routes -n threescale",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD backend backend-3scale.apps.custom-cluster-name.openshift.com backend-listener http edge/Allow None zync-3scale-api-b4l4d api-3scale-apicast-production.apps.custom-cluster-name.openshift.com apicast-production gateway edge/Redirect None zync-3scale-api-b6sns api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com apicast-staging gateway edge/Redirect None zync-3scale-master-7sc4j master.apps.custom-cluster-name.openshift.com system-master http edge/Redirect None zync-3scale-provider-7r2nm 3scale-admin.apps.custom-cluster-name.openshift.com system-provider http edge/Redirect None zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com system-developer http edge/Redirect None",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI",
"kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s 3 volumeSnapshotLocations: - dpa-sample-1",
"Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: no space left on device",
"oc create -f backup.yaml",
"oc get datauploads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal",
"oc get datauploads <dataupload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: \"\" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: \"2023-11-02T16:57:02Z\" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: \"2023-11-02T16:56:22Z\"",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup>",
"oc create -f restore.yaml",
"oc get datadownloads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal",
"oc get datadownloads <datadownload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: \"\" pvc: mysql status: completionTimestamp: \"2023-11-02T17:01:24Z\" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: \"2023-11-02T17:00:52Z\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<aws_s3_access_key>\" \\ 4 --secret-access-key=\"<aws_s3_secret_access_key>\" \\ 5",
"kopia repository status",
"Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> Storage type: s3 Storage capacity: unbounded Storage config: { \"bucket\": <bucket_name>, \"prefix\": \"velero/kopia/<application_namespace>/\", \"endpoint\": \"s3.amazonaws.com\", \"accessKeyID\": <access_key>, \"secretAccessKey\": \"****************************************\", \"sessionToken\": \"\" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3",
"apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: [\"sleep\"] args: [\"infinity\"]",
"oc apply -f <pod_config_file_name> 1",
"oc describe pod/oadp-mustgather-pod | grep scc",
"openshift.io/scc: anyuid",
"oc -n openshift-adp rsh pod/oadp-mustgather-pod",
"sh-5.1# kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<access_key>\" \\ 4 --secret-access-key=\"<secret_access_key>\" \\ 5 --endpoint=<bucket_endpoint> \\ 6",
"sh-5.1# kopia benchmark hashing",
"Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256",
"sh-5.1# kopia benchmark encryption",
"Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256",
"sh-5.1# kopia benchmark splitter",
"splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"oc describe <velero_cr> <cr_name>",
"oc logs pod/<velero>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi",
"requests: cpu: 500m memory: 128Mi",
"Velero: pod volume restore failed: data path restore failed: Failed to run kopia restore: Failed to copy snapshot data to the target: restore error: copy file: error creating file: open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: no such file or directory",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: \"USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}\" 1 onDelete: delete",
"velero restore <restore_name> --from-backup=<backup_name> --include-resources service.serving.knavtive.dev",
"oc get mutatingwebhookconfigurations",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"oc get backupstoragelocations.velero.io -A",
"velero backup-location get -n <OADP_Operator_namespace>",
"oc get backupstoragelocations.velero.io -n <namespace> -o yaml",
"apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: \"2023-11-03T19:49:04Z\" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: \"24273698\" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: \"true\" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: \"2023-11-10T22:06:46Z\" message: \"BackupStorageLocation \\\"example-dpa-1\\\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54\" phase: Unavailable kind: List metadata: resourceVersion: \"\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>",
"oc delete backups.velero.io <backup> -n openshift-adp",
"velero backup describe <backup-name> --details",
"time=\"2023-02-17T16:33:13Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/user1-backup-check5 error=\"error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label\" logSource=\"/remote-source/velero/app/pkg/backup/backup.go:417\" name=busybox-79799557b5-vprq",
"oc delete backups.velero.io <backup> -n openshift-adp",
"oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1",
"oc delete resticrepository openshift-adp <name_of_the_restic_repository>",
"time=\"2021-12-29T18:29:14Z\" level=info msg=\"1 errors encountered backup up item\" backup=velero/backup65 logSource=\"pkg/backup/backup.go:431\" name=mysql-7d99fc949-qbkds time=\"2021-12-29T18:29:14Z\" level=error msg=\"Error backing up item\" backup=velero/backup65 error=\"pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\\nIs there a repository at the following location?\\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \\n: exit status 1\" error.file=\"/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184\" error.function=\"github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes\" logSource=\"pkg/backup/backup.go:435\" name=mysql-7d99fc949-qbkds",
"\\\"level=error\\\" in line#2273: time=\\\"2023-06-12T06:50:04Z\\\" level=error msg=\\\"error restoring mysql-869f9f44f6-tp5lv: pods\\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity\\\\ \"restricted:v1.24\\\\\\\": privil eged (container \\\\\\\"mysql\\\\ \" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/restore/restore.go:1388\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n velero container contains \\\"level=error\\\" in line#2447: time=\\\"2023-06-12T06:50:05Z\\\" level=error msg=\\\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity \\\\\\\"restricted:v1.24\\\\\\\": privileged (container \\\\ \"mysql\\\\\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\",\\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n]\",",
"oc get dpa -o yaml",
"configuration: restic: enable: true velero: args: restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' 1 defaultPlugins: - gcp - openshift",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls true",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls true",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata:",
"oc get pods -n openshift-user-workload-monitoring",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s",
"oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring",
"Error from server (NotFound): configmaps \"user-workload-monitoring-config\" not found",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |",
"oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created",
"oc get svc -n openshift-adp -l app.kubernetes.io/name=velero",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: \"velero\"",
"oc apply -f 3_create_oadp_service_monitor.yaml",
"servicemonitor.monitoring.coreos.com/oadp-service-monitor created",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job=\"openshift-adp-velero-metrics-svc\"}[2h]) > 0 for: 5m labels: severity: warning",
"oc apply -f 4_create_oadp_alert_rule.yaml",
"prometheusrule.monitoring.coreos.com/sample-oadp-alert created",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"oc api-resources",
"apiVersion: oadp.openshift.io/vialpha1 kind: DataProtectionApplication spec: configuration: velero: featureFlags: - EnableAPIGroupVersions",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>",
"cat change-storageclass.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: \"\" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi",
"oc create -f change-storage-class-config"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/backup_and_restore/oadp-application-backup-and-restore |
Chapter 18. Persistently mounting file systems | Chapter 18. Persistently mounting file systems As a system administrator, you can persistently mount file systems to configure non-removable storage. 18.1. The /etc/fstab file Use the /etc/fstab configuration file to control persistent mount points of file systems. Each line in the /etc/fstab file defines a mount point of a file system. It includes six fields separated by white space: The block device identified by a persistent attribute or a path in the /dev directory. The directory where the device will be mounted. The file system on the device. Mount options for the file system, including the defaults option to mount the partition at boot time with default options. The mount option field also recognizes the systemd mount unit options in the x-systemd. option format. Backup option for the dump utility. Check order for the fsck utility. Note The systemd-fstab-generator dynamically converts the entries from the /etc/fstab file to the systemd-mount units. systemd automatically mounts LVM volumes from /etc/fstab during manual activation unless the systemd-mount unit is masked. Note The dump utility used for backup of file systems has been removed in RHEL 9, and is available in the EPEL 9 repository. Example 18.1. The /boot file system in /etc/fstab Block device Mount point File system Options Backup Check UUID=ea74bbec-536d-490c-b8d9-5b40bbd7545b /boot xfs defaults 0 0 The systemd service automatically generates mount units from entries in /etc/fstab . Additional resources fstab(5) and systemd.mount(5) man pages on your system 18.2. Adding a file system to /etc/fstab Configure a persistent mount point for a file system in the /etc/fstab configuration file. Procedure Find out the UUID attribute of the file system: For example: Example 18.2. Viewing the UUID of a partition If the mount point directory does not exist, create it: As root, edit the /etc/fstab file and add a line for the file system, identified by the UUID. For example: Example 18.3. The /boot mount point in /etc/fstab Regenerate mount units so that your system registers the new configuration: Try mounting the file system to verify that the configuration works: Additional resources Overview of persistent naming attributes | [
"lsblk --fs storage-device",
"lsblk --fs /dev/sda1 NAME FSTYPE LABEL UUID MOUNTPOINT sda1 xfs Boot ea74bbec-536d-490c-b8d9-5b40bbd7545b /boot",
"mkdir --parents mount-point",
"UUID=ea74bbec-536d-490c-b8d9-5b40bbd7545b /boot xfs defaults 0 0",
"systemctl daemon-reload",
"mount mount-point"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/assembly_persistently-mounting-file-systems_managing-file-systems |
Chapter 11. Optimizing the system performance using the web console | Chapter 11. Optimizing the system performance using the web console Learn how to set a performance profile in the RHEL web console to optimize the performance of the system for a selected task. 11.1. Performance tuning options in the web console Red Hat Enterprise Linux 8 provides several performance profiles that optimize the system for the following tasks: Systems using the desktop Throughput performance Latency performance Network performance Low power consumption Virtual machines The TuneD service optimizes system options to match the selected profile. In the web console, you can set which performance profile your system uses. Additional resources Getting started with TuneD 11.2. Setting a performance profile in the web console Depending on the task you want to perform, you can use the web console to optimize system performance by setting a suitable performance profile. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Overview . In the Configuration section, click the current performance profile. In the Change Performance Profile dialog box, set the required profile. Click Change Profile . Verification The Overview tab now shows the selected performance profile in the Configuration section. 11.3. Monitoring performance on the local system by using the web console The Red Hat Enterprise Linux web console uses the Utilization, Saturation, and Errors (USE) method for troubleshooting. The performance metrics page provides a historical view of your data, organized chronologically with the newest data at the top. On the Metrics and history page, you can view events, errors, and a graphical representation of resource utilization and saturation. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-pcp package, which enables collecting the performance metrics, is installed. The Performance Co-Pilot (PCP) service is enabled: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Overview . In the Usage section, click View metrics and history . The Metrics and history section opens, showing the current system configuration and usage, and the performance metrics in graphical form over a user-specified time interval. 11.4. Monitoring performance on several systems by using the web console and Grafana Grafana enables you to collect data from several systems at once and review a graphical representation of their collected Performance Co-Pilot (PCP) metrics. You can set up performance metrics monitoring and export for several systems in the web console interface. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . You have installed the cockpit-pcp package. You have enabled the PCP service: You have set up the Grafana dashboard. For more information, see Setting up a grafana-server . You have installed the redis package. 
Alternatively, you can install the package from the web console interface later in the procedure. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Overview page, click View metrics and history in the Usage table. Click the Metrics settings button. Move the Export to network slider to the active position. If you do not have the redis package installed, the web console prompts you to install it. To open the pmproxy service, select a zone from a drop-down list and click the Add pmproxy button. Click Save . Verification Click Networking . In the Firewall table, click the Edit rules and zones button. Search for pmproxy in your selected zone. Important Repeat this procedure on all the systems you want to watch. Additional resources Setting up graphical representation of PCP metrics | [
"systemctl enable --now pmlogger.service pmproxy.service",
"systemctl enable --now pmlogger.service pmproxy.service"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/using-the-web-console-for-selecting-performance-profiles_monitoring-and-managing-system-status-and-performance |
Chapter 8. Verifying connectivity to an endpoint | Chapter 8. Verifying connectivity to an endpoint The Cluster Network Operator (CNO) runs a controller, the connectivity check controller, that performs a connection health check between resources within your cluster. By reviewing the results of the health checks, you can diagnose connection problems or eliminate network connectivity as the cause of an issue that you are investigating. 8.1. Connection health checks performed To verify that cluster resources are reachable, a TCP connection is made to each of the following cluster API services: Kubernetes API server service Kubernetes API server endpoints OpenShift API server service OpenShift API server endpoints Load balancers To verify that services and service endpoints are reachable on every node in the cluster, a TCP connection is made to each of the following targets: Health check target service Health check target endpoints 8.2. Implementation of connection health checks The connectivity check controller orchestrates connection verification checks in your cluster. The results for the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel. The Cluster Network Operator (CNO) deploys several resources to the cluster to send and receive connectivity health checks: Health check source This program deploys in a single pod replica set managed by a Deployment object. The program consumes PodNetworkConnectivityCheck objects and connects to the spec.targetEndpoint specified in each object. Health check target A pod deployed as part of a daemon set on every node in the cluster. The pod listens for inbound health checks. The presence of this pod on every node allows for the testing of connectivity to each node. You can configure which nodes the network connectivity source and target pods run on by using a node selector. Additionally, you can specify permissible tolerations for source and target pods. The configuration is defined in the singleton cluster custom resource of the Network API in the config.openshift.io/v1 API group. Pod scheduling occurs after you have updated the configuration. Therefore, you must apply node labels that you intend to use in your selectors before updating the configuration. Labels applied after updating your network connectivity check pod placement are ignored. Refer to the default configuration in the following YAML: Default configuration for connectivity source and target pods apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... networkDiagnostics: 1 mode: "All" 2 sourcePlacement: 3 nodeSelector: checkNodes: groupA tolerations: - key: myTaint effect: NoSchedule operator: Exists targetPlacement: 4 nodeSelector: checkNodes: groupB tolerations: - key: myOtherTaint effect: NoExecute operator: Exists 1 Specifies the network diagnostics configuration. If a value is not specified or an empty object is specified, and spec.disableNetworkDiagnostics=true is set in the network.operator.openshift.io custom resource named cluster , network diagnostics are disabled. If set, this value overrides spec.disableNetworkDiagnostics=true . 2 Specifies the diagnostics mode. The value can be the empty string, All , or Disabled . The empty string is equivalent to specifying All . 3 Optional: Specifies a selector for connectivity check source pods. You can use the nodeSelector and tolerations fields to further specify the sourceNode pods.
You do not have to use both nodeSelector and tolerations , however, for both the source and target pods. These are optional fields that can be omitted. 4 Optional: Specifies a selector for connectivity check target pods. You can use the nodeSelector and tolerations fields to further specify the targetNode pods. You do not have to use both nodeSelector and tolerations , however, for both the source and target pods. These are optional fields that can be omitted. 8.3. Configuring pod connectivity check placement As a cluster administrator, you can configure which nodes the connectivity check pods run on by modifying the network.config.openshift.io object named cluster . Prerequisites Install the OpenShift CLI ( oc ). Procedure To edit the connectivity check configuration, enter the following command: USD oc edit network.config.openshift.io cluster In the text editor, update the networkDiagnostics stanza to specify the node selectors that you want for the source and target pods. To commit your changes, save your changes and exit the text editor. Verification To verify that the source and target pods are running on the intended nodes, enter the following command: USD oc get pods -n openshift-network-diagnostics -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES network-check-source-84c69dbd6b-p8f7n 1/1 Running 0 9h 10.131.0.8 ip-10-0-40-197.us-east-2.compute.internal <none> <none> network-check-target-46pct 1/1 Running 0 9h 10.131.0.6 ip-10-0-40-197.us-east-2.compute.internal <none> <none> network-check-target-8kwgf 1/1 Running 0 9h 10.128.2.4 ip-10-0-95-74.us-east-2.compute.internal <none> <none> network-check-target-jc6n7 1/1 Running 0 9h 10.129.2.4 ip-10-0-21-151.us-east-2.compute.internal <none> <none> network-check-target-lvwnn 1/1 Running 0 9h 10.128.0.7 ip-10-0-17-129.us-east-2.compute.internal <none> <none> network-check-target-nslvj 1/1 Running 0 9h 10.130.0.7 ip-10-0-89-148.us-east-2.compute.internal <none> <none> network-check-target-z2sfx 1/1 Running 0 9h 10.129.0.4 ip-10-0-60-253.us-east-2.compute.internal <none> <none> 8.4. PodNetworkConnectivityCheck object fields The PodNetworkConnectivityCheck object fields are described in the following tables. Table 8.1. PodNetworkConnectivityCheck object fields Field Type Description metadata.name string The name of the object in the following format: <source>-to-<target> . The destination described by <target> includes one of the following strings: load-balancer-api-external load-balancer-api-internal kubernetes-apiserver-endpoint kubernetes-apiserver-service-cluster network-check-target openshift-apiserver-endpoint openshift-apiserver-service-cluster metadata.namespace string The namespace that the object is associated with. This value is always openshift-network-diagnostics . spec.sourcePod string The name of the pod where the connection check originates, such as network-check-source-596b4c6566-rgh92 . spec.targetEndpoint string The target of the connection check, such as api.devcluster.example.com:6443 . spec.tlsClientCert object Configuration for the TLS certificate to use. spec.tlsClientCert.name string The name of the TLS certificate used, if any. The default value is an empty string. status object An object representing the condition of the connection test and logs of recent connection successes and failures. status.conditions array The latest status of the connection check and any previous statuses. status.failures array Connection test logs from unsuccessful attempts.
status.outages array Connection test logs covering the time periods of any outages. status.successes array Connection test logs from successful attempts. The following table describes the fields for objects in the status.conditions array: Table 8.2. status.conditions Field Type Description lastTransitionTime string The time that the condition of the connection transitioned from one status to another. message string The details about the last transition in a human readable format. reason string The last status of the transition in a machine readable format. status string The status of the condition. type string The type of the condition. The following table describes the fields for objects in the status.outages array: Table 8.3. status.outages Field Type Description end string The timestamp from when the connection failure is resolved. endLogs array Connection log entries, including the log entry related to the successful end of the outage. message string A summary of outage details in a human readable format. start string The timestamp from when the connection failure is first detected. startLogs array Connection log entries, including the original failure. Connection log fields The fields for a connection log entry are described in the following table. The object is used in the following fields: status.failures[] status.successes[] status.outages[].startLogs[] status.outages[].endLogs[] Table 8.4. Connection log object Field Type Description latency string Records the duration of the action. message string Provides the status in a human readable format. reason string Provides the reason for the status in a machine readable format. The value is one of TCPConnect , TCPConnectError , DNSResolve , DNSError . success boolean Indicates if the log entry is a success or failure. time string The start time of the connection check. 8.5. Verifying network connectivity for an endpoint As a cluster administrator, you can verify the connectivity of an endpoint, such as an API server, load balancer, service, or pod, and verify that network diagnostics is enabled. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure To confirm that network diagnostics are enabled, enter the following command: USD oc get network.config.openshift.io cluster -o yaml Example output # ... status: # ...
conditions: - lastTransitionTime: "2024-05-27T08:28:39Z" message: "" reason: AsExpected status: "True" type: NetworkDiagnosticsAvailable To list the current PodNetworkConnectivityCheck objects, enter the following command: USD oc get podnetworkconnectivitycheck -n openshift-network-diagnostics Example output NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m View the connection test logs: From the output of the command, identify the endpoint that you want to review the connectivity logs for. To view the object, enter the following command: USD oc get podnetworkconnectivitycheck <name> \ -n openshift-network-diagnostics -o yaml where <name> specifies the name of the PodNetworkConnectivityCheck object. Example output apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics ... 
spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: "" status: conditions: - lastTransitionTime: "2021-01-13T20:11:34Z" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: "True" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" outages: - end: "2021-01-13T20:11:34Z" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T20:11:34Z" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" message: Connectivity restored after 2m59.999789186s start: "2021-01-13T20:08:34Z" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:14:34Z" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:13:34Z" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:12:34Z" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:11:34Z" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp 
connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:10:34Z" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:09:34Z" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:08:34Z" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:07:34Z" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:06:34Z" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:05:34Z" | [
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: # networkDiagnostics: 1 mode: \"All\" 2 sourcePlacement: 3 nodeSelector: checkNodes: groupA tolerations: - key: myTaint effect: NoSchedule operator: Exists targetPlacement: 4 nodeSelector: checkNodes: groupB tolerations: - key: myOtherTaint effect: NoExecute operator: Exists",
"oc edit network.config.openshift.io cluster",
"oc get pods -n openshift-network-diagnostics -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES network-check-source-84c69dbd6b-p8f7n 1/1 Running 0 9h 10.131.0.8 ip-10-0-40-197.us-east-2.compute.internal <none> <none> network-check-target-46pct 1/1 Running 0 9h 10.131.0.6 ip-10-0-40-197.us-east-2.compute.internal <none> <none> network-check-target-8kwgf 1/1 Running 0 9h 10.128.2.4 ip-10-0-95-74.us-east-2.compute.internal <none> <none> network-check-target-jc6n7 1/1 Running 0 9h 10.129.2.4 ip-10-0-21-151.us-east-2.compute.internal <none> <none> network-check-target-lvwnn 1/1 Running 0 9h 10.128.0.7 ip-10-0-17-129.us-east-2.compute.internal <none> <none> network-check-target-nslvj 1/1 Running 0 9h 10.130.0.7 ip-10-0-89-148.us-east-2.compute.internal <none> <none> network-check-target-z2sfx 1/1 Running 0 9h 10.129.0.4 ip-10-0-60-253.us-east-2.compute.internal <none> <none>",
"oc get network.config.openshift.io cluster -o yaml",
"status: # conditions: - lastTransitionTime: \"2024-05-27T08:28:39Z\" message: \"\" reason: AsExpected status: \"True\" type: NetworkDiagnosticsAvailable",
"oc get podnetworkconnectivitycheck -n openshift-network-diagnostics",
"NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m",
"oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml",
"apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - 
latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/verifying-connectivity-endpoint |
probe::scheduler.cpu_on | probe::scheduler.cpu_on Name probe::scheduler.cpu_on - Process is beginning execution on a cpu Synopsis scheduler.cpu_on Values idle - boolean indicating whether current is the idle process task_prev - the process that was previously running on this cpu name - name of the probe point Context The resuming process.
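For illustration, a one-line script that uses this probe might look as follows. This is a hedged sketch: the probe point and its variables are those documented above, and cpu, execname, and task_execname are standard tapset functions, but the script itself is an example, not part of the reference.
stap -e 'probe scheduler.cpu_on { if (!idle) printf("cpu%d: %s (prev %s)\n", cpu(), execname(), task_execname(task_prev)) }'
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scheduler-cpu-on |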
14.8.19. wbinfo | 14.8.19. wbinfo wbinfo <options> The wbinfo program displays information from the winbindd daemon. The winbindd daemon must be running for wbinfo to work.
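For example (a hedged sketch; these are standard wbinfo options, shown only as an illustration):
wbinfo -u # list users known to winbindd
wbinfo -g # list groups known to winbindd
wbinfo -t # verify the workstation trust secret with the domain controller
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-programs-wbinfo |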
Chapter 1. Authenticating with the Guest user | Chapter 1. Authenticating with the Guest user To explore Developer Hub features, you can skip configuring authentication and authorization. You can configure Developer Hub to log in as a Guest user and access Developer Hub features. 1.1. Authenticating with the Guest user on an Operator-based installation After an Operator-based installation, you can configure Developer Hub to log in as a Guest user and access Developer Hub features. Prerequisites You installed Developer Hub by using the Operator . You added a custom Developer Hub application configuration , and have sufficient permissions to modify it. Procedure To enable the guest user in your Developer Hub custom configuration, edit your Developer Hub application configuration with following content: app-config.yaml fragment auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true Verification Go to the Developer Hub login page. To log in with the Guest user account, click Enter in the Guest tile. In the Developer Hub Settings page, your profile name is Guest . You can use Developer Hub features. 1.2. Authenticating with the Guest user on a Helm-based installation On a Helm-based installation, you can configure Developer Hub to log in as a Guest user and access Developer Hub features. Prerequisites You added a custom Developer Hub application configuration , and have sufficient permissions to modify it. You use the Red Hat Developer Hub Helm chart to run Developer Hub . Procedure To enable the guest user in your Developer Hub custom configuration, configure your Red Hat Developer Hub Helm Chart with following content: Red Hat Developer Hub Helm Chart configuration fragment upstream: backstage: appConfig: app: baseUrl: 'https://{{- include "janus-idp.hostname" . }}' auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true Verification Go to the Developer Hub login page. To log in with the Guest user account, click Enter in the Guest tile. In the Developer Hub Settings page, your profile name is Guest . You can use Developer Hub features. | [
"auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true",
"upstream: backstage: appConfig: app: baseUrl: 'https://{{- include \"janus-idp.hostname\" . }}' auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/authentication/authenticating-with-the-guest-user_title-authentication |
Chapter 22. Utilities | Chapter 22. Utilities 22.1. The oVirt Engine Rename Tool 22.1.1. The oVirt Engine Rename Tool When the engine-setup command is run in a clean environment, the command generates a number of certificates and keys that use the fully qualified domain name of the Manager supplied during the setup process. If the fully qualified domain name of the Manager must be changed later on (for example, due to migration of the machine hosting the Manager to a different domain), the records of the fully qualified domain name must be updated to reflect the new name. The ovirt-engine-rename command automates this task. The ovirt-engine-rename command updates records of the fully qualified domain name of the Manager in the following locations: /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf /etc/pki/ovirt-engine/cert.conf /etc/pki/ovirt-engine/cert.template /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache.p12 Warning While the ovirt-engine-rename command creates a new certificate for the web server on which the Manager runs, it does not affect the certificate for the Manager or the certificate authority. Due to this, there is some risk involved in using the ovirt-engine-rename command, particularly in environments that have been upgraded from Red Hat Enterprise Virtualization 3.2 and earlier. Therefore, changing the fully qualified domain name of the Manager by running engine-cleanup and engine-setup is recommended where possible. Warning During the upgrade process, the old hostname must be resolvable. If the oVirt Engine Rename Tool fails with the message [ ERROR ] Host name is not valid: <OLD FQDN> did not resolve into an IP address , add the old hostname to the /etc/hosts file, use the oVirt Engine Rename Tool, and then remove the old hostname from the /etc/hosts file. 22.1.2. Syntax for the oVirt Engine Rename Command The basic syntax for the ovirt-engine-rename command is: The command also accepts the following options: --newname= [new name] Allows you to specify the new fully qualified domain name for the Manager without user interaction. --log= [file] Allows you to specify the path and name of a file into which logs of the rename operation are to be written. --config= [file] Allows you to specify the path and file name of a configuration file to load into the rename operation. --config-append= [file] Allows you to specify the path and file name of a configuration file to append to the rename operation. This option can be used to specify the path and file name of an existing answer file to automate the rename operation. --generate-answer= [file] Allows you to specify the path and file name of the file in which your answers and the values changed by the ovirt-engine-rename command are recorded. 22.1.3. Renaming the Manager with the oVirt Engine Rename Tool You can use the ovirt-engine-rename command to update records of the fully qualified domain name (FQDN) of the Manager. Important The ovirt-engine-rename command does not update SSL certificates, such as imageio-proxy or websocket-proxy . These must be updated manually, after running ovirt-engine-rename . See Updating SSL Certificates below. The tool checks whether the Manager provides a local ISO or Data storage domain. 
If it does, the tool prompts the user to eject, shut down, or place into maintenance mode any virtual machine or storage domain connected to the storage before continuing with the operation. This ensures that virtual machines do not lose connectivity with their virtual disks, and prevents ISO storage domains from losing connectivity during the renaming process. Using the oVirt Engine Rename Tool Prepare all DNS and other relevant records for the new FQDN. Update the DHCP server configuration if DHCP is used. Update the host name on the Manager. Run the following command: When prompted, press Enter to stop the engine service: When prompted, enter the new FQDN for the Manager: The ovirt-engine-rename command updates records of the FQDN of the Manager. For a self-hosted engine, complete these additional steps: Run the following command on every existing self-hosted engine node: This command modifies the FQDN in each self-hosted engine node's local copy of /etc/ovirt-hosted-engine-ha/hosted-engine.conf Run the following command on one of the self-hosted engine nodes: This command modifies the FQDN in the master copy of /etc/ovirt-hosted-engine-ha/hosted-engine.conf on the shared storage domain. Now, all new and existing self-hosted engine nodes use the new FQDN. Updating SSL Certificates Run the following commands after the ovirt-engine-rename command to update the SSL certificates: | [
"/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename",
"/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename",
"During execution engine service will be stopped (OK, Cancel) [OK]:",
"New fully qualified server name: new_engine_fqdn",
"hosted-engine --set-shared-config fqdn new_engine_fqdn --type=he_local",
"hosted-engine --set-shared-config fqdn new_engine_fqdn --type=he_shared",
"1. # names=\"websocket-proxy imageio-proxy\"",
"2. # subject=\"USD( openssl x509 -in /etc/pki/ovirt-engine/certs/apache.cer -noout -subject | sed 's;subject= \\(.*\\);\\1;' )\"",
"3. # . /usr/share/ovirt-engine/bin/engine-prolog.sh",
"4. # for name in USDnames; do /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=\"USD{name}\" --password=mypass --subject=\"USD{subject}\" --keep-key --san=DNS:\"USD{ENGINE_FQDN}\" done"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-Utilities |
Chapter 50. JDBC | Chapter 50. JDBC Since Camel 1.2 Only producer is supported The JDBC component enables you to access databases through JDBC, where SQL queries (SELECT) and operations (INSERT, UPDATE, etc) are sent in the message body. This component uses the standard JDBC API. Note Prerequisites This component does not support transactions out of the box. For transactions, we recommend using the Spring JDBC Component instead. 50.1. Dependencies When using jdbc with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jdbc-starter</artifactId> </dependency> 50.2. URI Format jdbc:dataSourceName[?options] 50.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 50.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection. Components have pre-configured defaults that are commonly used; hence, you need to configure only a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 50.3.2. Configuring Endpoint Options Endpoints have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. Configuring endpoints is done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows to not hard-code urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, firstly for the component followed by the endpoint. 50.4. Component Options The JDBC component supports 4 options, which are listed below. Name Description Default Type dataSource (producer) To use the DataSource instance instead of looking up the data source by name from the registry. DataSource lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc.
true boolean connectionStrategy (advanced) To use a custom strategy for working with connections. Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions. ConnectionStrategy 50.5. Endpoint Options The JDBC endpoint is configured using URI syntax: with the following path and query parameters: 50.5.1. Path Parameter (1 parameter) Name Description Default Type dataSourceName (producer) Required Name of DataSource to lookup in the Registry. If the name is dataSource or default, then Camel will attempt to lookup a default DataSource from the registry, meaning that if there is only one instance of DataSource found, then this DataSource will be used. String 50.5.2. Query Parameters (14 parameters) Name Description Default Type allowNamedParameters (producer) Whether to allow using named parameters in the queries. True Boolean outputClass (producer) Specify the full package and class name to use as conversion when outputType=SelectOne or SelectList. String outputType (producer) Determines the output the producer should use. Enum values: SelectOne SelectList StreamList SelectList JdbcOutputType parameters (producer) Optional parameters to the java.sql.Statement. For example to set maxRows, fetchSize etc. Map readSize (producer) The default maximum number of rows that can be read by a polling query. The default value is 0. int resetAutoCommit (producer) Camel will set the autoCommit on the JDBC connection to be false, commit the change after executing the statement and reset the autoCommit flag of the connection at the end, if the resetAutoCommit is true. If the JDBC connection doesn't support resetting the autoCommit flag, you can set the resetAutoCommit flag to be false, and Camel will not try to reset the autoCommit flag. When used with XA transactions you most likely need to set it to false so that the transaction manager is in charge of committing this tx. True Boolean transacted (producer) Whether transactions are in use. False Boolean useGetBytesForBlob (producer) To read BLOB columns as bytes instead of string data. This may be needed for certain databases such as Oracle where you must read BLOB columns as bytes. False Boolean useHeadersAsParameters (producer) Set this option to true to use the prepareStatementStrategy with named parameters. This allows you to define queries with named placeholders, and use headers with the dynamic values for the query placeholders. False Boolean useJDBC4ColumnNameAndLabelSemantics (producer) Sets whether to use JDBC 4 or JDBC 3.0 or older semantics when retrieving the column name. JDBC 4.0 uses columnLabel to get the column name whereas JDBC 3.0 uses both columnName and columnLabel. Unfortunately JDBC drivers behave differently, so you can use this option to work out issues around your JDBC driver if you get problems using this component. This option defaults to true. True Boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.
False Boolean beanRowMapper (advanced) To use a custom org.apache.camel.component.jdbc.BeanRowMapper when using outputClass. The default implementation will lower case the row names and skip underscores and dashes. For example CUST_ID is mapped as custId. BeanRowMapper connectionStrategy (advanced) To use a custom strategy for working with connections. Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions. ConnectionStrategy prepareStatementStrategy (advanced) Allows the plugin to use a custom org.apache.camel.component.jdbc.JdbcPrepareStatementStrategy to control preparation of the query and prepared statement. JdbcPrepareStatementStrategy 50.6. Message Headers The JDBC component supports 8 message headers, which are listed below: Name Description Default Type CamelJdbcUpdateCount (producer) Constant: JDBC_UPDATE_COUNT If the query is an UPDATE query, the update count is returned in this OUT header. int CamelJdbcRowCount (producer) Constant: JDBC_ROW_COUNT If the query is a SELECT query, the row count is returned in this OUT header. int CamelJdbcColumnNames (producer) Constant: JDBC_COLUMN_NAMES The column names from the ResultSet as a java.util.Set type. Set CamelJdbcParameters (producer) Constant: JDBC_PARAMETERS A java.util.Map which has the headers to be used if useHeadersAsParameters has been enabled. Map CamelRetrieveGeneratedKeys (producer) Constant: JDBC_RETRIEVE_GENERATED_KEYS Set its value to true to retrieve generated keys. False Boolean CamelGeneratedColumns (producer) Constant: JDBC_GENERATED_COLUMNS Set it to specify the expected generated columns. String[] or int[] CamelGeneratedKeysRowCount (producer) Constant: JDBC_GENERATED_KEYS_ROW_COUNT The number of rows in the header that contains generated keys. int CamelGeneratedKeysRows (producer) Constant: JDBC_GENERATED_KEYS_DATA Rows that contain the generated keys. List 50.7. Result By default the result is returned in the OUT body as an ArrayList<HashMap<String, Object>> . The List object contains the list of rows and the Map objects contain each row with the String key as the column name. You can use the option outputType to control the result. Note This component fetches ResultSetMetaData to be able to return the column name as the key in the Map . 50.8. Generated Keys If you insert data using SQL INSERT , then the RDBMS may support auto generated keys. You can instruct the JDBC producer to return the generated keys in headers. To do that set the header CamelRetrieveGeneratedKeys=true . Then the generated keys will be provided as headers with the keys listed in the table above. Note Using generated keys does not work together with named parameters. 50.9. Using named parameters In the given route below, we want to get all the projects from the projects table. Notice the SQL query has 2 named parameters, :?lic and :?min. Camel will then look up these parameters from the message headers. Notice in the example below we set two headers with constant values for the named parameters: from("direct:projects") .setHeader(":?lic", constant("ASF")) .setHeader(":?min", constant(123)) .setBody(simple("select * from projects where license = :?lic and id > :?min order by id")) .to("jdbc:myDataSource?useHeadersAsParameters=true") You can also store the header values in a java.util.Map and store the map on the headers with the key CamelJdbcParameters . 50.10.
Samples In the following example, we set up the DataSource that camel-jdbc requires. First we register our datasource in the Camel registry as testdb : EmbeddedDatabase db = new EmbeddedDatabaseBuilder() .setType(EmbeddedDatabaseType.DERBY).addScript("sql/init.sql").build(); CamelContext context = ... context.getRegistry().bind("testdb", db); Then we configure a route that routes to the JDBC component, so the SQL will be executed. Note how we refer to the testdb datasource that was bound in the previous step: from("direct:hello") .to("jdbc:testdb"); We create an endpoint, add the SQL query to the body of the IN message, and then send the exchange. The result of the query is returned in the OUT body: Endpoint endpoint = context.getEndpoint("direct:hello"); Exchange exchange = endpoint.createExchange(); // then we set the SQL on the in body exchange.getMessage().setBody("select * from customer order by ID"); // now we send the exchange to the endpoint, and receive the response from Camel Exchange out = template.send(endpoint, exchange); If you want to work on the rows one by one instead of the entire ResultSet at once, you need to use the Splitter EIP such as: from("direct:hello") // here we split the data from the testdb into new messages one by one // so the mock endpoint will receive a message per row in the table // the StreamList option allows to stream the result of the query without creating a List of rows // and notice we also enable streaming mode on the splitter .to("jdbc:testdb?outputType=StreamList") .split(body()).streaming() .to("mock:result"); 50.11. Sample - Polling the database every minute If we want to poll a database using the JDBC component, we need to combine it with a polling scheduler such as the Timer or Quartz etc. In the following example, we retrieve data from the database every 60 seconds: from("timer://foo?period=60000") .setBody(constant("select * from customer")) .to("jdbc:testdb") .to("activemq:queue:customers"); 50.12. Sample - Move data between data sources A common use case is to query for data, process it and move it to another data source (ETL operations). In the following example, we retrieve new customer records from the source table every hour, filter/transform them and move them to a destination table: from("timer://MoveNewCustomersEveryHour?period=3600000") .setBody(constant("select * from customer where create_time > (sysdate-1/24)")) .to("jdbc:testdb") .split(body()) .process(new MyCustomerProcessor()) //filter/transform results as needed .setBody(simple("insert into processed_customer values('USD{body[ID]}','USD{body[NAME]}')")) .to("jdbc:testdb"); 50.13. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.jdbc.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. True Boolean camel.component.jdbc.connection-strategy To use a custom strategy for working with connections. Do not use a custom strategy when using the spring-jdbc component because a special Spring ConnectionStrategy is used by default to support Spring Transactions. The option is a org.apache.camel.component.jdbc.ConnectionStrategy type.
ConnectionStrategy camel.component.jdbc.enabled Whether to enable auto configuration of the jdbc component. This is enabled by default. Boolean camel.component.jdbc.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jdbc-starter</artifactId> </dependency>",
"jdbc:dataSourceName[?options]",
"jdbc:dataSourceName",
"from(\"direct:projects\") .setHeader(\":?lic\", constant(\"ASF\")) .setHeader(\":?min\", constant(123)) .setBody(simple(\"select * from projects where license = :?lic and id > :?min order by id\")) .to(\"jdbc:myDataSource?useHeadersAsParameters=true\")",
"EmbeddedDatabase db = new EmbeddedDatabaseBuilder() .setType(EmbeddedDatabaseType.DERBY).addScript(\"sql/init.sql\").build(); CamelContext context = context.getRegistry().bind(\"testdb\", db);",
"from(\"direct:hello\") .to(\"jdbc:testdb\");",
"Endpoint endpoint = context.getEndpoint(\"direct:hello\"); Exchange exchange = endpoint.createExchange(); // then we set the SQL on the in body exchange.getMessage().setBody(\"select * from customer order by ID\"); // now we send the exchange to the endpoint, and receives the response from Camel Exchange out = template.send(endpoint, exchange);",
"from(\"direct:hello\") // here we split the data from the testdb into new messages one by one // so the mock endpoint will receive a message per row in the table // the StreamList option allows to stream the result of the query without creating a List of rows // and notice we also enable streaming mode on the splitter .to(\"jdbc:testdb?outputType=StreamList\") .split(body()).streaming() .to(\"mock:result\");",
"from(\"timer://foo?period=60000\") .setBody(constant(\"select * from customer\")) .to(\"jdbc:testdb\") .to(\"activemq:queue:customers\");",
"from(\"timer://MoveNewCustomersEveryHour?period=3600000\") .setBody(constant(\"select * from customer where create_time > (sysdate-1/24)\")) .to(\"jdbc:testdb\") .split(body()) .process(new MyCustomerProcessor()) //filter/transform results as needed .setBody(simple(\"insert into processed_customer values('USD{body[ID]}','USD{body[NAME]}')\")) .to(\"jdbc:testdb\");"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jdbc-component-starter |
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_8.5_release_notes/rhdg-downloads_datagrid |
3.2. Load Balancer Using Direct Routing | 3.2. Load Balancer Using Direct Routing Direct routing allows real servers to process and route packets directly to a requesting user rather than passing outgoing packets through the LVS router. Direct routing requires that the real servers be physically connected to a network segment with the LVS router and be able to process and direct outgoing packets as well. Network Layout In a direct routing Load Balancer setup, the LVS router needs to receive incoming requests and route them to the proper real server for processing. The real servers then need to directly route the response to the client. So, for example, if the client is on the Internet, and sends the packet through the LVS router to a real server, the real server must be able to connect directly to the client through the Internet. This can be done by configuring a gateway for the real server to pass packets to the Internet. Each real server in the server pool can have its own separate gateway (and each gateway with its own connection to the Internet), allowing for maximum throughput and scalability. For typical Load Balancer setups, however, the real servers can communicate through one gateway (and therefore one network connection). Hardware The hardware requirements of a Load Balancer system using direct routing is similar to other Load Balancer topologies. While the LVS router needs to be running Red Hat Enterprise Linux to process the incoming requests and perform load-balancing for the real servers, the real servers do not need to be Linux machines to function correctly. The LVS routers need one or two NICs each (depending on if there is a backup router). You can use two NICs for ease of configuration and to distinctly separate traffic; incoming requests are handled by one NIC and routed packets to real servers on the other. Since the real servers bypass the LVS router and send outgoing packets directly to a client, a gateway to the Internet is required. For maximum performance and availability, each real server can be connected to its own separate gateway which has its own dedicated connection to the network to which the client is connected (such as the Internet or an intranet). Software There is some configuration outside of keepalived that needs to be done, especially for administrators facing ARP issues when using Load Balancer by means of direct routing. Refer to Section 3.2.1, "Direct Routing Using arptables" or Section 3.2.3, "Direct Routing Using iptables" for more information. 3.2.1. Direct Routing Using arptables In order to configure direct routing using arptables , each real server must have their virtual IP address configured, so they can directly route packets. ARP requests for the VIP are ignored entirely by the real servers, and any ARP packets that might otherwise be sent containing the VIPs are mangled to contain the real server's IP instead of the VIPs. Using the arptables method, applications may bind to each individual VIP or port that the real server is servicing. For example, the arptables method allows multiple instances of Apache HTTP Server to be running and bound explicitly to different VIPs on the system. However, using the arptables method, VIPs cannot be configured to start on boot using standard Red Hat Enterprise Linux system configuration tools. 
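Note Because a VIP alias added with ip addr is not managed by the standard network scripts, it is lost on reboot. One possible workaround, shown below as a hedged sketch only (the unit name lvs-vip.service, the VIP 192.168.76.24, and the device eth0 are placeholders, not part of the original procedure), is a oneshot systemd unit that restores the alias created in step 3 of the following procedure:
cat > /etc/systemd/system/lvs-vip.service <<'EOF'
[Unit]
Description=Add LVS virtual IP alias
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/sbin/ip addr add 192.168.76.24/32 dev eth0
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable lvs-vip.service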
To configure each real server to ignore ARP requests for each virtual IP address, perform the following steps: Create the ARP table entries for each virtual IP address on each real server (the real_ip is the IP the director uses to communicate with the real server; often this is the IP bound to eth0 ): This will cause the real servers to ignore all ARP requests for the virtual IP addresses, and change any outgoing ARP responses which might otherwise contain the virtual IP so that they contain the real IP of the server instead. The only node that should respond to ARP requests for any of the VIPs is the current active LVS node. Once this has been completed on each real server, save the ARP table entries by typing the following commands on each real server: arptables-save > /etc/sysconfig/arptables systemctl enable arptables.service The systemctl enable command will cause the system to reload the arptables configuration on bootup before the network is started. Configure the virtual IP address on all real servers using ip addr to create an IP alias. For example: Configure Keepalived for Direct Routing. This can be done by adding lb_kind DR to the keepalived.conf file. Refer to Chapter 4, Initial Load Balancer Configuration with Keepalived for more information. 3.2.2. Direct Routing Using firewalld You may also work around the ARP issue using the direct routing method by creating firewall rules using firewalld . To configure direct routing using firewalld , you must add rules that create a transparent proxy so that a real server will service packets sent to the VIP address, even though the VIP address does not exist on the system. The firewalld method is simpler to configure than the arptables method. This method also circumvents the LVS ARP issue entirely, because the virtual IP address or addresses exist only on the active LVS director.
3.2.3. Direct Routing Using iptables You may also work around the ARP issue using the direct routing method by creating iptables firewall rules. To configure direct routing using iptables , you must add rules that create a transparent proxy so that a real server will service packets sent to the VIP address, even though the VIP address does not exist on the system. The iptables method is simpler to configure than the arptables method. This method also circumvents the LVS ARP issue entirely, because the virtual IP address or addresses exist only on the active LVS director. However, there are performance issues using the iptables method compared to arptables , as there is overhead in forwarding and masquerading every packet. You also cannot reuse ports using the iptables method. For example, it is not possible to run two separate Apache HTTP Server services bound to port 80, because both must bind to INADDR_ANY instead of the virtual IP addresses. To configure direct routing using the iptables method, perform the following steps: On each real server, enter the following command for every VIP, port, and protocol (TCP or UDP) combination intended to be serviced by the real server: iptables -t nat -A PREROUTING -p <tcp|udp> -d <vip> --dport <port> -j REDIRECT This command will cause the real servers to process packets destined for the VIP and port that they are given. Save the configuration on each real server: The systemctl enable command will cause the system to reload the iptables configuration on bootup before the network is started. 3.2.4. Direct Routing Using sysctl Another way to deal with the ARP limitation when employing Direct Routing is to use the sysctl interface. Administrators can configure two sysctl settings such that the real server will not announce the VIP in ARP requests and will not reply to ARP requests for the VIP address. To enable this, enter the following commands: Alternatively, you may add the following lines to the /etc/sysctl.d/arp.conf file: | [
"arptables -A IN -d <virtual_ip> -j DROP arptables -A OUT -s <virtual_ip> -j mangle --mangle-ip-s <real_ip>",
"ip addr add 192.168.76.24 dev eth0",
"systemctl start firewalld",
"systemctl enable firewalld",
"firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -d vip -p tcp|udp -m tcp|udp --dport port -j REDIRECT",
"firewall-cmd --reload",
"iptables-save > /etc/sysconfig/iptables systemctl enable iptables.service",
"echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce",
"net.ipv4.conf.eth0.arp_ignore = 1 net.ipv4.conf.eth0.arp_announce = 2"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-lvs-direct-vsa |
Chapter 4. Managing namespace buckets | Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Red Hat OpenShift Data Foundation 4.6 onwards supports the following namespace bucket operations: ListObjectVersions ListObjects PutObject CopyObject ListParts CreateMultipartUpload CompleteMultipartUpload UploadPart UploadPartCopy AbortMultipartUpload GetObjectAcl GetObject HeadObject DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the previous step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket.
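For illustration only, a filled-in single-policy bucket class might look like the following sketch, using the document's heredoc convention; the names my-bucket-class and my-aws-namespacestore are hypothetical placeholders, not values created earlier in this procedure:

cat << EOF | oc apply -f -
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: noobaa
  name: my-bucket-class        # hypothetical bucket class name
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Single
    single:
      resource: my-aws-namespacestore   # hypothetical NamespaceStore name
EOF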
A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the earlier step by applying the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the previous step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the previous step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC.
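Whichever YAML variant you used, it is worth confirming that the resources reconciled before continuing. This verification sketch is an assumption based on the standard resource names rather than a documented step; the CRD plural names used here are namespacestores.noobaa.io, bucketclasses.noobaa.io, and objectbucketclaims (short name obc):

# Check that the NamespaceStore and bucket class reached the Ready phase
oc get namespacestores.noobaa.io -n openshift-storage
oc get bucketclasses.noobaa.io -n openshift-storage
# Check that the OBC was provisioned and its bucket created
oc get obc -n openshift-storage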
4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, for IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources> A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the previous step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC.
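Putting the three CLI steps of this section together, a complete run might look like the following sketch; the resource names my-aws-ns, my-bucket-class, and my-bucket-claim, as well as the credentials, are hypothetical values chosen for illustration:

# Create the NamespaceStore backed by an existing AWS bucket (hypothetical values)
noobaa namespacestore create aws-s3 my-aws-ns --access-key AKIAEXAMPLEKEY --secret-key exampleSecretKey --target-bucket my-existing-aws-bucket -n openshift-storage
# Define a single-policy bucket class that reads from and writes to that store
noobaa bucketclass create namespace-bucketclass single my-bucket-class --resource my-aws-ns -n openshift-storage
# Claim a bucket for an application in the my-app namespace
noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass my-bucket-class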
4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Data Foundation . Click the Namespace Store tab to create the namespacestore resources to be used in the namespace bucket. Click Create namespace store . Enter a namespacestore name. Choose a provider. Choose a region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Choose a target bucket. Click Create . Verify that the namespacestore is in the Ready state. Repeat these steps until you have the desired number of resources. Click the Bucket Class tab, and then click Create a new Bucket Class . Select the Namespace radio button. Enter a Bucket Class name. (Optional) Add description. Click Next . Choose a namespace policy type for your namespace bucket, and then click Next . Select the target resources. If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Click Next . Review your new bucket class, and then click Create Bucketclass . On the BucketClass page, verify that your newly created resource is in the Created phase. In the OpenShift Web Console, click Storage Data Foundation . In the Status card, click Storage System and click the storage system link from the pop-up that appears. In the Object tab, click Multicloud Object Gateway Buckets Namespace Buckets tab . Click Create Namespace Bucket . On the Choose Name tab, specify a name for the namespace bucket and click Next .
On the Set Placement tab: Under Read Policy , select the checkbox for each namespace resource created in the earlier step that the namespace bucket should read data from. If the namespace policy type you are using is Multi , then, under Write Policy , specify which namespace resource the namespace bucket should write data to. Click Next . Click Create . Verification steps Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name. 4.4. Sharing legacy application data with cloud-native applications using the S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to do the following: Export the pre-existing file system datasets, that is, an RWX volume such as Ceph FileSystem (CephFS), or create new file system datasets using the S3 protocol. Access the file system datasets from both the file system and the S3 protocol. Configure S3 accounts and map them to existing or new file system user identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Data Foundation . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If a folder with that name already exists, it is used to create the NamespaceStore; otherwise, a folder with that name is created. Click Create . Verify the NamespaceStore is in the Ready state.
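If you prefer the command line to the web console, the same kind of filesystem NamespaceStore can be created with the MCG CLI, as shown later in this chapter for the openshift-storage namespace; in this sketch the names legacy-ns and my-rwx-pvc are hypothetical:

# Create a filesystem (NSFS) NamespaceStore backed by an existing RWX PVC
noobaa namespacestore create nsfs legacy-ns --pvc-name='my-rwx-pvc' --fs-backend='CEPH_FS' -n openshift-storage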
4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. nsfs_account_config A mandatory field that indicates if the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false . Default value is false . If it is set to 'true', it prevents you from accessing other types of buckets. uid The user ID of the filesystem to which the MCG account will be mapped. It is used to access and manage data on the filesystem. gid The group ID of the filesystem to which the MCG account will be mapped. It is used to access and manage data on the filesystem. The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the previous step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the previous step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the previous step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the previous step: <YAML_file> Specify the name of the YAML file.
For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, the /nsfs/legacy-namespace mount point: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create an MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as those of the legacy application. You can find them in the output of the earlier steps. For example: Create an MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same, which results in permission denied errors or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files takes place, and the SELinux labels now match those of the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . Create a service account: <service_account_name> Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account.
This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to use as the SELinux label in the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the security context to be used as the SELinux label in the deployment configuration is specified correctly: For example: The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace. | [
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"noobaa account create <noobaa-account-name> [flags]",
"noobaa account create testaccount --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore",
"NooBaaAccount spec: allow_bucket_creation: true default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>",
"noobaa account list NAME DEFAULT_RESOURCE PHASE AGE testaccount noobaa-default-backing-store Ready 1m17s",
"oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001",
"oc get ns <application_namespace> -o yaml | grep scc",
"oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000",
"oc project <application_namespace>",
"oc project testnamespace",
"oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s",
"oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s",
"oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}",
"oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'",
"oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]",
"oc exec -it <pod_name> -- df <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"oc get pv | grep <pv_name>",
"oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s",
"oc get pv <pv_name> -o yaml",
"oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound",
"cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF",
"oc create -f <YAML_file>",
"oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created",
"oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s",
"oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".",
"noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'",
"noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'",
"oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace",
"noobaa account create <user_account> --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'",
"noobaa account create leguser --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'",
"oc exec -it <pod_name> -- mkdir <mount_path> /nsfs",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs",
"noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'",
"noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'",
"oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>",
"oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..",
"oc exec -it <pod_name> -- ls -latrZ <mount_path>",
"oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..",
"noobaa bucket delete <bucket_name>",
"noobaa bucket delete legacy-bucket",
"noobaa account delete <user_account>",
"noobaa account delete leguser",
"noobaa namespacestore delete <nsfs_namespacestore>",
"noobaa namespacestore delete legacy-namespace",
"oc delete pv <cephfs_pv_name>",
"oc delete pvc <cephfs_pvc_name>",
"oc delete pv cephfs-pv-legacy-openshift-storage",
"oc delete pvc cephfs-pvc-legacy",
"oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"oc edit ns <appplication_namespace>",
"oc edit ns testnamespace",
"oc get ns <application_namespace> -o yaml | grep sa.scc.mcs",
"oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0",
"cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF",
"oc create -f scc.yaml",
"oc create serviceaccount <service_account_name>",
"oc create serviceaccount testnamespacesa",
"oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>",
"oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa",
"oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'",
"oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'",
"oc edit dc <pod_name> -n <application_namespace>",
"spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>",
"oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace",
"spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0",
"oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext",
"oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/managing-namespace-buckets_rhodf |
6.3. About Synchronized Attributes | 6.3. About Synchronized Attributes Identity Management synchronizes a subset of user attributes between IdM and Active Directory user entries. Any other attributes present in the entry, either in Identity Management or in Active Directory, are ignored by synchronization. Note Most POSIX attributes are not synchronized. Although there are significant schema differences between the Active Directory LDAP schema and the 389 Directory Server LDAP schema used by Identity Management, there are many attributes that are the same. These attributes are simply synchronized between the Active Directory and IdM user entries, with no changes to the attribute name or value format. Table 6.1. User Schema That Are the Same in Identity Management and Windows Servers cn [2] physicalDeliveryOfficeName description postOfficeBox destinationIndicator postalAddress facsimileTelephoneNumber postalCode givenname registeredAddress homePhone sn homePostalAddress st initials street l telephoneNumber mail teletexTerminalIdentifier mobile telexNumber o title ou userCertificate pager x121Address Some attributes have different names but still have direct parity between IdM (which uses 389 Directory Server) and Active Directory. These attributes are mapped by the synchronization process. Table 6.2. User Schema Mapped between Identity Management and Active Directory Identity Management Active Directory cn [a] name nsAccountLock userAccountControl ntUserDomainId sAMAccountName ntUserHomeDir homeDirectory ntUserScriptPath scriptPath ntUserLastLogon lastLogon ntUserLastLogoff lastLogoff ntUserAcctExpires accountExpires ntUserCodePage codePage ntUserLogonHours logonHours ntUserMaxStorage maxStorage ntUserProfile profilePath ntUserParms userParameters ntUserWorkstations userWorkstations [a] The cn is mapped directly ( cn to cn ) when synchronizing from Identity Management to Active Directory. When synchronizing from Active Directory, cn is mapped from the name attribute in Active Directory to the cn attribute in Identity Management. 6.3.1. User Schema Differences between Identity Management and Active Directory Even though attributes may be successfully synchronized between Active Directory and IdM, there may still be differences in how Active Directory and Identity Management define the underlying X.500 object classes. This could lead to differences in how the data are handled in the different LDAP services. This section describes the differences in how Active Directory and Identity Management handle some of the attributes that can be synchronized between the two domains. 6.3.1.1. Values for cn Attributes In 389 Directory Server, the cn attribute can be multi-valued, while in Active Directory this attribute must have only a single value. When the Identity Management cn attribute is synchronized, then, only one value is sent to the Active Directory peer. What this means for synchronization is that, potentially, if a cn value is added to an Active Directory entry and that value is not one of the values for cn in Identity Management, then all of the Identity Management cn values are overwritten with the single Active Directory value. One other important difference is that Active Directory uses the cn attribute as its naming attribute, whereas Identity Management uses uid . This means that there is the potential to rename the entry entirely (and accidentally) if the cn attribute is edited in Identity Management.
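A quick way to see the multi-valued behavior on the IdM side is to add a second cn value directly with ldapmodify. This sketch is illustrative only; the server name, bind credentials, and the user jsmith are hypothetical values, not part of the synchronization configuration described here:

# Add a second cn value to an IdM user entry; 389 Directory Server accepts this,
# but only one value is sent to Active Directory during synchronization, and an
# AD-side cn edit can later overwrite all of the IdM values
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://ipaserver.example.com <<EOF
dn: uid=jsmith,cn=users,cn=accounts,dc=example,dc=com
changetype: modify
add: cn
cn: John Smith Jr.
EOF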
6.3.1.2. Values for street and streetAddress Active Directory uses the attribute streetAddress for a user's postal address; 389 Directory Server uses the street attribute in the same way. There are two important differences in the way that Active Directory and Identity Management use the streetAddress and street attributes, respectively: In 389 Directory Server, streetAddress is an alias for street . Active Directory also has the street attribute, but it is a separate attribute that can hold an independent value, not an alias for streetAddress . Active Directory defines both streetAddress and street as single-valued attributes, while 389 Directory Server defines street as a multi-valued attribute, as specified in RFC 4519. Because of the different ways that 389 Directory Server and Active Directory handle streetAddress and street attributes, there are two rules to follow when setting address attributes in Active Directory and Identity Management: The synchronization process maps streetAddress in the Active Directory entry to street in Identity Management. To avoid conflicts, the street attribute should not be used in Active Directory. Only one Identity Management street attribute value is synchronized to Active Directory. If the streetAddress attribute is changed in Active Directory and the new value does not already exist in Identity Management, then all street attribute values in Identity Management are replaced with the new, single Active Directory value. 6.3.1.3. Constraints on the initials Attribute For the initials attribute, Active Directory imposes a maximum length constraint of six characters, but 389 Directory Server does not have a length limit. If an initials value longer than six characters is added to Identity Management, the value is trimmed when it is synchronized with the Active Directory entry. 6.3.1.4. Requiring the surname (sn) Attribute Active Directory allows person entries to be created without a surname attribute. However, RFC 4519 defines the person object class as requiring a surname attribute, and this is the definition used in Directory Server. If an Active Directory person entry is created without a surname attribute, that entry will not be synchronized to IdM since it fails with an object class violation. 6.3.2. Active Directory Entries and POSIX Attributes When a Windows user account contains values for the uidNumber and gidNumber attributes, WinSync does not synchronize these values over to Identity Management. Instead, it creates new UID and GID values in Identity Management. As a result, the values for uidNumber and gidNumber are different in Active Directory and in Identity Management. [2] The cn is treated differently than other synchronized attributes. It is mapped directly ( cn to cn ) when synchronizing from Identity Management to Active Directory. When synchronizing from Active Directory to Identity Management, however, cn is mapped from the name attribute on Windows to the cn attribute in Identity Management. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/about-sync-schema |