title | content | commands | url
---|---|---|---|
10.3. IBM Installation Tools
|
10.3. IBM Installation Tools The IBM Installation Toolkit is an optional utility that speeds up the installation of Linux on IBM Power Systems and is especially helpful for those unfamiliar with Linux. You can use the IBM Installation Toolkit to: [1] Install and configure Linux on a non-virtualized IBM Power Systems server. Install and configure Linux on servers with previously-configured logical partitions (LPARs, also known as virtualized servers). Install IBM service and productivity tools on a new or previously installed Linux system. The IBM service and productivity tools include dynamic logical partition (DLPAR) utilities. Upgrade the system firmware level on IBM Power Systems servers. Perform diagnostics or maintenance operations on previously installed systems. Migrate a LAMP server (software stack) and application data from a System x to a System p system. A LAMP server is a bundle of open source software. LAMP is an acronym for Linux, Apache HTTP Server , MySQL relational database, and the PHP (or sometimes Perl or Python) language. Documentation for the IBM Installation Toolkit for PowerLinux is available in the Linux Information Center at https://www.ibm.com/support/pages/ibm-installation-toolkit-powerlinux-version-52-now-available PowerLinux service and productivity tools are an optional set of tools that includes hardware service diagnostic aids, productivity tools, and installation aids for Linux operating systems on IBM servers based on POWER7, POWER6, POWER5, and POWER4 technology. [1] Parts of this section were previously published at IBM's Linux information for IBM systems resource.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-planning-ibm-tools-ppc
|
Chapter 3. Red Hat build of Keycloak JavaScript adapter
|
Chapter 3. Red Hat build of Keycloak JavaScript adapter Red Hat build of Keycloak comes with a client-side JavaScript library called keycloak-js that can be used to secure web applications. The adapter also comes with built-in support for Cordova applications. The adapter uses the OpenID Connect protocol under the covers. You can take a look at the Secure applications and services with OpenID Connect chapter for more general information about OpenID Connect endpoints and capabilities. 3.1. Installation We recommend that you install the keycloak-js package from NPM: npm install keycloak-js 3.2. Red Hat build of Keycloak server configuration One important thing to consider about using client-side applications is that the client has to be a public client, as there is no secure way to store client credentials in a client-side application. This consideration makes it very important to make sure the redirect URIs you have configured for the client are correct and as specific as possible. To use the adapter, create a client for your application in the Red Hat build of Keycloak Admin Console. Make the client public by toggling Client authentication to Off on the Capability config page. You also need to configure Valid Redirect URIs and Web Origins . Be as specific as possible, as failing to do so may result in a security vulnerability. 3.3. Using the adapter The following example shows how to initialize the adapter. Make sure that you replace the options passed to the Keycloak constructor with those of the client you have configured. import Keycloak from 'keycloak-js'; const keycloak = new Keycloak({ url: "http://keycloak-server", realm: "my-realm", clientId: "my-app" }); try { const authenticated = await keycloak.init(); if (authenticated) { console.log('User is authenticated'); } else { console.log('User is not authenticated'); } } catch (error) { console.error('Failed to initialize adapter:', error); } To authenticate, you call the login function. Two options exist to make the adapter automatically authenticate. You can pass login-required or check-sso to the init() function. login-required authenticates the client if the user is logged in to Red Hat build of Keycloak or displays the login page if the user is not logged in. check-sso only authenticates the client if the user is already logged in. If the user is not logged in, the browser is redirected back to the application and remains unauthenticated. You can configure a silent check-sso option. With this feature enabled, your browser will not perform a full redirect to the Red Hat build of Keycloak server and back to your application; instead, this action is performed in a hidden iframe. Therefore, your application resources are only loaded and parsed once by the browser, namely when the application is initialized, and not again after the redirect back from Red Hat build of Keycloak to your application. This approach is particularly useful in the case of SPAs (Single Page Applications). To enable the silent check-sso , you provide a silentCheckSsoRedirectUri attribute in the init method.
Make sure this URI is a valid endpoint in the application; it must be configured as a valid redirect for the client in the Red Hat build of Keycloak Admin Console: await keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: `${location.origin}/silent-check-sso.html` }); The page at the silent check-sso redirect uri is loaded in the iframe after successfully checking your authentication state and retrieving the tokens from the Red Hat build of Keycloak server. It has no other task than sending the received tokens to the main application and should only look like this: <!doctype html> <html> <body> <script> parent.postMessage(location.href, location.origin); </script> </body> </html> Remember that this page must be served by your application at the specified location in silentCheckSsoRedirectUri and is not part of the adapter. Warning Silent check-sso functionality is limited in some modern browsers. Please see the Modern Browsers with Tracking Protection Section . To enable login-required , set onLoad to login-required and pass it to the init method: await keycloak.init({ onLoad: 'login-required' }); After the user is authenticated, the application can make requests to RESTful services secured by Red Hat build of Keycloak by including the bearer token in the Authorization header. For example: async function fetchUsers() { const response = await fetch('/api/users', { headers: { accept: 'application/json', authorization: `Bearer ${keycloak.token}` } }); return response.json(); } One thing to keep in mind is that the access token by default has a short expiration time, so you may need to refresh the access token prior to sending the request. You refresh this token by calling the updateToken() method. This method returns a Promise, which makes it easy to invoke the service only if the token was successfully refreshed and to display an error to the user if it was not. For example: try { await keycloak.updateToken(30); } catch (error) { console.error('Failed to refresh token:', error); } const users = await fetchUsers(); Note Both the access and refresh tokens are stored in memory and are not persisted in any kind of storage. Therefore, these tokens should never be persisted to prevent hijacking attacks. 3.4. Session Status iframe By default, the adapter creates a hidden iframe that is used to detect if a Single Sign-Out has occurred. This iframe does not require any network traffic. Instead, the status is retrieved by looking at a special status cookie. This feature can be disabled by setting checkLoginIframe: false in the options passed to the init() method. You should not rely on looking at this cookie directly. Its format can change, and it is also associated with the URL of the Red Hat build of Keycloak server, not your application. Warning Session Status iframe functionality is limited in some modern browsers. Please see the Modern Browsers with Tracking Protection Section . 3.5. Implicit and hybrid flow By default, the adapter uses the Authorization Code flow. With this flow, the Red Hat build of Keycloak server returns an authorization code, not an authentication token, to the application. The JavaScript adapter exchanges the code for an access token and a refresh token after the browser is redirected back to the application. Red Hat build of Keycloak also supports the Implicit flow, where an access token is sent immediately after successful authentication with Red Hat build of Keycloak.
This flow may have better performance than the standard flow because no additional request exists to exchange the code for tokens, but it has implications when the access token expires. However, sending the access token in the URL fragment can be a security vulnerability. For example, the token could be leaked through web server logs or browser history. To enable implicit flow, you enable the Implicit Flow Enabled flag for the client in the Red Hat build of Keycloak Admin Console. You also pass the parameter flow with the value implicit to the init method: await keycloak.init({ flow: 'implicit' }) Note that only an access token is provided and no refresh token exists. This means that once the access token has expired, the application has to redirect to Red Hat build of Keycloak again to obtain a new access token. Red Hat build of Keycloak also supports the Hybrid flow. This flow requires the client to have both the Standard Flow and Implicit Flow enabled in the Admin Console. The Red Hat build of Keycloak server then sends both the code and tokens to your application. The access token can be used immediately, while the code can be exchanged for access and refresh tokens. Similar to the implicit flow, the hybrid flow is good for performance because the access token is available immediately. But the token is still sent in the URL, and the security vulnerability mentioned earlier may still apply. One advantage of the Hybrid flow is that the refresh token is made available to the application. For the Hybrid flow, you need to pass the parameter flow with the value hybrid to the init method: await keycloak.init({ flow: 'hybrid' }); 3.6. Hybrid Apps with Cordova Red Hat build of Keycloak supports hybrid mobile apps developed with Apache Cordova . The adapter has two modes for this: cordova and cordova-native : The default is cordova , which the adapter automatically selects if no adapter type has been explicitly configured and window.cordova is present. When logging in, it opens an InApp Browser that lets the user interact with Red Hat build of Keycloak and afterwards returns to the app by redirecting to http://localhost . Because of this behavior, you must whitelist this URL as a valid redirect-uri in the client configuration section of the Admin Console. While this mode is easy to set up, it also has some disadvantages: The InApp-Browser is a browser embedded in the app and is not the phone's default browser. Therefore, it will have different settings and stored credentials will not be available. The InApp-Browser might also be slower, especially when rendering more complex themes. There are security concerns to consider before using this mode: the app can gain access to the credentials of the user, because it has full control of the browser rendering the login page, so do not allow its use in apps you do not trust. The alternative mode is `cordova-native` , which takes a different approach. It opens the login page using the system's browser. After the user has authenticated, the browser redirects back into the application using a special URL. From there, the Red Hat build of Keycloak adapter can finish the login by reading the code or token from the URL.
You can activate the native mode by passing the adapter type cordova-native to the init() method: await keycloak.init({ adapter: 'cordova-native' }); This adapter requires two additional plugins: cordova-plugin-browsertab : allows the app to open webpages in the system's browser cordova-plugin-deeplinks : allows the browser to redirect back to your app by special URLs The technical details for linking to an app differ on each platform, and special setup is needed. Please refer to the Android and iOS sections of the deeplinks plugin documentation for further instructions. Different kinds of links exist for opening apps: custom schemes, such as myapp://login or android-app://com.example.myapp/https/example.com/login , and Universal Links (iOS) / Deep Links (Android) . While the former are easier to set up and tend to work more reliably, the latter offer extra security because they are unique and only the owner of a domain can register them. Custom-URLs are deprecated on iOS. For best reliability, we recommend that you use universal links combined with a fallback site that uses a custom-url link. Furthermore, we recommend the following steps to improve compatibility with the adapter: Universal Links on iOS seem to work more reliably with response-mode set to query . To prevent Android from opening a new instance of your app on redirect, add the following snippet to config.xml : <preference name="AndroidLaunchMode" value="singleTask" /> 3.7. Custom Adapters In some situations, you may need to run the adapter in environments that are not supported by default, such as Capacitor. To use the JavaScript client in these environments, you can pass a custom adapter. For example, a third-party library could provide such an adapter to make it possible to reliably run the adapter: import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak({ url: "http://keycloak-server", realm: "my-realm", clientId: "my-app" }); await keycloak.init({ adapter: KeycloakCapacitorAdapter, }); This specific package does not exist, but it gives a pretty good example of how such an adapter could be passed into the client. It's also possible to make your own adapter. To do so, you will have to implement the methods described in the KeycloakAdapter interface. For example, the following TypeScript code ensures that all the methods are properly implemented: import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present. const MyCustomAdapter: KeycloakAdapter = { async login(options) { // Write your own implementation here. } // The other methods go here... }; const keycloak = new Keycloak({ url: "http://keycloak-server", realm: "my-realm", clientId: "my-app" }); await keycloak.init({ adapter: MyCustomAdapter, }); Naturally, you can also do this without TypeScript by omitting the type information, but ensuring that the interface is implemented properly will then be left entirely up to you. 3.8. Modern Browsers with Tracking Protection In the latest versions of some browsers, various cookie policies are applied to prevent tracking of users by third parties, such as SameSite in Chrome or completely blocked third-party cookies. Those policies are likely to become more restrictive and adopted by other browsers over time. Eventually, cookies in third-party contexts may become completely unsupported and blocked by the browsers.
As a result, the affected adapter features might ultimately be deprecated. The adapter relies on third-party cookies for the Session Status iframe, silent check-sso , and partially also for regular (non-silent) check-sso . Those features have limited functionality or are completely disabled, based on how restrictive the browser is regarding cookies. The adapter tries to detect this setting and reacts accordingly. 3.8.1. Browsers with "SameSite=Lax by Default" Policy All features are supported if an SSL / TLS connection is configured on the Red Hat build of Keycloak side as well as on the application side. For example, Chrome is affected starting with version 84. 3.8.2. Browsers with Blocked Third-Party Cookies The Session Status iframe is not supported and is automatically disabled if such browser behavior is detected by the adapter. This means the adapter cannot use a session cookie for Single Sign-Out detection and must rely purely on tokens. As a result, when a user logs out in another window, the application using the adapter will not be logged out until the application tries to refresh the Access Token. Therefore, consider setting the Access Token Lifespan to a relatively short time, so that the logout is detected as soon as possible. For more details, see Session and Token Timeouts . Silent check-sso is not supported and falls back to regular (non-silent) check-sso by default. This behavior can be changed by setting silentCheckSsoFallback: false in the options passed to the init method. In this case, check-sso will be completely disabled if restrictive browser behavior is detected. Regular check-sso is affected as well. Since the Session Status iframe is unsupported, an additional redirect to Red Hat build of Keycloak has to be made when the adapter is initialized to check the user's login status. This check is different from the standard behavior, where the iframe is used to tell whether the user is logged in and the redirect is performed only when the user is logged out. Safari, for example, is affected starting with version 13.1. 3.9. API Reference 3.9.1. Constructor // Recommended way to initialize the adapter. new Keycloak({ url: "http://keycloak-server", realm: "my-realm", clientId: "my-app" }); // Alternatively a string to the path of the `keycloak.json` file. // Has some performance implications, as it will load the keycloak.json file from the server. // This version might also change in the future and is therefore not recommended. new Keycloak("http://keycloak-server/keycloak.json"); 3.9.2. Properties authenticated Is true if the user is authenticated, false otherwise. token The base64 encoded token that can be sent in the Authorization header in requests to services. tokenParsed The parsed token as a JavaScript object. subject The user id. idToken The base64 encoded ID token. idTokenParsed The parsed id token as a JavaScript object. realmAccess The realm roles associated with the token. resourceAccess The resource roles associated with the token. refreshToken The base64 encoded refresh token that can be used to retrieve a new token. refreshTokenParsed The parsed refresh token as a JavaScript object. timeSkew The estimated time difference between the browser time and the Red Hat build of Keycloak server in seconds. This value is just an estimation, but is accurate enough when determining if a token is expired or not. responseMode Response mode passed in init (default value is fragment). flow Flow passed in init.
adapter Allows you to override the way that redirects and other browser-related functions will be handled by the library. Available options: "default" - the library uses the browser API for redirects (this is the default) "cordova" - the library will try to use the InAppBrowser cordova plugin to load the Keycloak login/registration pages (this is used automatically when the library is working in a cordova ecosystem) "cordova-native" - the library tries to open the login and registration page using the phone's system browser using the BrowserTabs cordova plugin. This requires extra setup for redirecting back to the app (see Section 3.6, "Hybrid Apps with Cordova" ). "custom" - allows you to implement a custom adapter (only for advanced use cases) responseType Response type sent to Red Hat build of Keycloak with login requests. This is determined based on the flow value used during initialization, but can be overridden by setting this value. 3.9.3. Methods init(options) Called to initialize the adapter. Options is an Object, where: useNonce - Adds a cryptographic nonce to verify that the authentication response matches the request (default is true ). onLoad - Specifies an action to do on load. Supported values are login-required or check-sso . silentCheckSsoRedirectUri - Set the redirect uri for the silent authentication check if onLoad is set to 'check-sso'. silentCheckSsoFallback - Enables fallback to regular check-sso when silent check-sso is not supported by the browser (default is true ). token - Set an initial value for the token. refreshToken - Set an initial value for the refresh token. idToken - Set an initial value for the id token (only together with token or refreshToken). scope - Set the default scope parameter to the Red Hat build of Keycloak login endpoint. Use a space-delimited list of scopes. Those typically reference Client scopes defined on a particular client. Note that the scope openid will always be added to the list of scopes by the adapter. For example, if you enter the scope options address phone , then the request to Red Hat build of Keycloak will contain the scope parameter scope=openid address phone . Note that the default scope specified here is overwritten if the login() options specify scope explicitly. timeSkew - Set an initial value for the skew between local time and the Red Hat build of Keycloak server in seconds (only together with token or refreshToken). checkLoginIframe - Set to enable/disable monitoring login state (default is true ). checkLoginIframeInterval - Set the interval to check login state (default is 5 seconds). responseMode - Set the OpenID Connect response mode sent to the Red Hat build of Keycloak server at login request. Valid values are query or fragment . The default value is fragment , which means that after successful authentication Red Hat build of Keycloak redirects to the JavaScript application with the OpenID Connect parameters added in the URL fragment. This is generally safer and recommended over query . flow - Set the OpenID Connect flow. Valid values are standard , implicit , or hybrid . enableLogging - Enables logging messages from Keycloak to the console (default is false ). pkceMethod - The method for Proof Key for Code Exchange ( PKCE ) to use. Configuring this value enables the PKCE mechanism. Available options: "S256" - The SHA256 based PKCE method (default) false - PKCE is disabled.
acrValues - Generates the acr_values parameter, which refers to the authentication context class reference and allows clients to declare the required assurance level requirements, e.g. authentication mechanisms. See Section 4. acr_values request values and level of assurance in OpenID Connect MODRNA Authentication Profile 1.0 . messageReceiveTimeout - Set a timeout in milliseconds for waiting for message responses from the Keycloak server. This is used, for example, when waiting for a message during the 3rd party cookies check. The default value is 10000. locale - When onLoad is 'login-required', sets the 'ui_locales' query param in compliance with section 3.1.2.1 of the OIDC 1.0 specification . Returns a promise that resolves when initialization completes. login(options) Redirects to the login form, returns a Promise. Options is an optional Object, where: redirectUri - Specifies the uri to redirect to after login. prompt - This parameter allows you to slightly customize the login flow on the Red Hat build of Keycloak server side. For example, enforce displaying the login screen with the value login , or enforce displaying the consent screen with the value consent in case the client has Consent Required . Finally, it is possible to use the value none to make sure that the login screen is not displayed to the user, which is useful to check SSO in the case where the user was already authenticated before (this is related to the onLoad check with the value check-sso described above). maxAge - Used only if the user is already authenticated. Specifies the maximum time since the authentication of the user happened. If the user has been authenticated for longer than maxAge , the SSO is ignored and the user will need to re-authenticate. loginHint - Used to pre-fill the username/email field on the login form. scope - Override the scope configured in init with a different value for this specific login. idpHint - Used to tell Red Hat build of Keycloak to skip showing the login page and automatically redirect to the specified identity provider instead. More info in the Identity Provider documentation . acr - Contains the information about the acr claim, which will be sent inside the claims parameter to the Red Hat build of Keycloak server. Typical usage is for step-up authentication. Example of use { values: ["silver", "gold"], essential: true } . See the OpenID Connect specification and the Step-up authentication documentation for more details. acrValues - Generates the acr_values parameter, which refers to the authentication context class reference and allows clients to declare the required assurance level requirements, e.g. authentication mechanisms. See Section 4. acr_values request values and level of assurance in OpenID Connect MODRNA Authentication Profile 1.0 . action - If the value is register , the user is redirected to the registration page. See the Registration requested by client section for more details. If the value is UPDATE_PASSWORD or another supported required action, the user will be redirected to the reset password page or the other required action page. However, if the user is not authenticated, the user will be sent to the login page and redirected after authentication. See the Application Initiated Action section for more details. locale - Sets the 'ui_locales' query param in compliance with section 3.1.2.1 of the OIDC 1.0 specification . cordovaOptions - Specifies the arguments that are passed to the Cordova in-app-browser (if applicable). Options hidden and location are not affected by these arguments.
All available options are defined at https://cordova.apache.org/docs/en/latest/reference/cordova-plugin-inappbrowser/ . Example of use: { zoom: "no", hardwareback: "yes" } ; createLoginUrl(options) Returns a Promise containing the URL to the login form. Options is an optional Object, which supports the same options as the login function. logout(options) Redirects to logout. Options is an Object, where: redirectUri - Specifies the uri to redirect to after logout. createLogoutUrl(options) Returns the URL to log out the user. Options is an Object, where: redirectUri - Specifies the uri to redirect to after logout. register(options) Redirects to the registration form. Shortcut for login with the option action = 'register' . Options are the same as for the login method, but 'action' is set to 'register' . createRegisterUrl(options) Returns a Promise containing the url to the registration page. Shortcut for createLoginUrl with the option action = 'register' . Options are the same as for the createLoginUrl method, but 'action' is set to 'register' . accountManagement() Redirects to the Account Console. createAccountUrl(options) Returns the URL to the Account Console. Options is an Object, where: redirectUri - Specifies the uri to redirect to when redirecting back to the application. hasRealmRole(role) Returns true if the token has the given realm role. hasResourceRole(role, resource) Returns true if the token has the given role for the resource ( resource is optional; if not specified, clientId is used). loadUserProfile() Loads the user's profile. Returns a promise that resolves with the profile. For example: try { const profile = await keycloak.loadUserProfile(); console.log('Retrieved user profile:', profile); } catch (error) { console.error('Failed to load user profile:', error); } isTokenExpired(minValidity) Returns true if the token has less than minValidity seconds left before it expires ( minValidity is optional; if not specified, 0 is used). updateToken(minValidity) If the token expires within minValidity seconds ( minValidity is optional; if not specified, 5 is used), the token is refreshed. If -1 is passed as the minValidity, the token will be forcibly refreshed. If the session status iframe is enabled, the session status is also checked. Returns a promise that resolves with a boolean indicating whether or not the token has been refreshed. For example: try { const refreshed = await keycloak.updateToken(5); console.log(refreshed ? 'Token was refreshed' : 'Token is still valid'); } catch (error) { console.error('Failed to refresh the token:', error); } clearToken() Clears the authentication state, including tokens. This can be useful if the application has detected that the session has expired, for example if updating the token fails. Invoking this results in the onAuthLogout callback listener being invoked. 3.9.4. Callback Events The adapter supports setting callback listeners for certain events. Keep in mind that these have to be set before the call to the init() method. For example: keycloak.onAuthSuccess = () => console.log('Authenticated!'); The available events are: onReady(authenticated) - Called when the adapter is initialized. onAuthSuccess - Called when a user is successfully authenticated. onAuthError - Called if there was an error during authentication. onAuthRefreshSuccess - Called when the token is refreshed. onAuthRefreshError - Called if there was an error while trying to refresh the token. onAuthLogout - Called if the user is logged out (will only be called if the session status iframe is enabled, or in Cordova mode).
onTokenExpired - Called when the access token has expired. If a refresh token is available, the token can be refreshed with updateToken; in cases where it is not (that is, with the implicit flow), you can redirect to the login screen to obtain a new access token.
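As a practical illustration of the callback events and the updateToken() method described above, the following minimal sketch wires onTokenExpired to a token refresh and falls back to an interactive login if the refresh fails. The server URL, realm, client ID, and the 30-second minValidity value are illustrative placeholders, not recommendations:

import Keycloak from 'keycloak-js';

const keycloak = new Keycloak({
  url: "http://keycloak-server", // placeholder: your Red Hat build of Keycloak server URL
  realm: "my-realm",             // placeholder realm
  clientId: "my-app"             // placeholder client ID
});

// Callback listeners must be set before init() is called (see Section 3.9.4).
keycloak.onAuthSuccess = () => console.log('Authenticated!');
keycloak.onAuthLogout = () => console.log('Single Sign-Out detected, user logged out.');
keycloak.onTokenExpired = async () => {
  try {
    // Refresh the token if it expires within the next 30 seconds (illustrative value).
    const refreshed = await keycloak.updateToken(30);
    console.log(refreshed ? 'Token was refreshed' : 'Token is still valid');
  } catch (error) {
    // The refresh token may be missing or expired (for example, with the implicit flow),
    // so fall back to an interactive login.
    console.error('Failed to refresh token, redirecting to login:', error);
    await keycloak.login();
  }
};

await keycloak.init({ onLoad: 'check-sso' });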
|
[
"npm install keycloak-js",
"import Keycloak from 'keycloak-js'; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); try { const authenticated = await keycloak.init(); if (authenticated) { console.log('User is authenticated'); } else { console.log('User is not authenticated'); } } catch (error) { console.error('Failed to initialize adapter:', error); }",
"await keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: `USD{location.origin}/silent-check-sso.html` });",
"<!doctype html> <html> <body> <script> parent.postMessage(location.href, location.origin); </script> </body> </html>",
"await keycloak.init({ onLoad: 'login-required' });",
"async function fetchUsers() { const response = await fetch('/api/users', { headers: { accept: 'application/json', authorization: `Bearer USD{keycloak.token}` } }); return response.json(); }",
"try { await keycloak.updateToken(30); } catch (error) { console.error('Failed to refresh token:', error); } const users = await fetchUsers();",
"await keycloak.init({ flow: 'implicit' })",
"await keycloak.init({ flow: 'hybrid' });",
"await keycloak.init({ adapter: 'cordova-native' });",
"<preference name=\"AndroidLaunchMode\" value=\"singleTask\" />",
"import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); await keycloak.init({ adapter: KeycloakCapacitorAdapter, });",
"import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present. const MyCustomAdapter: KeycloakAdapter = { async login(options) { // Write your own implementation here. } // The other methods go here }; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); await keycloak.init({ adapter: MyCustomAdapter, });",
"// Recommended way to initialize the adapter. new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); // Alternatively a string to the path of the `keycloak.json` file. // Has some performance implications, as it will load the keycloak.json file from the server. // This version might also change in the future and is therefore not recommended. new Keycloak(\"http://keycloak-server/keycloak.json\");",
"try { const profile = await keycloak.loadUserProfile(); console.log('Retrieved user profile:', profile); } catch (error) { console.error('Failed to load user profile:', error); }",
"try { const refreshed = await keycloak.updateToken(5); console.log(refreshed ? 'Token was refreshed' : 'Token is still valid'); } catch (error) { console.error('Failed to refresh the token:', error); }",
"keycloak.onAuthSuccess = () => console.log('Authenticated!');"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/javascript-adapter-
|
Chapter 2. access
|
Chapter 2. access This chapter describes the commands under the access command. 2.1. access token create Create an access token Usage: Table 2.1. Command arguments Value Summary -h, --help Show this help message and exit --consumer-key <consumer-key> Consumer key (required) --consumer-secret <consumer-secret> Consumer secret (required) --request-key <request-key> Request token to exchange for access token (required) --request-secret <request-secret> Secret associated with <request-key> (required) --verifier <verifier> Verifier associated with <request-key> (required) Table 2.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 2.3. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 2.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 2.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable. --print-empty Print empty table if there is no data to show.
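For illustration only, an invocation that combines the required arguments from Table 2.1 with the JSON output formatter might look like the following; the angle-bracket values are placeholders for credentials obtained from your own OAuth request flow:

openstack access token create \
  --consumer-key <consumer-key> \
  --consumer-secret <consumer-secret> \
  --request-key <request-key> \
  --request-secret <request-secret> \
  --verifier <verifier> \
  -f json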
|
[
"openstack access token create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --consumer-key <consumer-key> --consumer-secret <consumer-secret> --request-key <request-key> --request-secret <request-secret> --verifier <verifier>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/access
|
7.4. Using Replication with Other Directory Server Features
|
7.4. Using Replication with Other Directory Server Features Replication interacts with other Directory Server features to provide advanced replication features. The following sections describe feature interactions to better design the replication strategy. 7.4.1. Replication and Access Control The directory service stores ACIs as attributes of entries. This means that the ACI is replicated together with other directory content. This is important because Directory Server evaluates ACIs locally. For more information about designing access control for the directory, see Chapter 9, Designing a Secure Directory . 7.4.2. Replication and Directory Server Plug-ins Replication works with most of the plug-ins delivered with Directory Server. There are some exceptions and limitations in the case of multi-supplier replication with the following plug-ins: Attribute Uniqueness Plug-in The Attribute Uniqueness Plug-in validates attribute values added to local entries to make sure that all values are unique. However, this checking is done directly on the server and is not applied to updates replicated from other suppliers. For example, Example Corp. requires that the mail attribute be unique, but two users are added with the same mail attribute to two different supplier servers at the same time. As long as there is no naming conflict, there is no replication conflict, but the mail attribute is not unique. Referential Integrity Plug-in Referential integrity works with multi-supplier replication, provided that this plug-in is enabled on only one supplier in the multi-supplier set. This ensures that referential integrity updates occur on only one of the supplier servers and are propagated to the others. Note By default, these plug-ins are disabled, and they must be manually enabled. 7.4.3. Replication and Database Links With chaining to distribute directory entries, the server containing the database link references a remote server that contains the actual data. In this environment, the database link itself cannot be replicated. However, the database that contains the actual data on the remote server can be replicated. Do not use the replication process as a backup for database links. Database links must be backed up manually. For more information about chaining and entry distribution, see Chapter 6, Designing the Directory Topology . Figure 7.10. Replicating Chained Databases 7.4.4. Schema Replication For the standard schema, before replicating data to consumer servers, the supplier server checks whether its own version of the schema is synchronized with the version of the schema stored on the consumer servers. The following conditions apply: If the schema entries on both supplier and consumers are the same, the replication operation proceeds. If the version of the schema on the supplier server is more recent than the version stored on the consumer, the supplier server replicates its schema to the consumer before proceeding with the data replication. If the version of the schema on the supplier server is older than the version stored on the consumer, the server may return many errors during replication because the schema on the consumer cannot support the new data. Note Schema replication still occurs, even if the schemas between the supplier and replica do not match. Replicable changes include changes to the schema made through the web console, changes made through ldapmodify , and changes made directly to the 99user.ldif file. Custom schema files, and any changes made to custom schema files, are not replicated.
A consumer might contain replicated data from two suppliers, each with a different schema. Whichever supplier was updated last wins, and its schema is propagated to the consumer. Warning Never update the schema on a consumer server, because the supplier server is unable to resolve conflicts that occur, and replication fails. Schema should be maintained on a supplier server in a replicated topology. The same Directory Server can hold read-write replicas for which it acts as a supplier and read-only replicas for which it acts as a consumer. Therefore, always identify the server that will function as a supplier for the schema, and then set up replication agreements between this supplier and all other servers in the replication environment that will function as consumers for the schema information. Special replication agreements are not required to replicate the schema. If replication has been configured between a supplier and a consumer, schema replication occurs by default. For more information on schema design, see Chapter 3, Designing the Directory Schema . Custom Schema If the standard 99user.ldif file is used for custom schema, these changes are replicated to all consumers. Custom schema files must be copied to each server in order to maintain the information in the same schema file on all servers. Custom schema files, and changes to those files, are not replicated, even if they are made through the web console or ldapmodify . If there are custom schema files, ensure that these files are copied to all servers after making changes on the supplier. After all of the files have been copied, restart the server. For more information on custom schema files, see Section 3.4.7, "Creating Custom Schema Files" . 7.4.5. Replication and Synchronization In order to propagate synchronized Windows entries throughout the Directory Server deployment, use synchronization within a multi-supplier environment. The number of synchronization agreements should be kept as low as possible, preferably one per deployment. Multi-supplier replication allows the Windows information to be available throughout the network, while limiting the data access point to a single Directory Server.
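As noted in Section 7.4.2, the Referential Integrity Plug-in is disabled by default and, in a multi-supplier set, should be enabled on only one supplier. The following ldapmodify sketch is illustrative only; it assumes the standard plug-in configuration entry name, so verify the entry and attribute on your own instance before applying the change, and restart the instance afterwards for the plug-in change to take effect:

ldapmodify -D "cn=Directory Manager" -W -x <<EOF
dn: cn=referential integrity postoperation,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
EOF

Leave the plug-in disabled on the other suppliers so that referential integrity updates originate from a single server.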
| null |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_the_Replication_Process-Using_Replication_with_Other_DS_Features
|
function::d_path
|
function::d_path Name function::d_path - get the full nameidata path Synopsis Arguments nd Pointer to nameidata. Description Returns the full dirent name (full path to the root), like the kernel d_path function.
|
[
"d_path:string(nd:long)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-d-path
|
Red Hat Data Grid
|
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/red-hat-data-grid
|
Chapter 2. Threat and Vulnerability Management
|
Chapter 2. Threat and Vulnerability Management Red Hat Ceph Storage is typically deployed in conjunction with cloud computing solutions, so it can be helpful to think about a Red Hat Ceph Storage deployment abstractly as one of many series of components in a larger deployment. These deployments typically have shared security concerns, which this guide refers to as Security Zones . Threat actors and vectors are classified based on their motivation and access to resources. The intention is to provide you with a sense of the security concerns for each zone, depending on your objectives. 2.1. Threat Actors A threat actor is an abstract way to refer to a class of adversary that you might attempt to defend against. The more capable the actor, the more rigorous the security controls that are required for successful attack mitigation and prevention. Security is a matter of balancing convenience, defense, and cost, based on requirements. In some cases, it's impossible to secure a Red Hat Ceph Storage deployment against all threat actors described here. When deploying Red Hat Ceph Storage, you must decide where the balance lies for your deployment and usage. As part of your risk assessment, you must also consider the type of data you store and any accessible resources, as this will also influence certain actors. However, even if your data is not appealing to threat actors, they could simply be attracted to your computing resources. Nation-State Actors: This is the most capable adversary. Nation-state actors can bring tremendous resources against a target. They have capabilities beyond that of any other actor. It's difficult to defend against these actors without stringent controls in place, both human and technical. Serious Organized Crime: This class describes highly capable and financially driven groups of attackers. They are able to fund in-house exploit development and target research. In recent years, the rise of organizations such as the Russian Business Network, a massive cyber-criminal enterprise, has demonstrated how cyber attacks have become a commodity. Industrial espionage falls within the serious organized crime group. Highly Capable Groups: This refers to 'Hacktivist' type organizations who are not typically commercially funded, but can pose a serious threat to service providers and cloud operators. Motivated Individuals Acting Alone: These attackers come in many guises, such as rogue or malicious employees, disaffected customers, or small-scale industrial espionage. Script Kiddies: These attackers don't target a specific organization, but run automated vulnerability scanning and exploitation. They are often a nuisance; however, compromise by one of these actors is a major risk to an organization's reputation. The following practices can help mitigate some of the risks identified above: Security Updates: You must consider the end-to-end security posture of your underlying physical infrastructure, including networking, storage, and server hardware. These systems will require their own security hardening practices. For your Red Hat Ceph Storage deployment, you should have a plan to regularly test and deploy security updates. Product Updates: Red Hat recommends running product updates as they become available. Updates are typically released every six weeks (and occasionally more frequently). Red Hat endeavors to make point releases and z-stream releases fully compatible within a major release in order to not require additional integration testing. 
Access Management: Access management includes authentication, authorization, and accounting. Authentication is the process of verifying the user's identity. Authorization is the process of granting permissions to an authenticated user. Accounting is the process of tracking which user performed an action. When granting system access to users, apply the principle of least privilege , and only grant users the granular system privileges they actually need. This approach can also help mitigate the risks of both malicious actors and typographical errors from system administrators. Manage Insiders: You can help mitigate the threat of malicious insiders by applying careful assignment of role-based access control (minimum required access), using encryption on internal interfaces, and using authentication/authorization security (such as centralized identity management). You can also consider additional non-technical options, such as separation of duties and irregular job role rotation. 2.2. Security Zones A security zone comprises users, applications, servers, or networks that share common trust requirements and expectations within a system. Typically they share the same authentication, authorization requirements, and users. Although you may refine these zone definitions further, this guide refers to four distinct security zones, three of which form the bare minimum that is required to deploy a security-hardened Red Hat Ceph Storage cluster. These security zones are listed below from least to most trusted: Public Security Zone: The public security zone is an entirely untrusted area of the cloud infrastructure. It can refer to the Internet as a whole or simply to networks that are external to your Red Hat OpenStack deployment over which you have no authority. Any data with confidentiality or integrity requirements that traverse this zone should be protected using compensating controls such as encryption. The public security zone SHOULD NOT be confused with the Ceph Storage Cluster's front- or client-side network, which is referred to as the public_network in RHCS and is usually NOT part of the public security zone or the Ceph client security zone. Ceph Client Security Zone: With RHCS, the Ceph client security zone refers to networks accessing Ceph clients such as Ceph Object Gateway, Ceph Block Device, Ceph Filesystem, or librados . The Ceph client security zone is typically behind a firewall separating itself from the public security zone. However, Ceph clients are not always protected from the public security zone. It is possible to expose the Ceph Object Gateway's S3 and Swift APIs in the public security zone. Storage Access Security Zone: The storage access security zone refers to internal networks providing Ceph clients with access to the Ceph Storage Cluster. We use the phrase 'storage access security zone' so that this document is consistent with the terminology used in the OpenStack Platform Security and Hardening Guide. The storage access security zone includes the Ceph Storage Cluster's front- or client-side network, which is referred to as the public_network in RHCS. Ceph Cluster Security Zone: The Ceph cluster security zone refers to the internal networks providing the Ceph Storage Cluster's OSD daemons with network communications for replication, heartbeating, backfilling, and recovery. The Ceph cluster security zone includes the Ceph Storage Cluster's backside network, which is referred to as the cluster_network in RHCS. 
These security zones can be mapped separately, or combined to represent the majority of the possible areas of trust within a given RHCS deployment. Security zones should be mapped out against your specific RHCS deployment topology. The zones and their trust requirements will vary depending upon whether Red Hat Ceph Storage is operating in a standalone capacity or is serving a public, private, or hybrid cloud. For a visual representation of these security zones, see Security Optimized Architecture . Additional Resources See the Network Communications section in the Red Hat Ceph Storage Data Security and Hardening Guide for more details. 2.3. Connecting Security Zones Any component that spans across multiple security zones with different trust levels or authentication requirements must be carefully configured. These connections are often the weak points in network architecture, and should always be configured to meet the security requirements of the highest trust level of any of the zones being connected. In many cases, the security controls of the connected zones should be a primary concern due to the likelihood of attack. The points where zones meet do present an opportunity for attackers to migrate or target their attack to more sensitive parts of the deployment. In some cases, Red Hat Ceph Storage administrators might want to consider securing integration points at a higher standard than any of the zones in which the integration point resides. For example, the Ceph Cluster Security Zone can be isolated from other security zones easily, because there is no reason for it to connect to other security zones. By contrast, the Storage Access Security Zone must provide access to port 6789 on Ceph monitor nodes, and ports 6800-7300 on Ceph OSD nodes. However, port 3000 should be exclusive to the Storage Access Security Zone, because it provides access to Ceph Grafana monitoring information that should be exposed to Ceph administrators only. A Ceph Object Gateway in the Ceph Client Security Zone will need to access the Ceph Cluster Security Zone's monitors (port 6789 ) and OSDs (ports 6800-7300 ), and may expose its S3 and Swift APIs to the Public Security Zone such as over HTTP port 80 or HTTPS port 443 ; yet, it may still need to restrict access to the admin API. The design of Red Hat Ceph Storage is such that the separation of security zones is difficult. As core services usually span at least two zones, special consideration must be given when applying security controls to them. 2.4. Security-Optimized Architecture A Red Hat Ceph Storage cluster's daemons typically run on nodes that are subnet isolated and behind a firewall, which makes it relatively simple to secure an RHCS cluster. By contrast, Red Hat Ceph Storage clients such as Ceph Block Device ( rbd ), Ceph Filesystem ( cephfs ), and Ceph Object Gateway ( rgw ) access the RHCS storage cluster, but expose their services to other cloud computing platforms.
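To make the port requirements described in Section 2.3 concrete, the following firewalld commands are a minimal sketch for a host that fronts both the storage access and public security zones. The zone names internal and public are assumptions that must be mapped to the firewalld zones bound to your own interfaces, and the rules should be adjusted to your topology:

# Ceph Monitors and OSDs, reachable only from the storage access network
firewall-cmd --zone=internal --add-port=6789/tcp --permanent
firewall-cmd --zone=internal --add-port=6800-7300/tcp --permanent
# Grafana monitoring (port 3000), restricted to the storage access zone for administrators
firewall-cmd --zone=internal --add-port=3000/tcp --permanent
# Ceph Object Gateway S3/Swift APIs exposed to the public security zone
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --zone=public --add-service=https --permanent
firewall-cmd --reload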
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/data_security_and_hardening_guide/assembly-threat-and-vulnerability-management
|
Chapter 4. Skupper Camel Integration Example
|
Chapter 4. Skupper Camel Integration Example Twitter, Telegram and PostgreSQL integration routes deployed across Kubernetes clusters using Skupper This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites. Overview In this example we can see how to integrate different Camel integration routes that can be deployed across multiple Kubernetes clusters using Skupper. The main idea of this project is to show a Camel integration deployed in a public cluster which searches tweets that contain the word 'skupper'. Those results are sent to a private cluster that has a database deployed. A third public cluster will ping the database and send new results to a Telegram channel. In order to run this example you will need to create a Telegram channel and a Twitter account and use their credentials. It contains the following components: A Twitter Camel integration that searches in the Twitter feed for results containing the word skupper (public). A PostgreSQL Camel sink that receives the data from the Twitter Camel router and sends it to the database (public). A PostgreSQL database that contains the results (private). A Telegram Camel integration that polls the database and sends the results to a Telegram channel (public). Prerequisites The kubectl command-line tool, version 1.15 or later The skupper command-line tool, the latest version Access to at least one Kubernetes cluster, from any provider you choose Kamel installation to deploy the Camel integrations per namespace. A Twitter Developer Account in order to use the Twitter API (you need to add the credentials in the config.properties file) Create a Telegram Bot and Channel to publish messages (you need to add the credentials in the config.properties file) Procedure Configure separate console sessions Access your clusters Set up your namespaces Install Skupper in your namespaces Check the status of your namespaces Link your namespaces Deploy and expose the database in the private cluster Create the table to store the tweets Deploy Twitter Camel Integration in the public cluster Deploy Telegram Camel integration in the public cluster Test the application Configure separate console sessions Skupper is designed for use with multiple namespaces, typically on different clusters. The skupper command uses your kubeconfig and current context to select the namespace where it operates. Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it. A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs. Start a console session for each of your namespaces. Set the KUBECONFIG environment variable to a different path in each session. Console for private1: Console for public1: Console for public2: Access your clusters The methods for accessing your clusters vary by Kubernetes provider. Find the instructions for your chosen providers and use them to authenticate and configure access for each console session. See the following links for more information: Amazon Elastic Kubernetes Service (EKS) Azure Kubernetes Service (AKS) Google Kubernetes Engine (GKE) IBM Kubernetes Service OpenShift More providers Set up your namespaces Use kubectl create namespace to create the namespaces you wish to use (or use existing namespaces).
Use kubectl config set-context to set the current namespace for each session. Console for private1: Console for public1: Console for public2: Install Skupper in your namespaces The skupper init command installs the Skupper router and service controller in the current namespace. Run the skupper init command in each namespace. Console for private1: Console for public1: Console for public2: Check the status of your namespaces Use skupper status in each console to check that Skupper is installed. Console for private1: Console for public1: Console for public2: You should see output like this for each namespace: As you move through the steps below, you can use skupper status at any time to check your progress. Link your namespaces Creating a link requires the use of two skupper commands in conjunction: skupper token create and skupper link create . The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote namespace, the skupper link create command uses the token to create a link to the namespace that generated it. Note The link token is truly a secret. Anyone who has the token can link to your namespace. Make sure that only those you trust have access to it. First, use skupper token create in one namespace to generate the token. Then, use skupper link create in the other to create a link. Console for public1: Console for public2: Console for private1: If your console sessions are on different machines, you may need to use scp or a similar tool to transfer the token. Deploy and expose the database in the private cluster Use kubectl to deploy the database in private1 . Then expose the deployment. Console for private1: Create the table to store the tweets Console for private1: Deploy Twitter Camel Integration in the public cluster First, we need to deploy the TwitterRoute component in Kubernetes by using kamel. This component will poll Twitter every 5000 ms for tweets that include the word skupper . Subsequently, it will send the results to the postgresql-sink , which should be installed in the same cluster as well. The kamelet sink will insert the results in the PostgreSQL database. Console for public1: Deploy Telegram Camel integration in the public cluster In this step we will install the secret in Kubernetes that contains the database credentials, so that it can be used by the TelegramRoute component. After that we will deploy TelegramRoute using kamel in the Kubernetes cluster. This component will poll the database every 3 seconds and gather the results inserted during the last 3 seconds. Console for public2: Test the application To be able to see the whole flow at work, you need to post a tweet containing the word skupper , and after that you will see a new message in the Telegram channel with the title New feedback about Skupper . Console for private1: Sample output: Console for public1: Sample output:
|
[
"kamel install",
"export KUBECONFIG=~/.kube/config-private1",
"export KUBECONFIG=~/.kube/config-public1",
"export KUBECONFIG=~/.kube/config-public2",
"create namespace private1 config set-context --current --namespace private1",
"create namespace public1 config set-context --current --namespace public1",
"create namespace public2 config set-context --current --namespace public2",
"skupper init",
"skupper init",
"skupper init",
"skupper status",
"skupper status",
"skupper status",
"Skupper is enabled for namespace \"<namespace>\" in interior mode. It is not connected to any other sites. It has no exposed services. The site console url is: http://<address>:8080 The credentials for internal console-auth mode are held in secret: 'skupper-console-users'",
"skupper token create ~/public1.token --uses 2",
"skupper link create ~/public1.token skupper link status --wait 30 skupper token create ~/public2.token",
"skupper link create ~/public1.token skupper link create ~/public2.token skupper link status --wait 30",
"create -f src/main/resources/database/postgres-svc.yaml skupper expose deployment postgres --address postgres --port 5432 -n private1",
"run pg-shell -i --tty --image quay.io/skupper/simple-pg --env=\"PGUSER=postgresadmin\" --env=\"PGPASSWORD=admin123\" --env=\"PGHOST=USD(kubectl get service postgres -o=jsonpath='{.spec.clusterIP}')\" -- bash psql --dbname=postgresdb CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"; CREATE TABLE tw_feedback (id uuid DEFAULT uuid_generatev4 (),sigthning VARCHAR(255),created TIMESTAMP default CURRENTTIMESTAMP,PRIMARY KEY(id));",
"src/main/resources/scripts/setUpPublic1Cluster.sh",
"src/main/resources/scripts/setUpPublic2Cluster.sh",
"attach pg-shell -c pg-shell -i -t psql --dbname=postgresdb SELECT * FROM twfeedback;",
"id | sigthning | created --------------------------------------+-----------------+---------------------------- 95655229-747a-4787-8133-923ef0a1b2ca | Testing skupper | 2022-03-10 19:35:08.412542",
"kamel logs twitter-route",
"\"[1] 2022-03-10 19:35:08,397 INFO [postgresql-sink-1] (Camel (camel-1) thread #0 - twitter-search://skupper) Testing skupper\""
] |
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/examples/skupper_camel_integration_example
|
Chapter 14. Cephadm troubleshooting
|
Chapter 14. Cephadm troubleshooting As a storage administrator, you can troubleshoot the Red Hat Ceph Storage cluster. Sometimes there is a need to investigate why a Cephadm command failed or why a specific service does not run properly. 14.1. Pause or disable cephadm If Cephadm does not behave as expected, you can pause most of the background activity with the following command: Example This stops any changes, but Cephadm periodically checks hosts to refresh its inventory of daemons and devices. If you want to disable Cephadm completely, run the following commands: Example Note that previously deployed daemon containers continue to exist and start as they did before. To re-enable Cephadm in the cluster, run the following commands: Example 14.2. Per service and per daemon event Cephadm stores events per service and per daemon in order to aid in debugging failed daemon deployments. These events often contain relevant information: Per service Syntax Example Per daemon Syntax Example 14.3. Check cephadm logs You can monitor the Cephadm log in real time with the following command: Example You can see the last few messages with the following command: Example If you have enabled logging to files, you can see a Cephadm log file called ceph.cephadm.log on the monitor hosts. 14.4. Gather log files You can use the journalctl command to gather the log files for all the daemons. Note You have to run all these commands outside the cephadm shell. Note By default, Cephadm stores logs in journald, which means that daemon logs are no longer available in /var/log/ceph . To read the log file of a specific daemon, run the following command: Syntax Example Note This command works when run on the same host where the daemon is running. To read the log file of a specific daemon running on a different host, run the following command: Syntax Example where fsid is the cluster ID provided by the ceph status command. To fetch all log files of all the daemons on a given host, run the following command: Syntax Example 14.5. Collect systemd status To print the state of a systemd unit, run the following command: Example 14.6. List all downloaded container images To list all the container images that are downloaded on a host, run the following command: Example 14.7. Manually run containers Cephadm writes small wrappers that run a container. Refer to /var/lib/ceph/ CLUSTER_FSID / SERVICE_NAME /unit to run the container execution command. Analysing SSH errors If you get the following error: Example Try the following options to troubleshoot the issue: To ensure Cephadm has an SSH identity key, run the following command: Example If the above command fails, Cephadm does not have a key. To generate an SSH key, run the following command: Example Or Example To ensure that the SSH configuration is correct, run the following command: Example To verify the connection to the host, run the following command: Example Verify public key is in authorized_keys . To verify that the public key is in the authorized_keys file, run the following commands: Example 14.8. CIDR network error Classless inter domain routing (CIDR), also known as supernetting, is a method of assigning Internet Protocol (IP) addresses that improves the efficiency of address distribution and replaces the system based on Class A, Class B and Class C networks. The Cephadm log entries show the current state.
If you see one of the following errors: ERROR: Failed to infer CIDR network for mon ip * ; pass --skip-mon-network to configure it later Or Must set public_network config option or specify a CIDR network, ceph addrvec, or plain IP You need to run the following command: Example 14.9. Access the admin socket Each Ceph daemon provides an admin socket that bypasses the MONs. To access the admin socket, enter the daemon container on the host: Example 14.10. Manually deploying a mgr daemon Cephadm requires a mgr daemon in order to manage the Red Hat Ceph Storage cluster. If the last mgr daemon of a Red Hat Ceph Storage cluster was removed, you can manually deploy a mgr daemon on a random host of the Red Hat Ceph Storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example Disable the Cephadm scheduler to prevent Cephadm from removing the new MGR daemon, with the following command: Example Get or create the auth entry for the new MGR daemon: Example Open the ceph.conf file: Example Get the container image: Example Create a config-json.json file and add the following: Note Use the values from the output of the ceph config generate-minimal-conf command. Example Exit from the Cephadm shell: Example Deploy the MGR daemon: Example Verification In the Cephadm shell, run the following command: Example You can see a new mgr daemon has been added.
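As an additional, optional check (a minimal sketch; the daemon identifiers in your cluster will differ), you can list only the mgr daemons known to the orchestrator and confirm that the manually deployed one appears:

# list mgr daemons managed by cephadm
ceph orch ps --daemon-type mgr

Once the new mgr daemon is running, remember to re-enable the Cephadm scheduler that was paused earlier, for example by setting the mgr/cephadm/pause configuration key back to false.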
|
[
"ceph orch pause",
"ceph orch set backend '' ceph mgr module disable cephadm",
"ceph mgr module enable cephadm ceph orch set backend cephadm",
"ceph orch ls --service_name SERVICE_NAME --format yaml",
"ceph orch ls --service_name alertmanager --format yaml service_type: alertmanager service_name: alertmanager placement: hosts: - unknown_host status: running: 1 size: 1 events: - 2021-02-01T08:58:02.741162 service:alertmanager [INFO] \"service was created\" - '2021-02-01T12:09:25.264584 service:alertmanager [ERROR] \"Failed to apply: Cannot place <AlertManagerSpec for service_name=alertmanager> on unknown_host: Unknown hosts\"'",
"ceph orch ps --service-name SERVICE_NAME --daemon-id DAEMON_ID --format yaml",
"ceph orch ps --service-name mds --daemon-id cephfs.hostname.ppdhsz --format yaml daemon_type: mds daemon_id: cephfs.hostname.ppdhsz hostname: hostname status_desc: running events: - 2021-02-01T08:59:43.845866 daemon:mds.cephfs.hostname.ppdhsz [INFO] \"Reconfigured mds.cephfs.hostname.ppdhsz on host 'hostname'\"",
"ceph -W cephadm",
"ceph log last cephadm",
"cephadm logs --name DAEMON_NAME",
"cephadm logs --name cephfs.hostname.ppdhsz",
"cephadm logs --fsid FSID --name DAEMON_NAME",
"cephadm logs --fsid 2d2fd136-6df1-11ea-ae74-002590e526e8 --name cephfs.hostname.ppdhsz",
"for name in USD(cephadm ls | python3 -c \"import sys, json; [print(i['name']) for i in json.load(sys.stdin)]\") ; do cephadm logs --fsid FSID_OF_CLUSTER --name \"USDname\" > USDname; done",
"for name in USD(cephadm ls | python3 -c \"import sys, json; [print(i['name']) for i in json.load(sys.stdin)]\") ; do cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name \"USDname\" > USDname; done",
"[root@host01 ~]USD systemctl status [email protected]",
"podman ps -a --format json | jq '.[].Image' \"docker.io/library/rhel9\" \"registry.redhat.io/rhceph-alpha/rhceph-6-rhel9@sha256:9aaea414e2c263216f3cdcb7a096f57c3adf6125ec9f4b0f5f65fa8c43987155\"",
"execnet.gateway_bootstrap.HostNotFound: -F /tmp/cephadm-conf-73z09u6g -i /tmp/cephadm-identity-ky7ahp_5 [email protected] raise OrchestratorError(msg) from e orchestrator._interface.OrchestratorError: Failed to connect to 10.10.1.2 (10.10.1.2). Please make sure that the host is reachable and accepts connections using the cephadm SSH key",
"ceph config-key get mgr/cephadm/ssh_identity_key > ~/cephadm_private_key INFO:cephadm:Inferring fsid f8edc08a-7f17-11ea-8707-000c2915dd98 INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15 obtained 'mgr/cephadm/ssh_identity_key' chmod 0600 ~/cephadm_private_key",
"chmod 0600 ~/cephadm_private_key",
"cat ~/cephadm_private_key | ceph cephadm set-ssk-key -i-",
"ceph cephadm get-ssh-config",
"ssh -F config -i ~/cephadm_private_key root@host01",
"ceph cephadm get-pub-key grep \"`cat ~/ceph.pub`\" /root/.ssh/authorized_keys",
"ceph config set host public_network hostnetwork",
"cephadm enter --name cephfs.hostname.ppdhsz ceph --admin-daemon /var/run/ceph/ceph-cephfs.hostname.ppdhsz.asok config show",
"cephadm shell",
"ceph config-key set mgr/cephadm/pause true",
"ceph auth get-or-create mgr.host01.smfvfd1 mon \"profile mgr\" osd \"allow *\" mds \"allow *\" [mgr.host01.smfvfd1] key = AQDhcORgW8toCRAAlMzlqWXnh3cGRjqYEa9ikw==",
"ceph config generate-minimal-conf minimal ceph.conf for 8c9b0072-67ca-11eb-af06-001a4a0002a0 [global] fsid = 8c9b0072-67ca-11eb-af06-001a4a0002a0 mon_host = [v2:10.10.200.10:3300/0,v1:10.10.200.10:6789/0] [v2:10.10.10.100:3300/0,v1:10.10.200.100:6789/0]",
"ceph config get \"mgr.host01.smfvfd1\" container_image",
"{ { \"config\": \"# minimal ceph.conf for 8c9b0072-67ca-11eb-af06-001a4a0002a0\\n[global]\\n\\tfsid = 8c9b0072-67ca-11eb-af06-001a4a0002a0\\n\\tmon_host = [v2:10.10.200.10:3300/0,v1:10.10.200.10:6789/0] [v2:10.10.10.100:3300/0,v1:10.10.200.100:6789/0]\\n\", \"keyring\": \"[mgr.Ceph5-2.smfvfd1]\\n\\tkey = AQDhcORgW8toCRAAlMzlqWXnh3cGRjqYEa9ikw==\\n\" } }",
"exit",
"cephadm --image registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest deploy --fsid 8c9b0072-67ca-11eb-af06-001a4a0002a0 --name mgr.host01.smfvfd1 --config-json config-json.json",
"ceph -s"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/administration_guide/cephadm-troubleshooting
|
5.6. Configuring PPP (Point-to-Point) Settings
|
5.6. Configuring PPP (Point-to-Point) Settings Authentication Methods In most cases, the provider's PPP servers support all the allowed authentication methods. If a connection fails, the user should disable support for some methods, depending on the PPP server configuration. Use point-to-point encryption (MPPE) Microsoft Point-To-Point Encryption protocol ( RFC 3078 ). Allow BSD data compression PPP BSD Compression Protocol ( RFC 1977 ). Allow Deflate data compression PPP Deflate Protocol ( RFC 1979 ). Use TCP header compression Compressing TCP/IP Headers for Low-Speed Serial Links ( RFC 1144 ). Send PPP echo packets LCP Echo-Request and Echo-Reply Codes for loopback tests ( RFC 1661 ). Note Since the PPP support in NetworkManager is optional, to configure PPP settings, make sure that the NetworkManager-ppp package is already installed.
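As a rough command-line illustration (the connection name mobile-broadband-1 is only a placeholder; the ppp.* options shown are NetworkManager PPP setting properties):

# install the optional PPP plugin for NetworkManager
yum install NetworkManager-ppp
# example: require MPPE encryption and disable BSD compression on an existing connection profile
nmcli connection modify mobile-broadband-1 ppp.require-mppe yes ppp.nobsdcomp yes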
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ppp_point-to-point_settings
|
7.84. iprutils
|
7.84. iprutils 7.84.1. RHBA-2015:1305 - iprutils bug fix and enhancement update Updated iprutils packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The iprutils packages provide utilities to manage and configure Small Computer System Interface (SCSI) devices that are supported by the ipr SCSI storage device driver. Note The iprutils packages have been upgraded to upstream version 2.4.5, which provides a number of bug fixes and enhancements over the previous version. Notably, this update adds support for reporting cache hits on the Serial Attached SCSI (SAS) disk drive, and increases the speed of array creation for an advanced function (AF) direct-access storage device (DASD). (BZ# 1148147 ) Bug Fix BZ# 1146701 Previously, the format of firmware files was case sensitive. As a consequence, device attributes were not saved correctly for SIS-64 adapters after updating firmware with the pci.xxx file format. With this update, the firmware format is case insensitive, and device attributes are saved correctly in the described situation. Users of iprutils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-iprutils
|
Chapter 1. Get Started with Linux Containers
|
Chapter 1. Get Started with Linux Containers 1.1. Overview Linux Containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Red Hat Enterprise Linux implements Linux Containers using core technologies such as Control Groups (Cgroups) for Resource Management, Namespaces for Process Isolation, SELinux for Security, enabling secure multi-tenancy and reducing the risk of security exploits. All this is meant to provide you with an environment for producing and running enterprise-quality containers. Red Hat OpenShift provides powerful command-line and Web UI tools for building, managing and running containers in units referred to as pods . However, sometimes you might want to build and manage individual containers and images outside of OpenShift. Some tools provided to perform those tasks that run directly on RHEL systems are described in this guide. Unlike other container tools implementations, tools described here do not center around the monolithic Docker container engine and docker command. Instead, we provide a set of command-line tools that can operate without a container engine. These include: podman - For directly managing pods and container images (run, stop, start, ps, attach, exec, and so on) buildah - For building, pushing and signing container images skopeo - For copying, inspecting, deleting, and signing images runc - For providing container run and build features to podman and buildah Because these tools are compatible with the Open Container Initiative (OCI), they can be used to manage the same Linux containers that are produced and managed by Docker and other OCI-compatible container engines. However, they are especially suited to run directly on Red Hat Enterprise Linux, in single-node use cases. For a multi-node container platform, see OpenShift . Instead of relying on the single-node, daemonless tools described in this document, OpenShift requires a daemon-based container engine. Please see Using the CRI-O Container Engine for details. While this guide introduces you to container tools and images, see Managing Containers for more details on those tools. If you are still interested in using the docker command and docker service, refer to Using the docker command and service for information on how to use those features in RHEL 7. 1.2. Background Containers provide a means of packaging applications in lightweight, portable entities. Running applications within containers offers the following advantages: Smaller than Virtual Machines : Because container images include only the content needed to run an application, saving and sharing is much more efficient with containers than it is with virtual machines (which include entire operating systems) Improved performance : Likewise, since you are not running an entirely separate operating system, a container will typically run faster than an application that carries with it the overhead of a whole new virtual machine. Secure : Because a container typically has its own network interfaces, file system, and memory, the application running in that container can be isolated and secured from other activities on a host computer. Flexible : With an application's run time requirements included with the application in the container, a container is capable of being run in multiple environments. Currently, you can run containers on Red Hat Enterprise Linux 7 (RHEL 7) Server, Workstation, and Atomic Host systems. 
If you are unfamiliar with RHEL Atomic Host, you can learn more about it from RHEL Atomic Host 7 Installation and Configuration Guide or the upstream Project Atomic site. Project Atomic produces smaller derivatives of RPM-based Linux distributions (RHEL, Fedora, and CentOS) that is made specifically to run containers in OpenStack, VirtualBox, Linux KVM and several different cloud environments. This topic will help you get started with containers in RHEL 7 and RHEL Atomic Host. Besides offering you some hands-on ways of trying out containers, it also describes how to: Access RHEL-based container images from the Red Hat Registry Incorporate RHEL-entitled software into your containers 1.3. Supported Architectures for Containers on RHEL RHEL 7 supports container-related software for the following architectures: X86 64-bit (base and layered images) (no support for X86 32-bit) PowerPC 8 64-bit (base image and most layered images) Note Support for container-related software (podman, skopeo, buildah, and so on) was dropped in RHEL 7.7 for the PowerPC 9 64-bit, IBM s390x, and ARM 64-bit architectures. That is because the RHEL Extras repositories containing those tools is no longer available for RHEL 7.7. Not all images available for X86_64 architecture are also available for Power PC 8. Table 1 notes which Red Hat container images are supported on each architecture. Table 1.1. Red Hat container images and supported architectures Image name X86_64 PowerPC 8 rhel7/flannel Yes No rhel7/ipa-server Yes No rhel7/open-vm-tools Yes No rhel7/rhel-tools Yes No rhel7/support-tools Yes No rhel7/rhel Yes Yes rhel7-init Yes Yes rhel7/net-snmp Yes Yes rhel7/sssd Yes Yes rhel7/sadc Yes Yes rhel7/etcd Yes Yes rhel7/rsyslog Yes Yes rhel7/cockpit-ws Yes Yes rhel7-minimal / rhel7-atomic Yes Yes rhel7/openscap Yes Yes rhel7-aarch64 No No The container-related software repositories that you enable with subscription-manager are different for X86_64 and Power 8 systems. See Table 2 for the repository names to use in place of the X86_64 repository names for Power 8. Table 1.2. RHEL Server container-related software repos for Power 8 Repository Name Description Power 8 Red Hat Enterprise Linux Server rhel-7-for-power-le-rpms Red Hat Enterprise Linux 7 for IBM Power LE (RPMs) rhel-7-for-power-le-debug-rpms Red Hat Enterprise Linux 7 for IBM Power LE (Debug RPMs) rhel-7-for-power-le-source-rpms Red Hat Enterprise Linux 7 for IBM Power LE (Source RPMs) rhel-7-for-power-le-extras-rpms Red Hat Enterprise Linux 7 for IBM Power LE - Extras (RPMs) rhel-7-for-power-le-extras-debug-rpms Red Hat Enterprise Linux 7 for IBM Power LE - Extras (Debug RPMs) rhel-7-for-power-le-extras-source-rpms Red Hat Enterprise Linux 7 for IBM Power LE - Extras (Source RPMs) rhel-7-for-power-le-optional-rpms Red Hat Enterprise Linux 7 for IBM Power LE - Optional (RPMs) rhel-7-for-power-le-optional-debug-rpms Red Hat Enterprise Linux 7 for IBM Power LE - Optional (Debug RPMs) rhel-7-for-power-le-optional-source-rpms Red Hat Enterprise Linux 7 for IBM Power LE - Optional (Source RPMs) 1.4. Getting container tools in RHEL 7 To get an environment where you can work with individual containers, you can install a Red Hat Enterprise Linux 7 system. Using the RHEL 7 subscription model, if you want to create images or containers, you must properly register and entitle the host computer on which you build them. 
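On IBM Power 8 systems, a sketch of enabling the equivalent repositories uses the Power LE names from Table 1.2 above instead of the X86_64 names shown later in this procedure (assuming the system is already registered and attached to a suitable subscription):

subscription-manager repos --enable=rhel-7-for-power-le-rpms
subscription-manager repos --enable=rhel-7-for-power-le-extras-rpms
subscription-manager repos --enable=rhel-7-for-power-le-optional-rpms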
When you use yum install within a container to add packages, the container automatically has access to entitlements available from the RHEL 7 host, so it can get RPM packages from any repository enabled on that host. Install RHEL : If you are ready to begin, you can start by installing a Red Hat Enterprise Linux system (Workstation or Server edition) as described in the following: Red Hat Enterprise Linux 7 Installation Guide Note Running containers on RHEL 7 Workstations has some limitations: Standard single-user, single-node rules apply to running containers on RHEL Workstations. Only Universal Base Image (UBI) content is supported when you build containers on RHEL workstations. In other words, you cannot include RHEL Server RPMS. You can run containers supported by third party ISVs, such as compilers. Register RHEL : Once RHEL 7 is installed, register the system. You will be prompted to enter your user name and password. Note that the user name and password are the same as your login credentials for Red Hat Customer Portal. Choose pool ID : Determine the pool ID of a subscription that includes Red Hat Enterprise Linux Server. Type the following at a shell prompt to display a list of all subscriptions that are available for your system, then attach the pool ID of one that meets that requirement: Enable repositories : Enable the following repositories, which will allow you to install the docker package and related software: NOTE : The repos shown here are for X86_64 architectures. See Supported Architectures for Containers on RHEL to learn the names of repositories for other architectures. It is possible that some Red Hat subscriptions include enabled repositories that can conflict with each other. If you believe that has happened, before enabling the repos shown above, you can disable all repos. See the How are repositories enabled solution for information on how to disable unwanted repositories. Install packages : To install the podman , skopeo , and buildah packages, type the following: 1.5. Enabling container settings No container engine (such as Docker or CRI-O) is required for you to run containers on your local system. However, configuration settings in the /etc/containers/registries.conf file let you define access to container registries when you work with container tools such as podman and buildah . Here are example settings in the /etc/containers/registries.conf file: By default, the podman search command searches for container images from registries listed in the [registries.search] section of the registries.conf file in the given order. In this case, podman search looks for the requested image in registry.access.redhat.com , registry.redhat.io , and docker.io , in that order. To add access to a registry that doesn't require authentication (an insecure registry), you must add the name of that registry under the [registries.insecure] section. Any registries that you want to disallow access to from your local system need to be added under the [registries.block] section. 1.6. Using containers as root or rootless At first, root privilege (either as the root user or as a regular user with sudo privilege) was required to work with container tools in RHEL. As of RHEL 7.7, the rootless container feature (currently a Technology Preview) lets regular user accounts work with containers. All container tools described in this document can be run as the root user. For restrictions on running these from regular user accounts, see the rootless containers section of the Managing Containers guide.
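As a quick rootless illustration (a sketch only; this assumes the Technology Preview rootless feature is available on your RHEL 7.7 system), a regular user account can try running a small container without sudo:

podman run --rm registry.access.redhat.com/ubi7/ubi cat /etc/os-release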
1.7. Working with container images Using podman, you can run, investigate, start, stop, and remove container images. If you are familiar with the docker command, you will notice that you can use the same syntax with podman to work with containers and container images. 1.7.1. Getting images from registries To get images from a remote registry (such as Red Hat's own Docker registry) and add them to your local system, use the podman pull command: The <registry> is a host that provides the registry service on TCP <port> (default: 5000). Together, <namespace> and <name> identify a particular image controlled by <namespace> at that registry. Some registries also support raw <name> ; for those, <namespace> is optional. When it is included, however, the additional level of hierarchy that <namespace> provides is useful to distinguish between images with the same <name> . For example: Namespace Examples ( <namespace> / <name> ) organization redhat/kubernetes , google/kubernetes login (user name) alice/application , bob/application role devel/database , test/database , prod/database The registries that Red Hat supports are registry.redhat.io (requiring authentication) and registry.access.redhat.com (requires no authentication, but is deprecated). For details on the transition to registry.redhat.io, see Red Hat Container Registry Authentication . Before you can pull containers from registry.redhat.io, you need to authenticate. For example: To get started with container images, you can use the pull option to pull an image from a remote registry. To pull the RHEL 7 UBI base image and rsyslog image from the Red Hat registry, type: An image is identified by a repository name (registry.access.redhat.com), a namespace name (rhel7) and the image name (rsyslog). You could also add a tag (which defaults to :latest if not entered). The repository name rhel7 , when passed to the podman pull command without the name of a registry preceding it, is ambiguous and could result in the retrieval of an image that originates from an untrusted registry. If there are multiple versions of the same image, adding a tag, such as latest to form a name such as rsyslog:latest , lets you choose the image more explicitly. To see the images that resulted from the above podman pull command, along with any other images on your system, type podman images : 1.7.2. Investigating images Using podman images you can see which images have been pulled to your local system. To look at the metadata associated with an image, use podman inspect . 1.7.2.1. Listing images To see which images have been pulled to your local system and are available to use, type: 1.7.2.2. Inspecting local images After you pull an image to your local system and before you run it, it is a good idea to investigate that image. Reasons for investigating an image before you run it include: Understanding what the image does Checking what software is inside the image The podman inspect command displays basic information about what an image does. You also have the option of mounting the image to your host system and using tools from the host to investigate what's in the image. Here is an example of investigating what a container image does before you run it: Inspect an image : Run podman inspect to see what command is executed when you run the container image, as well as other information. 
Here are examples of examining the ubi7/ubi and rhel7/rsyslog container images (with only snippets of information shown here): The ubi7/ubi container will execute the bash shell, if no other argument is given when you start it with podman run . If an Entrypoint were set, its value would be used instead of the Cmd value (and the value of Cmd would be used as an argument to the Entrypoint command). In the second example, the rhel7/rsyslog container image has built-in install and run labels. Those labels give an indication of how the container is meant to be set up on the system (install) and executed (run). You would use the podman command instead of docker . Mount a container : Using the podman command, mount an active container to further investigate its contents. This example runs and lists a running rsyslog container, then displays the mount point from which you can examine the contents of its file system: After running the podman mount command, the contents of the container are accessible from the listed directory on the host. Use ls to explore the contents of the image. Check the image's package list : To check the packages installed in the container, tell the rpm command to examine the packages installed on the container's mount point: 1.7.2.3. Inspecting remote images To inspect a container image before you pull it to your system, you can use the skopeo inspect command. With skopeo inspect , you can display information about an image that resides in a remote container registry. The following command inspects the rhel-init image from the Red Hat registry: 1.7.3. Tagging images You can add names to images to make it more intuitive to understand what they contain. Tagging images can also be used to identify the target registry for which the image is intended. Using the podman tag command, you essentially add an alias to the image that can consist of several parts. Those parts can include: registryhost/username/NAME:tag You can add just NAME if you like. For example: Using podman tag , the name myrhel7 now also is attached to the ubi7/ubi image (image ID 967cb403b7ee) on your system. So you could run this container by name (myrhel7) or by image ID. Notice that without adding a :tag to the name, it was assigned :latest as the tag. You could have set the tag to 7.7 as follows: To the beginning of the name, you can optionally add a user name and/or a registry name. The user name is actually the repository on Docker.io or other registry that relates to the user account that owns the repository. Here's an example of adding a user name: Above, you can see all the image names assigned to the single image ID. 1.7.4. Saving and importing images If you want to save a container image you created, you can use podman save to save the image to a tarball. After that, you can store it or send it to someone else, then reload the image later to reuse it. Here is an example of saving an image as a tarball: The myrsyslog.tar file should now be stored in your current directory. Later, when you are ready to reuse the tarball as a container image, you can import it to another podman environment as follows: 1.7.5. Removing Images To see a list of images that are on your system, run the podman images command. To remove images you no longer need, use the podman rmi command, with the image ID or name as an option. (You must stop any containers run from an image before you can remove the image.) 
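For instance, a minimal sketch of that stop-then-remove ordering (the container and image names are illustrative):

podman stop mybash
podman rm mybash
podman rmi registry.access.redhat.com/ubi7/ubi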
Here is an example: You can remove multiple images on the same command line: If you want to clear out all your images, you could use a command like the following to remove all images from your local registry (make sure you mean it before you do this!): To remove images that have multiple names (tags) associated with them, you need to add the force option to remove them. For example: 1.8. Working with containers Containers represent a running or stopped process spawned from the files located in a decompressed container image. Tools for running containers and working with them are described in this section. 1.8.1. Running containers When you execute a podman run command, you essentially spin up and create a new container from a container image. The command you pass on the podman run command line sees the inside of the container as its running environment so, by default, very little can be seen of the host system. For example, by default, the running application sees: The file system provided by the container image. A new process table from inside the container (no processes from the host can be seen). If you want to make a directory from the host available to the container, map network ports from the container to the host, limit the amount of memory the container can use, or expand the CPU shares available to the container, you can do those things from the podman run command line. Here are some examples of podman run command lines that enable different features. EXAMPLE #1 (Run a quick command) : This podman command runs the cat /etc/os-release command to see the type of operating system used as the basis for the container. After the container runs the command, the container exits and is deleted ( --rm ). EXAMPLE #2 (View the Dockerfile in the container) : This is another example of running a quick command to inspect the content of a container from the host. All layered images that Red Hat provides include the Dockerfile from which they are built in /root/buildinfo . In this case you do not need to mount any volumes from the host. Now that you know what the Dockerfile is called, you can list its contents: EXAMPLE #3 (Run a shell inside the container) : Using a container to launch a bash shell lets you look inside the container and change the contents. This sets the name of the container to mybash . The -i creates an interactive session and -t opens a terminal session. Without -i , the shell would open and then exit. Without -t , the shell would stay open, but you wouldn't be able to type anything to the shell. Once you run the command, you are presented with a shell prompt and you can start running commands from inside the container: Although the container is no longer running once you exit, the container still exists with the new software package still installed. Use podman ps -a to list the container: You could start that container again using podman start with the -ai options. For example: EXAMPLE #4 (Bind mounting log files) : One way to make log messages from inside a container available to the host system is to bind mount the host's /dev/log device inside the container. This example illustrates how to run an application in a RHEL container that is named log_test that generates log messages (just the logger command in this case) and directs those messages to the /dev/log device that is mounted in the container from the host. The --rm option removes the container after it runs. 1.8.2.
Investigating running and stopped containers After you have some running containers, you can list both those containers that are still running and those that have exited or stopped with the podman ps command. You can also use the podman inspect command to look at specific pieces of information within those containers. 1.8.2.1. Listing containers Let's say you have one or more containers running on your host. To work with containers from the host system, you can open a shell and try some of the following commands. podman ps : The ps option shows all containers that are currently running: If there are containers that are not running, but were not removed (--rm option), the containers are still hanging around and can be restarted. The podman ps -a command shows all containers, running or stopped. 1.8.2.2. Inspecting containers To inspect the metadata of an existing container, use the podman inspect command. You can show all metadata or just selected metadata for the container. For example, to show all metadata for a selected container, type: You can also use inspect to pull out particular pieces of information from a container. The information is stored in a hierarchy. So to see the container's IP address (IPAddress under NetworkSettings), use the --format option and the identity of the container. For example: Examples of other pieces of information you might want to inspect include .Path (to see the command run with the container), .Args (arguments to the command), .Config.ExposedPorts (TCP or UDP ports exposed from the container), .State.Pid (to see the process id of the container) and .HostConfig.PortBindings (port mapping from container to host). Here's an example of .State.Pid and .State.StartedAt: In the first example, you can see the process ID of the containerized executable on the host system (PID 7544). The ps -ef command confirms that it is the rsyslogd daemon running. The second example shows the date and time that the container was run. 1.8.2.3. Investigating within a container To investigate within a running container, you can use the podman exec command. With podman exec , you can run a command (such as /bin/bash ) to enter a running container process to investigate that container. The reason for using podman exec , instead of just launching the container into a bash shell, is that you can investigate the container as it is running its intended application. By attaching to the container as it is performing its intended task, you get a better view of what the container actually does, without necessarily interrupting the container's activity. Here is an example using podman exec to look into a running rsyslog, then look around inside that container. Launch a container : Launch a container such as the rsyslog container image described earlier. Type podman ps to make sure it is running: Enter the container with podman exec : Use the container ID or name to open a bash shell to access the running container. Then you can investigate the attributes of the container as follows: The commands just run from the bash shell (running inside the container) show you several things. The container was built from a RHEL release 7.7 image. The process table (ps -ef) shows that the /usr/sbin/rsyslogd command is process ID 1. Processes running in the host's process table cannot be seen from within the container, although the rsyslogd process can be seen on the host process table (it was process ID 7544 on the host). There is no separate kernel running in the container (uname -r shows the host system's kernel).
The rpm -qa command lets you see the RPM packages that are included inside the container. In other words, there is an RPM database inside of the container. Viewing memory (free -m) shows the available memory on the host (although what the container can actually use can be limited using cgroups). 1.8.3. Starting and stopping containers If you ran a container, but didn't remove it (--rm), that container is stored on your local system and ready to run again. To start a previously run container that wasn't removed, use the start option. To stop a running container, use the stop option. 1.8.3.1. Starting containers A container that doesn't need to run interactively can sometimes be restarted after being stopped with only the start option and the container ID or name. For example: To start a container so you can work with it from the local shell, use the -a (attach) and -i (interactive) options. Once the bash shell starts, run the commands you want inside the container and type exit to kill the shell and stop the container. 1.8.3.2. Stopping containers To stop a running container that is not attached to a terminal session, use the stop option and the container ID or number. For example: The stop option sends a SIGTERM signal to terminate a running container. If the container doesn't stop after a grace period (10 seconds by default), podman sends a SIGKILL signal. You could also use the podman kill command to kill a container (SIGKILL) or send a different signal to a container. Here's an example of sending a SIGHUP signal to a container (if supported by the application, a SIGHUP causes the application to re-read its configuration files): 1.8.4. Removing containers To see a list of containers that are still hanging around your system, run the podman ps -a command. To remove containers you no longer need, use the podman rm command, with the container ID or name as an option. Here is an example: You can remove multiple containers on the same command line: If you want to clear out all your containers, you could use a command like the following to remove all containers (not images) from your local system (make sure you mean it before you do this!):
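A minimal sketch of that cleanup (destructive, so review the output of podman ps -a first):

podman rm $(podman ps -a -q)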
|
[
"subscription-manager register Registering to: subscription.rhsm.redhat.com:443/subscription Username: ******** Password: **********",
"subscription-manager list --available Find valid RHEL pool ID subscription-manager attach --pool=pool_id",
"subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-7-server-extras-rpms subscription-manager repos --enable=rhel-7-server-optional-rpms",
"yum install podman skopeo buildah -y",
"[registries.search] registries = ['registry.access.redhat.com', 'registry.redhat.io', 'docker.io'] [registries.insecure] registries = [] [registries.block] registries = []",
"podman pull <registry>[:<port>]/[<namespace>/]<name>:<tag>",
"podman login registry.redhat.io Username: myusername Password: ************ Login Succeeded!",
"podman pull registry.access.redhat.com/ubi7/ubi podman pull registry.access.redhat.com/rhel7/rsyslog",
"podman images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE registry.access.redhat.com/rhel7/rsyslog latest 39ec6b2004a3 9 days ago 236 MB registry.access.redhat.com/ubi7/ubi latest 967cb403b7ee 3 days ago 215 MB",
"podman images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE registry.access.redhat.com/rhel7/rsyslog latest 39ec6b2004a3 10 days ago 236 MB registry.access.redhat.com/ubi7/ubi latest 967cb403b7ee 10 days ago 215 MB registry.access.redhat.com/rhscl/postgresql-10-rhel7 1-35 27b15d85ca6b 3 months ago 336 MB",
"podman inspect registry.access.redhat.com/ubi7/ubi Cmd\": [ \"/bin/bash\" ], \"Labels\": { \"architecture\": \"x86_64\", \"authoritative-source-url\": \"registry.access.redhat.com\", \"build-date\": \"2019-08-01T09:28:54.576292\", \"com.redhat.build-host\": \"cpt-1007.osbs.prod.upshift.rdu2.redhat.com\", \"com.redhat.component\": \"ubi7-container\", \"com.redhat.license_terms\": \"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\", \"description\": \"The Universal Base Image is designed and engineered to be",
"podman inspect registry.access.redhat.com/rhel7/rsyslog \"Cmd\": [ \"/bin/rsyslog.sh\" ], \"Labels\": { \"License\": \"GPLv3\", \"architecture\": \"x86_64\", \"install\": \"docker run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=IMAGE -e NAME=NAME IMAGE /bin/install.sh\", \"run\": \"docker run -d --privileged --name NAME --net=host --pid=host -v /etc/pki/rsyslog:/etc/pki/rsyslog -v /etc/rsyslog.conf:/etc/rsyslog.conf -v /etc/sysconfig/rsyslog:/etc/sysconfig/rsyslog -v /etc/rsyslog.d:/etc/rsyslog.d -v /var/log:/var/log -v /var/lib/rsyslog:/var/lib/rsyslog -v /run:/run -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -e IMAGE=IMAGE -e NAME=NAME --restart=always IMAGE /bin/rsyslog.sh\", \"summary\": \"A containerized version of the rsyslog utility",
"podman run -d registry.access.redhat.com/rhel7/rsyslog podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1cc92aea398d ...rsyslog:latest /bin/rsyslog.sh 37 minutes ago Up 1 day ago myrsyslog podman mount 1cc92aea398d /var/lib/containers/storage/overlay/65881e78.../merged ls /var/lib/containers/storage/overlay/65881e78*/merged bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var",
"rpm -qa --root=/var/lib/containers/storage/overlay/65881e78.../merged redhat-release-server-7.6-4.el7.x86_64 filesystem-3.2-25.el7.x86_64 basesystem-10.0-7.el7.noarch ncurses-base-5.9-14.20130511.el7_4.noarch glibc-common-2.17-260.el7.x86_64 nspr-4.19.0-1.el7_5.x86_64 libstdc++-4.8.5-36.el7.x86_64",
"skopeo inspect docker://registry.access.redhat.com/ubi7/ubi { \"Name\": \"registry.access.redhat.com/ubi7/ubi\", \"Digest\": \"sha256:caf8d01ac73911f872d184a73a5e72a1eeb7bba733cbad13e8253b567d16899f\", \"RepoTags\": [ \"latest\", \"7.6\", \"7.7\" ], \"Created\": \"2019-08-01T09:29:20.753891Z\", \"DockerVersion\": \"1.13.1\", \"Labels\": { \"architecture\": \"x86_64\", \"authoritative-source-url\": \"registry.access.redhat.com\", \"build-date\": \"2019-08-01T09:28:54.576292\", \"com.redhat.build-host\": \"cpt-1007.osbs.prod.upshift.rdu2.redhat.com\", \"com.redhat.component\": \"ubi7-container\",",
"podman tag 967cb403b7ee myrhel7",
"podman tag 967cb403b7ee myrhel7:7.7",
"podman tag 967cb403b7ee jsmith/myrhel7 podman images | grep 967cb403b7ee localhost/jsmith/myrhel7 latest 967cb403b7ee 10 days ago 215 MB localhost/myrhel7 7.7 967cb403b7ee 10 days ago 215 MB registry.access.redhat.com/ubi7/ubi latest 967cb403b7ee 10 days ago 215 MB",
"podman save -o myrsyslog.tar registry.access.redhat.com/rhel7/rsyslog ls myrsyslog.tar",
"cat myrsyslog.tar | podman import - rhel7/myrsyslog Getting image source signatures Copying blob baa75547a1df done Copying config 6722efbc0c done Writing manifest to image destination Storing signatures 6722efbc0ce5591161f773ef6371390965dc54212ac710c7517ee6d55eee6485 podman images | grep myrsyslog docker.io/rhel7/myrsyslog latest 6722efbc0ce5 50 seconds ago 236 MB",
"podman rmi rhel-init 7e85c34f126351ccb9d24e492488ba7e49820be08fe53bee02301226f2773293",
"podman rmi registry.access.redhat.com/rhel7/rsyslog support-tools 46da8e23fa1461b658f9276191b4f473f366759a6c840805ed0c9ff694aa7c2f 85cfba5cd49c84786c773a9f66b8d6fca04582d5d7b921a308f04bb8ec071205",
"podman rmi USD(podman images -a -q) 1ca061b47bd70141d11dcb2272dee0f9ea3f76e9afd71cd121a000f3f5423731 ed904b8f2d5c1b5502dea190977e066b4f76776b98f6d5aa1e389256d5212993 83508706ef1b603e511b1b19afcb5faab565053559942db5d00415fb1ee21e96",
"podman rmi USD(podman images -a -q) unable to delete eb205f07ce7d0bb63bfe5603ef8964648536963e2eee51a3ebddf6cfe62985f7 (must force) - image is referred to in multiple tags unable to delete eb205f07ce7d0bb63bfe5603ef8964648536963e2eee51a3ebddf6cfe62985f7 (must force) - image is referred to in multiple tags podman rmi -f eb205f07ce7d eb205f07ce7d0bb63bfe5603ef8964648536963e2eee51a3ebddf6cfe62985f7",
"podman run --rm registry.access.redhat.com/ubi7/ubi cat /etc/os-release NAME=\"Red Hat Enterprise Linux Server\" VERSION=\"7.7 (Maipo)\" ID=\"rhel\" ID_LIKE=\"fedora\" VARIANT=\"Server\" VARIANT_ID=\"server\" VERSION_ID=\"7.7\" PRETTY_NAME=\"Red Hat Enterprise Linux Server 7.7 (Maipo)\"",
"podman run --rm registry.access.redhat.com/ubi7/ubi ls /root/buildinfo Dockerfile-ubi7-7.7-99",
"podman run --rm registry.access.redhat.com/ubi7/ubi cat /root/buildinfo/Dockerfile-ubi7-7.7-99 FROM sha256:94577870ec362083c6513cfadb00672557fc5dd360e67befde6c81b9b753d06e RUN mv -f /etc/yum.repos.d/ubi.repo /tmp || : MAINTAINER Red Hat, Inc. LABEL com.redhat.component=\"ubi7-container\" LABEL name=\"ubi7\" LABEL version=\"7.7\"",
"podman run --name=mybash -it registry.access.redhat.com/ubi7/ubi /bin/bash yum install nmap ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 00:46 pts/0 00:00:00 /bin/bash root 35 1 0 00:51 pts/0 00:00:00 ps -ef exit",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES IS INFRA 1ca061b47bd7 .../ubi8/ubi:latest /bin/bash 8 minutes ago Exited 12 seconds ago musing_brown false",
"podman start -ai mybash",
"podman run --name=\"log_test\" -v /dev/log:/dev/log --rm registry.access.redhat.com/ubi7/ubi logger \"Testing logging to the host\" journalctl -b | grep Testing Aug 11 18:22:48 rhel76_01 root[6237]: Testing logging to the host",
"podman run -d registry.access.redhat.com/rhel7/rsyslog podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 50e5021715e4 rsyslog:latest /bin/rsyslog.sh 5 seconds ago Up 3 seconds ago epic_torvalds",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 86a6f6962042 ubi7/ubi:latest /bin/bash 11 minutes ago Exited (0) 6 minutes ago mybash",
"podman inspect 50e5021715e4 [ { \"Id\": \"50e5021715e4829a3a37255145056ba0dc634892611a2f1d71c647cf9c9aa1d5\", \"Created\": \"2019-08-11T18:27:55.493059669-04:00\", \"Path\": \"/bin/rsyslog.sh\", \"Args\": [ \"/bin/rsyslog.sh\" ], \"State\": { \"OciVersion\": \"1.0.1-dev\", \"Status\": \"running\", \"Running\": true,",
"podman inspect --format='{{.NetworkSettings.IPAddress}}' 50e5021715e4 10.88.0.36",
"podman inspect --format='{{.State.Pid}}' 50e5021715e4 7544 ps -ef | grep 7544 root 7544 7531 0 10:30 ? 00:00:00 /usr/sbin/rsyslogd -n podman inspect --format='{{.State.StartedAt}}' 50e5021715e4 2019-08-11 18:27:57.946930227 -0400 EDT",
"podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 50e5021715e4 rsyslog:latest \"/usr/rsyslog.sh 6 minutes ago Up 6 minutes rsyslog",
"podman exec -it 50e5021715e4 /bin/bash cat /etc/redhat-release Red Hat Enterprise Linux release 7.7 (Maipo) ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 15:30 ? 00:00:00 /usr/sbin/rsyslogd -n root 8 0 6 16:01 pts/0 00:00:00 /bin/bash root 21 8 0 16:01 pts/0 00:00:00 ps -ef df -h Filesystem Size Used Avail Use% Mounted on overlay 39G 2.5G 37G 7% / tmpfs 64M 0 64M 0% /dev tmpfs 1.5G 8.7M 1.5G 1% /etc/hosts shm 63M 0 63M 0% /dev/shm tmpfs 1.5G 0 1.5G 0% /sys/fs/cgroup tmpfs 1.5G 0 1.5G 0% /proc/acpi tmpfs 1.5G 0 1.5G 0% /proc/scsi tmpfs 1.5G 0 1.5G 0% /sys/firmware tmpfs 1.5G 0 1.5G 0% /sys/fs/selinux uname -r 3.10.0-957.27.2.el7.x86_64 rpm -qa | more tzdata-2019b-1.el7.noarch setup-2.8.71-10.el7.noarch basesystem-10.0-7.el7.noarch ncurses-base-5.9-14.20130511.el7_4.noarch bash-4.2# free -m total used free shared buff/cache available Mem: 7792 2305 1712 18 3774 5170 Swap: 2047 0 2047 exit",
"podman start myrhel_httpd myrhel_httpd",
"podman start -a -i agitated_hopper exit",
"podman stop 74b1da000a11 74b1da000a114015886c557deec8bed9dfb80c888097aa83f30ca4074ff55fb2",
"podman kill --signal=\"SIGHUP\" 74b1da000a11 74b1da000a114015886c557deec8bed9dfb80c888097aa83f30ca4074ff55fb2",
"podman rm goofy_wozniak",
"podman rm clever_yonath furious_shockley drunk_newton",
"podman rm USD(podman ps -a -q)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_containers/get_started_with_linux_containers
|
High Availability Guide
|
High Availability Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
|
[
"aws ec2 create-vpc --cidr-block 192.168.0.0/16 --tag-specifications \"ResourceType=vpc, Tags=[{Key=AuroraCluster,Value=keycloak-aurora}]\" \\ 1 --region eu-west-1",
"{ \"Vpc\": { \"CidrBlock\": \"192.168.0.0/16\", \"DhcpOptionsId\": \"dopt-0bae7798158bc344f\", \"State\": \"pending\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"InstanceTenancy\": \"default\", \"Ipv6CidrBlockAssociationSet\": [], \"CidrBlockAssociationSet\": [ { \"AssociationId\": \"vpc-cidr-assoc-09a02a83059ba5ab6\", \"CidrBlock\": \"192.168.0.0/16\", \"CidrBlockState\": { \"State\": \"associated\" } } ], \"IsDefault\": false } }",
"aws ec2 create-subnet --availability-zone \"eu-west-1a\" --vpc-id vpc-0b40bd7c59dbe4277 --cidr-block 192.168.0.0/19 --region eu-west-1",
"{ \"Subnet\": { \"AvailabilityZone\": \"eu-west-1a\", \"AvailabilityZoneId\": \"euw1-az3\", \"AvailableIpAddressCount\": 8187, \"CidrBlock\": \"192.168.0.0/19\", \"DefaultForAz\": false, \"MapPublicIpOnLaunch\": false, \"State\": \"available\", \"SubnetId\": \"subnet-0d491a1a798aa878d\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"AssignIpv6AddressOnCreation\": false, \"Ipv6CidrBlockAssociationSet\": [], \"SubnetArn\": \"arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-0d491a1a798aa878d\", \"EnableDns64\": false, \"Ipv6Native\": false, \"PrivateDnsNameOptionsOnLaunch\": { \"HostnameType\": \"ip-name\", \"EnableResourceNameDnsARecord\": false, \"EnableResourceNameDnsAAAARecord\": false } } }",
"aws ec2 create-subnet --availability-zone \"eu-west-1b\" --vpc-id vpc-0b40bd7c59dbe4277 --cidr-block 192.168.32.0/19 --region eu-west-1",
"{ \"Subnet\": { \"AvailabilityZone\": \"eu-west-1b\", \"AvailabilityZoneId\": \"euw1-az1\", \"AvailableIpAddressCount\": 8187, \"CidrBlock\": \"192.168.32.0/19\", \"DefaultForAz\": false, \"MapPublicIpOnLaunch\": false, \"State\": \"available\", \"SubnetId\": \"subnet-057181b1e3728530e\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"AssignIpv6AddressOnCreation\": false, \"Ipv6CidrBlockAssociationSet\": [], \"SubnetArn\": \"arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-057181b1e3728530e\", \"EnableDns64\": false, \"Ipv6Native\": false, \"PrivateDnsNameOptionsOnLaunch\": { \"HostnameType\": \"ip-name\", \"EnableResourceNameDnsARecord\": false, \"EnableResourceNameDnsAAAARecord\": false } } }",
"aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0b40bd7c59dbe4277 --region eu-west-1",
"{ \"RouteTables\": [ { \"Associations\": [ { \"Main\": true, \"RouteTableAssociationId\": \"rtbassoc-02dfa06f4c7b4f99a\", \"RouteTableId\": \"rtb-04a644ad3cd7de351\", \"AssociationState\": { \"State\": \"associated\" } } ], \"PropagatingVgws\": [], \"RouteTableId\": \"rtb-04a644ad3cd7de351\", \"Routes\": [ { \"DestinationCidrBlock\": \"192.168.0.0/16\", \"GatewayId\": \"local\", \"Origin\": \"CreateRouteTable\", \"State\": \"active\" } ], \"Tags\": [], \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\" } ] }",
"aws ec2 associate-route-table --route-table-id rtb-04a644ad3cd7de351 --subnet-id subnet-0d491a1a798aa878d --region eu-west-1",
"aws ec2 associate-route-table --route-table-id rtb-04a644ad3cd7de351 --subnet-id subnet-057181b1e3728530e --region eu-west-1",
"aws rds create-db-subnet-group --db-subnet-group-name keycloak-aurora-subnet-group --db-subnet-group-description \"Aurora DB Subnet Group\" --subnet-ids subnet-0d491a1a798aa878d subnet-057181b1e3728530e --region eu-west-1",
"aws ec2 create-security-group --group-name keycloak-aurora-security-group --description \"Aurora DB Security Group\" --vpc-id vpc-0b40bd7c59dbe4277 --region eu-west-1",
"{ \"GroupId\": \"sg-0d746cc8ad8d2e63b\" }",
"aws rds create-db-cluster --db-cluster-identifier keycloak-aurora --database-name keycloak --engine aurora-postgresql --engine-version USD{properties[\"aurora-postgresql.version\"]} --master-username keycloak --master-user-password secret99 --vpc-security-group-ids sg-0d746cc8ad8d2e63b --db-subnet-group-name keycloak-aurora-subnet-group --region eu-west-1",
"{ \"DBCluster\": { \"AllocatedStorage\": 1, \"AvailabilityZones\": [ \"eu-west-1b\", \"eu-west-1c\", \"eu-west-1a\" ], \"BackupRetentionPeriod\": 1, \"DatabaseName\": \"keycloak\", \"DBClusterIdentifier\": \"keycloak-aurora\", \"DBClusterParameterGroup\": \"default.aurora-postgresql15\", \"DBSubnetGroup\": \"keycloak-aurora-subnet-group\", \"Status\": \"creating\", \"Endpoint\": \"keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\", \"ReaderEndpoint\": \"keycloak-aurora.cluster-ro-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\", \"MultiAZ\": false, \"Engine\": \"aurora-postgresql\", \"EngineVersion\": \"15.3\", \"Port\": 5432, \"MasterUsername\": \"keycloak\", \"PreferredBackupWindow\": \"02:21-02:51\", \"PreferredMaintenanceWindow\": \"fri:03:34-fri:04:04\", \"ReadReplicaIdentifiers\": [], \"DBClusterMembers\": [], \"VpcSecurityGroups\": [ { \"VpcSecurityGroupId\": \"sg-0d746cc8ad8d2e63b\", \"Status\": \"active\" } ], \"HostedZoneId\": \"Z29XKXDKYMONMX\", \"StorageEncrypted\": false, \"DbClusterResourceId\": \"cluster-IBWXUWQYM3MS5BH557ZJ6ZQU4I\", \"DBClusterArn\": \"arn:aws:rds:eu-west-1:606671647913:cluster:keycloak-aurora\", \"AssociatedRoles\": [], \"IAMDatabaseAuthenticationEnabled\": false, \"ClusterCreateTime\": \"2023-11-01T10:40:45.964000+00:00\", \"EngineMode\": \"provisioned\", \"DeletionProtection\": false, \"HttpEndpointEnabled\": false, \"CopyTagsToSnapshot\": false, \"CrossAccountClone\": false, \"DomainMemberships\": [], \"TagList\": [], \"AutoMinorVersionUpgrade\": true, \"NetworkType\": \"IPV4\" } }",
"aws rds create-db-instance --db-cluster-identifier keycloak-aurora --db-instance-identifier \"keycloak-aurora-instance-1\" --db-instance-class db.t4g.large --engine aurora-postgresql --region eu-west-1",
"aws rds create-db-instance --db-cluster-identifier keycloak-aurora --db-instance-identifier \"keycloak-aurora-instance-2\" --db-instance-class db.t4g.large --engine aurora-postgresql --region eu-west-1",
"aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-1 --region eu-west-1 aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-2 --region eu-west-1",
"aws rds describe-db-clusters --db-cluster-identifier keycloak-aurora --query 'DBClusters[*].Endpoint' --region eu-west-1 --output text",
"[ \"keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\" ]",
"aws ec2 describe-vpcs --filters \"Name=tag:AuroraCluster,Values=keycloak-aurora\" --query 'Vpcs[*].VpcId' --region eu-west-1 --output text",
"vpc-0b40bd7c59dbe4277",
"NODE=USD(oc get nodes --selector=node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}') aws ec2 describe-instances --filters \"Name=private-dns-name,Values=USD{NODE}\" --query 'Reservations[0].Instances[0].VpcId' --region eu-west-1 --output text",
"vpc-0b721449398429559",
"aws ec2 create-vpc-peering-connection --vpc-id vpc-0b721449398429559 \\ 1 --peer-vpc-id vpc-0b40bd7c59dbe4277 \\ 2 --peer-region eu-west-1 --region eu-west-1",
"{ \"VpcPeeringConnection\": { \"AccepterVpcInfo\": { \"OwnerId\": \"606671647913\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"Region\": \"eu-west-1\" }, \"ExpirationTime\": \"2023-11-08T13:26:30+00:00\", \"RequesterVpcInfo\": { \"CidrBlock\": \"10.0.17.0/24\", \"CidrBlockSet\": [ { \"CidrBlock\": \"10.0.17.0/24\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b721449398429559\", \"Region\": \"eu-west-1\" }, \"Status\": { \"Code\": \"initiating-request\", \"Message\": \"Initiating Request to 606671647913\" }, \"Tags\": [], \"VpcPeeringConnectionId\": \"pcx-0cb23d66dea3dca9f\" } }",
"aws ec2 wait vpc-peering-connection-exists --vpc-peering-connection-ids pcx-0cb23d66dea3dca9f",
"aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0cb23d66dea3dca9f --region eu-west-1",
"{ \"VpcPeeringConnection\": { \"AccepterVpcInfo\": { \"CidrBlock\": \"192.168.0.0/16\", \"CidrBlockSet\": [ { \"CidrBlock\": \"192.168.0.0/16\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"Region\": \"eu-west-1\" }, \"RequesterVpcInfo\": { \"CidrBlock\": \"10.0.17.0/24\", \"CidrBlockSet\": [ { \"CidrBlock\": \"10.0.17.0/24\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b721449398429559\", \"Region\": \"eu-west-1\" }, \"Status\": { \"Code\": \"provisioning\", \"Message\": \"Provisioning\" }, \"Tags\": [], \"VpcPeeringConnectionId\": \"pcx-0cb23d66dea3dca9f\" } }",
"ROSA_PUBLIC_ROUTE_TABLE_ID=USD(aws ec2 describe-route-tables --filters \"Name=vpc-id,Values=vpc-0b721449398429559\" \"Name=association.main,Values=true\" \\ 1 --query \"RouteTables[*].RouteTableId\" --output text --region eu-west-1 ) aws ec2 create-route --route-table-id USD{ROSA_PUBLIC_ROUTE_TABLE_ID} --destination-cidr-block 192.168.0.0/16 \\ 2 --vpc-peering-connection-id pcx-0cb23d66dea3dca9f --region eu-west-1",
"AURORA_SECURITY_GROUP_ID=USD(aws ec2 describe-security-groups --filters \"Name=group-name,Values=keycloak-aurora-security-group\" --query \"SecurityGroups[*].GroupId\" --region eu-west-1 --output text ) aws ec2 authorize-security-group-ingress --group-id USD{AURORA_SECURITY_GROUP_ID} --protocol tcp --port 5432 --cidr 10.0.17.0/24 \\ 1 --region eu-west-1",
"{ \"Return\": true, \"SecurityGroupRules\": [ { \"SecurityGroupRuleId\": \"sgr-0785d2f04b9cec3f5\", \"GroupId\": \"sg-0d746cc8ad8d2e63b\", \"GroupOwnerId\": \"606671647913\", \"IsEgress\": false, \"IpProtocol\": \"tcp\", \"FromPort\": 5432, \"ToPort\": 5432, \"CidrIpv4\": \"10.0.17.0/24\" } ] }",
"USER=keycloak 1 PASSWORD=secret99 2 DATABASE=keycloak 3 HOST=USD(aws rds describe-db-clusters --db-cluster-identifier keycloak-aurora \\ 4 --query 'DBClusters[*].Endpoint' --region eu-west-1 --output text ) run -i --tty --rm debug --image=postgres:15 --restart=Never -- psql postgresql://USD{USER}:USD{PASSWORD}@USD{HOST}/USD{DATABASE}",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: labels: app: keycloak name: keycloak namespace: keycloak spec: hostname: hostname: <KEYCLOAK_URL_HERE> resources: requests: cpu: \"2\" memory: \"1250M\" limits: cpu: \"6\" memory: \"2250M\" db: vendor: postgres url: jdbc:aws-wrapper:postgresql://<AWS_AURORA_URL_HERE>:5432/keycloak poolMinSize: 30 1 poolInitialSize: 30 poolMaxSize: 30 usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password image: <KEYCLOAK_IMAGE_HERE> 2 startOptimized: false 3 features: enabled: - multi-site 4 transaction: xaEnabled: false 5 additionalOptions: - name: http-max-queued-requests value: \"1000\" - name: log-console-output value: json - name: metrics-enabled 6 value: 'true' - name: http-pool-max-threads 7 value: \"66\" - name: db-driver value: software.amazon.jdbc.Driver http: tlsSecret: keycloak-tls-secret instances: 3",
"wait --for=condition=Ready keycloaks.k8s.keycloak.org/keycloak wait --for=condition=RollingUpdate=False keycloaks.k8s.keycloak.org/keycloak",
"spec: additionalOptions: - name: http-max-queued-requests value: \"1000\"",
"spec: ingress: enabled: true annotations: # When running load tests, disable sticky sessions on the OpenShift HAProxy router # to avoid receiving all requests on a single Red Hat build of Keycloak Pod. haproxy.router.openshift.io/balance: roundrobin haproxy.router.openshift.io/disable_cookies: 'true'",
"credentials: - username: developer password: strong-password roles: - admin",
"apiVersion: v1 kind: Secret type: Opaque metadata: name: connect-secret namespace: keycloak data: identities.yaml: Y3JlZGVudGlhbHM6CiAgLSB1c2VybmFtZTogZGV2ZWxvcGVyCiAgICBwYXNzd29yZDogc3Ryb25nLXBhc3N3b3JkCiAgICByb2xlczoKICAgICAgLSBhZG1pbgo= 1",
"create secret generic connect-secret --from-file=identities.yaml",
"apiVersion: v1 kind: Secret metadata: name: ispn-xsite-sa-token 1 annotations: kubernetes.io/service-account.name: \"xsite-sa\" 2 type: kubernetes.io/service-account-token",
"create sa -n keycloak xsite-sa policy add-role-to-user view -n keycloak -z xsite-sa create -f xsite-sa-secret-token.yaml get secrets ispn-xsite-sa-token -o jsonpath=\"{.data.token}\" | base64 -d > Site-A-token.txt",
"create sa -n keycloak xsite-sa policy add-role-to-user view -n keycloak -z xsite-sa create -f xsite-sa-secret-token.yaml get secrets ispn-xsite-sa-token -o jsonpath=\"{.data.token}\" | base64 -d > Site-B-token.txt",
"create secret generic -n keycloak xsite-token-secret --from-literal=token=\"USD(cat Site-B-token.txt)\"",
"create secret generic -n keycloak xsite-token-secret --from-literal=token=\"USD(cat Site-A-token.txt)\"",
"-n keycloak create secret generic xsite-keystore-secret --from-file=keystore.p12=\"./certs/keystore.p12\" \\ 1 --from-literal=password=secret \\ 2 --from-literal=type=pkcs12 3",
"-n keycloak create secret generic xsite-truststore-secret --from-file=truststore.p12=\"./certs/truststore.p12\" \\ 1 --from-literal=password=caSecret \\ 2 --from-literal=type=pkcs12 3",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan 1 namespace: keycloak annotations: infinispan.org/monitoring: 'true' 2 spec: replicas: 3 security: endpointSecretName: connect-secret 3 service: type: DataGrid sites: local: name: site-a 4 expose: type: Route 5 maxRelayNodes: 128 encryption: transportKeyStore: secretName: xsite-keystore-secret 6 alias: xsite 7 filename: keystore.p12 8 routerKeyStore: secretName: xsite-keystore-secret 9 alias: xsite 10 filename: keystore.p12 11 trustStore: secretName: xsite-truststore-secret 12 filename: truststore.p12 13 locations: - name: site-b 14 clusterName: infinispan namespace: keycloak 15 url: openshift://api.site-b 16 secretName: xsite-token-secret 17",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan 1 namespace: keycloak annotations: infinispan.org/monitoring: 'true' 2 spec: replicas: 3 security: endpointSecretName: connect-secret 3 service: type: DataGrid sites: local: name: site-b 4 expose: type: Route 5 maxRelayNodes: 128 encryption: transportKeyStore: secretName: xsite-keystore-secret 6 alias: xsite 7 filename: keystore.p12 8 routerKeyStore: secretName: xsite-keystore-secret 9 alias: xsite 10 filename: keystore.p12 11 trustStore: secretName: xsite-truststore-secret 12 filename: truststore.p12 13 locations: - name: site-a 14 clusterName: infinispan namespace: keycloak 15 url: openshift://api.site-a 16 secretName: xsite-token-secret 17",
"apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: sessions namespace: keycloak spec: clusterName: infinispan name: sessions template: |- distributedCache: mode: \"SYNC\" owners: \"2\" statistics: \"true\" remoteTimeout: 14000 stateTransfer: chunkSize: 16 backups: mergePolicy: ALWAYS_REMOVE 1 site-b: 2 backup: strategy: \"SYNC\" 3 timeout: 13000 stateTransfer: chunkSize: 16",
"apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: sessions namespace: keycloak spec: clusterName: infinispan name: sessions template: |- distributedCache: mode: \"SYNC\" owners: \"2\" statistics: \"true\" remoteTimeout: 14000 stateTransfer: chunkSize: 16 backups: mergePolicy: ALWAYS_REMOVE 1 site-a: 2 backup: strategy: \"SYNC\" 3 timeout: 13000 stateTransfer: chunkSize: 16",
"wait --for condition=WellFormed --timeout=300s infinispans.infinispan.org -n keycloak infinispan",
"wait --for condition=CrossSiteViewFormed --timeout=300s infinispans.infinispan.org -n keycloak infinispan",
"apiVersion: v1 kind: Secret metadata: name: remote-store-secret namespace: keycloak type: Opaque data: username: ZGV2ZWxvcGVy # base64 encoding for 'developer' password: c2VjdXJlX3Bhc3N3b3Jk # base64 encoding for 'secure_password'",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: labels: app: keycloak name: keycloak namespace: keycloak spec: additionalOptions: - name: cache-remote-host 1 value: \"infinispan.keycloak.svc\" - name: cache-remote-port 2 value: \"11222\" - name: cache-remote-username 3 secret: name: remote-store-secret key: username - name: cache-remote-password 4 secret: name: remote-store-secret key: password - name: spi-connections-infinispan-quarkus-site-name 5 value: keycloak",
"HOSTNAME=USD(oc -n openshift-ingress get svc router-default -o jsonpath='{.status.loadBalancer.ingress[].hostname}' ) aws elbv2 describe-load-balancers --query \"LoadBalancers[?DNSName=='USD{HOSTNAME}'].{CanonicalHostedZoneId:CanonicalHostedZoneId,DNSName:DNSName}\" --region eu-west-1 \\ 1 --output json",
"[ { \"CanonicalHostedZoneId\": \"Z2IFOLAFXWLO4F\", \"DNSName\": \"ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com\" } ]",
"function createHealthCheck() { # Creating a hash of the caller reference to allow for names longer than 64 characters REF=(USD(echo USD1 | sha1sum )) aws route53 create-health-check --caller-reference \"USDREF\" --query \"HealthCheck.Id\" --no-cli-pager --output text --health-check-config ' { \"Type\": \"HTTPS\", \"ResourcePath\": \"/lb-check\", \"FullyQualifiedDomainName\": \"'USD1'\", \"Port\": 443, \"RequestInterval\": 30, \"FailureThreshold\": 1, \"EnableSNI\": true } ' } CLIENT_DOMAIN=\"client.keycloak-benchmark.com\" 1 PRIMARY_DOMAIN=\"primary.USD{CLIENT_DOMAIN}\" 2 BACKUP_DOMAIN=\"backup.USD{CLIENT_DOMAIN}\" 3 createHealthCheck USD{PRIMARY_DOMAIN} createHealthCheck USD{BACKUP_DOMAIN}",
"233e180f-f023-45a3-954e-415303f21eab 1 799e2cbb-43ae-4848-9b72-0d9173f04912 2",
"HOSTED_ZONE_ID=\"Z09084361B6LKQQRCVBEY\" 1 PRIMARY_LB_HOSTED_ZONE_ID=\"Z2IFOLAFXWLO4F\" PRIMARY_LB_DNS=ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com PRIMARY_HEALTH_ID=233e180f-f023-45a3-954e-415303f21eab BACKUP_LB_HOSTED_ZONE_ID=\"Z2IFOLAFXWLO4F\" BACKUP_LB_DNS=a184a0e02a5d44a9194e517c12c2b0ec-1203036292.elb.eu-west-1.amazonaws.com BACKUP_HEALTH_ID=799e2cbb-43ae-4848-9b72-0d9173f04912 aws route53 change-resource-record-sets --hosted-zone-id Z09084361B6LKQQRCVBEY --query \"ChangeInfo.Id\" --output text --change-batch ' { \"Comment\": \"Creating Record Set for 'USD{CLIENT_DOMAIN}'\", \"Changes\": [{ \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{PRIMARY_DOMAIN}'\", \"Type\": \"A\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{PRIMARY_LB_HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{PRIMARY_LB_DNS}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{BACKUP_DOMAIN}'\", \"Type\": \"A\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{BACKUP_LB_HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{BACKUP_LB_DNS}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{CLIENT_DOMAIN}'\", \"Type\": \"A\", \"SetIdentifier\": \"client-failover-primary-'USD{SUBDOMAIN}'\", \"Failover\": \"PRIMARY\", \"HealthCheckId\": \"'USD{PRIMARY_HEALTH_ID}'\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{PRIMARY_DOMAIN}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{CLIENT_DOMAIN}'\", \"Type\": \"A\", \"SetIdentifier\": \"client-failover-backup-'USD{SUBDOMAIN}'\", \"Failover\": \"SECONDARY\", \"HealthCheckId\": \"'USD{BACKUP_HEALTH_ID}'\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{BACKUP_DOMAIN}'\", \"EvaluateTargetHealth\": true } } }] } '",
"/change/C053410633T95FR9WN3YI",
"aws route53 wait resource-record-sets-changed --id /change/C053410633T95FR9WN3YI",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: keycloak spec: hostname: hostname: USD{CLIENT_DOMAIN} 1",
"cat <<EOF | oc apply -n USDNAMESPACE -f - 1 apiVersion: route.openshift.io/v1 kind: Route metadata: name: aws-health-route spec: host: USDDOMAIN 2 port: targetPort: https tls: insecureEdgeTerminationPolicy: Redirect termination: passthrough to: kind: Service name: keycloak-service weight: 100 wildcardPolicy: None EOF",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"offline\" }",
"aws rds failover-db-cluster --db-cluster-identifier",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"offline\" }",
"clearcache actionTokens clearcache authenticationSessions clearcache clientSessions clearcache loginFailures clearcache offlineClientSessions clearcache offlineSessions clearcache sessions clearcache work",
"site bring-online --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"online\" }",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site push-site-state --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"online\" }",
"site push-site-status --cache=actionTokens site push-site-status --cache=authenticationSessions site push-site-status --cache=clientSessions site push-site-status --cache=loginFailures site push-site-status --cache=offlineClientSessions site push-site-status --cache=offlineSessions site push-site-status --cache=sessions site push-site-status --cache=work",
"{ \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" }",
"site push-site-state --cache=<cache-name> --site=site-b",
"site clear-push-site-status --cache=actionTokens site clear-push-site-status --cache=authenticationSessions site clear-push-site-status --cache=clientSessions site clear-push-site-status --cache=loginFailures site clear-push-site-status --cache=offlineClientSessions site clear-push-site-status --cache=offlineSessions site clear-push-site-status --cache=sessions site clear-push-site-status --cache=work",
"\"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\"",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"offline\" }",
"clearcache actionTokens clearcache authenticationSessions clearcache clientSessions clearcache loginFailures clearcache offlineClientSessions clearcache offlineSessions clearcache sessions clearcache work",
"site bring-online --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"online\" }",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site push-site-state --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"online\" }",
"site push-site-status --cache=actionTokens site push-site-status --cache=authenticationSessions site push-site-status --cache=clientSessions site push-site-status --cache=loginFailures site push-site-status --cache=offlineClientSessions site push-site-status --cache=offlineSessions site push-site-status --cache=sessions site push-site-status --cache=work",
"{ \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" }",
"site push-site-state --cache=<cache-name> --site=site-a",
"site clear-push-site-status --cache=actionTokens site clear-push-site-status --cache=authenticationSessions site clear-push-site-status --cache=clientSessions site clear-push-site-status --cache=loginFailures site clear-push-site-status --cache=offlineClientSessions site clear-push-site-status --cache=offlineSessions site clear-push-site-status --cache=sessions site clear-push-site-status --cache=work",
"\"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\"",
"aws rds failover-db-cluster --db-cluster-identifier",
"apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: take-offline namespace: keycloak 1 spec: cluster: infinispan 2 config: | 3 site take-offline --all-caches --site=site-a site status --all-caches --site=site-a",
"-n keycloak wait --for=jsonpath='{.status.phase}'=Succeeded Batch/take-offline"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/high_availability_guide/%7Blinks_server_all-config_url%7D?q=db-pool
|
Builds using Shipwright
|
Builds using Shipwright OpenShift Container Platform 4.18 An extensible build framework to build container images on an OpenShift cluster Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/builds_using_shipwright/index
|
Chapter 5. Known Issues
|
Chapter 5. Known Issues If you upgrade VMs on Azure cloud that were launched from the "gen1" image of the "RHEL for SAP" offer (now discontinued) and you see an error similar to the ones below, ensure that /etc/hosts does not contain a line of the form X.X.X.X rhui*.microsoft.com . Such a line is a leftover IP address of an Azure RHUI Content Distribution Server (CDS) instance that was used to fetch content. Error: Stderr: Host and machine ids are equal (hash): refusing to link journals Failed to synchronize cache for repo 'rhel-8-for-x86_64-appstream-eus-rhui-rpms', ignoring this repo. Failed to synchronize cache for repo 'microsoft-azure-rhel8-sapapps', ignoring this repo. Error: Unable to find a match: rhui-azure-rhel8-sapapps or Stderr: Host and machine ids are equal (hash): refusing to link journals Failed to synchronize cache for repo 'rhel-8-for-x86_64-appstream-e4s-rhui-rpms', ignoring this repo. Failed to synchronize cache for repo 'microsoft-azure-rhel8-sap-ha', ignoring this repo. Error: Unable to find a match: rhui-azure-rhel8-sap-ha
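To double-check and clean up the stale entry, a minimal shell sketch such as the following can be used (the rhui host-name pattern and the sed -i invocation are assumptions, so back up /etc/hosts before editing it):
# Look for a leftover Azure RHUI CDS entry in /etc/hosts
grep -n 'rhui.*\.microsoft\.com' /etc/hosts
# If a matching line exists, remove it; sed keeps a .bak copy of the original file
sudo sed -i.bak '/rhui.*\.microsoft\.com/d' /etc/hosts
# Verify that the RHUI repositories can be reached again
sudo yum clean all && sudo yum repolist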
|
[
"Stderr: Host and machine ids are equal (hash): refusing to link journals Failed to synchronize cache for repo 'rhel-8-for-x86_64-appstream-eus-rhui-rpms', ignoring this repo. Failed to synchronize cache for repo 'microsoft-azure-rhel8-sapapps', ignoring this repo. Error: Unable to find a match: rhui-azure-rhel8-sapapps",
"Stderr: Host and machine ids are equal (hash): refusing to link journals Failed to synchronize cache for repo 'rhel-8-for-x86_64-appstream-e4s-rhui-rpms', ignoring this repo. Failed to synchronize cache for repo 'microsoft-azure-rhel8-sap-ha', ignoring this repo. Error: Unable to find a match: rhui-azure-rhel8-sap-ha"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/upgrading_sap_environments_from_rhel_7_to_rhel_8/asmb_known_issues_asmb_upgrading_netweaver
|
Chapter 4. Endpoint authentication mechanisms
|
Chapter 4. Endpoint authentication mechanisms Data Grid Server can use custom SASL and HTTP authentication mechanisms for Hot Rod and REST endpoints. 4.1. Data Grid Server authentication Authentication restricts user access to endpoints as well as the Data Grid Console and Command Line Interface (CLI). Data Grid Server includes a "default" security realm that enforces user authentication. Default authentication uses a property realm with user credentials stored in the server/conf/users.properties file. Data Grid Server also enables security authorization by default so you must assign users with permissions stored in the server/conf/groups.properties file. Tip Use the user create command with the Command Line Interface (CLI) to add users and assign permissions. Run user create --help for examples and more information. 4.2. Configuring Data Grid Server authentication mechanisms You can explicitly configure Hot Rod and REST endpoints to use specific authentication mechanisms. Configuring authentication mechanisms is required only if you need to explicitly override the default mechanisms for a security realm. Note Each endpoint section in your configuration must include hotrod-connector and rest-connector elements or fields. For example, if you explicitly declare a hotrod-connector you must also declare a rest-connector even if it does not configure an authentication mechanism. Prerequisites Add security realms to your Data Grid Server configuration as required. Procedure Open your Data Grid Server configuration for editing. Add an endpoint element or field and specify the security realm that it uses with the security-realm attribute. Add a hotrod-connector element or field to configure the Hot Rod endpoint. Add an authentication element or field. Specify SASL authentication mechanisms for the Hot Rod endpoint to use with the sasl mechanisms attribute. If applicable, specify SASL quality of protection settings with the qop attribute. Specify the Data Grid Server identity with the server-name attribute if necessary. Add a rest-connector element or field to configure the REST endpoint. Add an authentication element or field. Specify HTTP authentication mechanisms for the REST endpoint to use with the mechanisms attribute. Save the changes to your configuration. 
Authentication mechanism configuration The following configuration specifies SASL mechanisms for the Hot Rod endpoint to use for authentication: XML <server xmlns="urn:infinispan:server:14.0"> <endpoints> <endpoint socket-binding="default" security-realm="my-realm"> <hotrod-connector> <authentication> <sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA DIGEST-MD5 PLAIN" server-name="infinispan" qop="auth"/> </authentication> </hotrod-connector> <rest-connector> <authentication mechanisms="DIGEST BASIC"/> </rest-connector> </endpoint> </endpoints> </server> JSON { "server": { "endpoints": { "endpoint": { "socket-binding": "default", "security-realm": "my-realm", "hotrod-connector": { "authentication": { "security-realm": "default", "sasl": { "server-name": "infinispan", "mechanisms": ["SCRAM-SHA-512", "SCRAM-SHA-384", "SCRAM-SHA-256", "SCRAM-SHA-1", "DIGEST-SHA-512", "DIGEST-SHA-384", "DIGEST-SHA-256", "DIGEST-SHA", "DIGEST-MD5", "PLAIN"], "qop": ["auth"] } } }, "rest-connector": { "authentication": { "mechanisms": ["DIGEST", "BASIC"], "security-realm": "default" } } } } } } YAML server: endpoints: endpoint: socketBinding: "default" securityRealm: "my-realm" hotrodConnector: authentication: securityRealm: "default" sasl: serverName: "infinispan" mechanisms: - "SCRAM-SHA-512" - "SCRAM-SHA-384" - "SCRAM-SHA-256" - "SCRAM-SHA-1" - "DIGEST-SHA-512" - "DIGEST-SHA-384" - "DIGEST-SHA-256" - "DIGEST-SHA" - "DIGEST-MD5" - "PLAIN" qop: - "auth" restConnector: authentication: mechanisms: - "DIGEST" - "BASIC" securityRealm: "default" 4.2.1. Disabling authentication In local development environments or on isolated networks you can configure Data Grid to allow unauthenticated client requests. When you disable user authentication you should also disable authorization in your Data Grid security configuration. Procedure Open your Data Grid Server configuration for editing. Remove the security-realm attribute from the endpoints element or field. Remove any authorization elements from the security configuration for the cache-container and each cache configuration. Save the changes to your configuration. XML <server xmlns="urn:infinispan:server:14.0"> <endpoints socket-binding="default"/> </server> JSON { "server": { "endpoints": { "endpoint": { "socket-binding": "default" } } } } YAML server: endpoints: endpoint: socketBinding: "default" 4.3. Data Grid Server authentication mechanisms Data Grid Server automatically configures endpoints with authentication mechanisms that match your security realm configuration. For example, if you add a Kerberos security realm then Data Grid Server enables the GSSAPI and GS2-KRB5 authentication mechanisms for the Hot Rod endpoint. Important Currently, you cannot use the Lightweight Directory Access Protocol (LDAP) protocol with the DIGEST or SCRAM authentication mechanisms, because these mechanisms require access to specific hashed passwords. 
Hot Rod endpoints Data Grid Server enables the following SASL authentication mechanisms for Hot Rod endpoints when your configuration includes the corresponding security realm: Security realm SASL authentication mechanism Property realms and LDAP realms SCRAM-*, DIGEST-* Token realms OAUTHBEARER Trust realms EXTERNAL Kerberos identities GSSAPI, GS2-KRB5 SSL/TLS identities PLAIN REST endpoints Data Grid Server enables the following HTTP authentication mechanisms for REST endpoints when your configuration includes the corresponding security realm: Security realm HTTP authentication mechanism Property realms and LDAP realms DIGEST Token realms BEARER_TOKEN Trust realms CLIENT_CERT Kerberos identities SPNEGO SSL/TLS identities BASIC 4.3.1. SASL authentication mechanisms Data Grid Server supports the following SASL authentication mechanisms with Hot Rod endpoints: Authentication mechanism Description Security realm type Related details PLAIN Uses credentials in plain-text format. You should use PLAIN authentication with encrypted connections only. Property realms and LDAP realms Similar to the BASIC HTTP mechanism. DIGEST-* Uses hashing algorithms and nonce values. Hot Rod connectors support DIGEST-MD5 , DIGEST-SHA , DIGEST-SHA-256 , DIGEST-SHA-384 , and DIGEST-SHA-512 hashing algorithms, in order of strength. Property realms and LDAP realms Similar to the Digest HTTP mechanism. SCRAM-* Uses salt values in addition to hashing algorithms and nonce values. Hot Rod connectors support SCRAM-SHA , SCRAM-SHA-256 , SCRAM-SHA-384 , and SCRAM-SHA-512 hashing algorithms, in order of strength. Property realms and LDAP realms Similar to the Digest HTTP mechanism. GSSAPI Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Similar to the SPNEGO HTTP mechanism. GS2-KRB5 Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Similar to the SPNEGO HTTP mechanism. EXTERNAL Uses client certificates. Trust store realms Similar to the CLIENT_CERT HTTP mechanism. OAUTHBEARER Uses OAuth tokens and requires a token-realm configuration. Token realms Similar to the BEARER_TOKEN HTTP mechanism. 4.3.2. SASL quality of protection (QoP) If SASL mechanisms support integrity and privacy protection (QoP) settings, you can add them to your Hot Rod endpoint configuration with the qop attribute. QoP setting Description auth Authentication only. auth-int Authentication with integrity protection. auth-conf Authentication with integrity and privacy protection. 4.3.3. SASL policies SASL policies provide fine-grained control over Hot Rod authentication mechanisms. Tip Data Grid cache authorization restricts access to caches based on roles and permissions. Configure cache authorization and then set <no-anonymous value=false /> to allow anonymous login and delegate access logic to cache authorization. Policy Description Default value forward-secrecy Use only SASL mechanisms that support forward secrecy between sessions. This means that breaking into one session does not automatically provide information for breaking into future sessions. false pass-credentials Use only SASL mechanisms that require client credentials.
false no-plain-text Do not use SASL mechanisms that are susceptible to simple plain passive attacks. false no-active Do not use SASL mechanisms that are susceptible to active, non-dictionary, attacks. false no-dictionary Do not use SASL mechanisms that are susceptible to passive dictionary attacks. false no-anonymous Do not use SASL mechanisms that accept anonymous logins. true SASL policy configuration In the following configuration the Hot Rod endpoint uses the GSSAPI mechanism for authentication because it is the only mechanism that complies with all SASL policies: XML <server xmlns="urn:infinispan:server:14.0"> <endpoints> <endpoint socket-binding="default" security-realm="default"> <hotrod-connector> <authentication> <sasl mechanisms="PLAIN DIGEST-MD5 GSSAPI EXTERNAL" server-name="infinispan" qop="auth" policy="no-active no-plain-text"/> </authentication> </hotrod-connector> <rest-connector/> </endpoint> </endpoints> </server> JSON { "server": { "endpoints" : { "endpoint" : { "socket-binding" : "default", "security-realm" : "default", "hotrod-connector" : { "authentication" : { "sasl" : { "server-name" : "infinispan", "mechanisms" : [ "PLAIN","DIGEST-MD5","GSSAPI","EXTERNAL" ], "qop" : [ "auth" ], "policy" : [ "no-active","no-plain-text" ] } } }, "rest-connector" : "" } } } } YAML server: endpoints: endpoint: socketBinding: "default" securityRealm: "default" hotrodConnector: authentication: sasl: serverName: "infinispan" mechanisms: - "PLAIN" - "DIGEST-MD5" - "GSSAPI" - "EXTERNAL" qop: - "auth" policy: - "no-active" - "no-plain-text" restConnector: ~ 4.3.4. HTTP authentication mechanisms Data Grid Server supports the following HTTP authentication mechanisms with REST endpoints: Authentication mechanism Description Security realm type Related details BASIC Uses credentials in plain-text format. You should use BASIC authentication with encrypted connections only. Property realms and LDAP realms Corresponds to the Basic HTTP authentication scheme and is similar to the PLAIN SASL mechanism. DIGEST Uses hashing algorithms and nonce values. REST connectors support SHA-512 , SHA-256 and MD5 hashing algorithms. Property realms and LDAP realms Corresponds to the Digest HTTP authentication scheme and is similar to DIGEST-* SASL mechanisms. SPNEGO Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Corresponds to the Negotiate HTTP authentication scheme and is similar to the GSSAPI and GS2-KRB5 SASL mechanisms. BEARER_TOKEN Uses OAuth tokens and requires a token-realm configuration. Token realms Corresponds to the Bearer HTTP authentication scheme and is similar to OAUTHBEARER SASL mechanism. CLIENT_CERT Uses client certificates. Trust store realms Similar to the EXTERNAL SASL mechanism.
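As a quick sanity check after configuring authentication, the REST endpoint can be probed from a shell. This is only a sketch: the user name, password, server address, and the exact CLI flags are placeholders and may differ between Data Grid versions.
# Create a user with admin permissions in the default property realm
bin/cli.sh user create myuser -p changeme -g admin
# Confirm that the REST endpoint accepts DIGEST authentication for that user
curl --digest -u myuser:changeme http://127.0.0.1:11222/rest/v2/caches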
|
[
"<server xmlns=\"urn:infinispan:server:14.0\"> <endpoints> <endpoint socket-binding=\"default\" security-realm=\"my-realm\"> <hotrod-connector> <authentication> <sasl mechanisms=\"SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA DIGEST-MD5 PLAIN\" server-name=\"infinispan\" qop=\"auth\"/> </authentication> </hotrod-connector> <rest-connector> <authentication mechanisms=\"DIGEST BASIC\"/> </rest-connector> </endpoint> </endpoints> </server>",
"{ \"server\": { \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\", \"security-realm\": \"my-realm\", \"hotrod-connector\": { \"authentication\": { \"security-realm\": \"default\", \"sasl\": { \"server-name\": \"infinispan\", \"mechanisms\": [\"SCRAM-SHA-512\", \"SCRAM-SHA-384\", \"SCRAM-SHA-256\", \"SCRAM-SHA-1\", \"DIGEST-SHA-512\", \"DIGEST-SHA-384\", \"DIGEST-SHA-256\", \"DIGEST-SHA\", \"DIGEST-MD5\", \"PLAIN\"], \"qop\": [\"auth\"] } } }, \"rest-connector\": { \"authentication\": { \"mechanisms\": [\"DIGEST\", \"BASIC\"], \"security-realm\": \"default\" } } } } } }",
"server: endpoints: endpoint: socketBinding: \"default\" securityRealm: \"my-realm\" hotrodConnector: authentication: securityRealm: \"default\" sasl: serverName: \"infinispan\" mechanisms: - \"SCRAM-SHA-512\" - \"SCRAM-SHA-384\" - \"SCRAM-SHA-256\" - \"SCRAM-SHA-1\" - \"DIGEST-SHA-512\" - \"DIGEST-SHA-384\" - \"DIGEST-SHA-256\" - \"DIGEST-SHA\" - \"DIGEST-MD5\" - \"PLAIN\" qop: - \"auth\" restConnector: authentication: mechanisms: - \"DIGEST\" - \"BASIC\" securityRealm: \"default\"",
"<server xmlns=\"urn:infinispan:server:14.0\"> <endpoints socket-binding=\"default\"/> </server>",
"{ \"server\": { \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\" } } } }",
"server: endpoints: endpoint: socketBinding: \"default\"",
"<server xmlns=\"urn:infinispan:server:14.0\"> <endpoints> <endpoint socket-binding=\"default\" security-realm=\"default\"> <hotrod-connector> <authentication> <sasl mechanisms=\"PLAIN DIGEST-MD5 GSSAPI EXTERNAL\" server-name=\"infinispan\" qop=\"auth\" policy=\"no-active no-plain-text\"/> </authentication> </hotrod-connector> <rest-connector/> </endpoint> </endpoints> </server>",
"{ \"server\": { \"endpoints\" : { \"endpoint\" : { \"socket-binding\" : \"default\", \"security-realm\" : \"default\", \"hotrod-connector\" : { \"authentication\" : { \"sasl\" : { \"server-name\" : \"infinispan\", \"mechanisms\" : [ \"PLAIN\",\"DIGEST-MD5\",\"GSSAPI\",\"EXTERNAL\" ], \"qop\" : [ \"auth\" ], \"policy\" : [ \"no-active\",\"no-plain-text\" ] } } }, \"rest-connector\" : \"\" } } } }",
"server: endpoints: endpoint: socketBinding: \"default\" securityRealm: \"default\" hotrodConnector: authentication: sasl: serverName: \"infinispan\" mechanisms: - \"PLAIN\" - \"DIGEST-MD5\" - \"GSSAPI\" - \"EXTERNAL\" qop: - \"auth\" policy: - \"no-active\" - \"no-plain-text\" restConnector: ~"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/authentication-mechanisms
|
3.7.4. Additional Information
|
3.7.4. Additional Information For more information about TLS configuration and related topics, see the resources listed below. Installed Documentation config (1) - Describes the format of the /etc/ssl/openssl.conf configuration file. ciphers (1) - Includes a list of available OpenSSL keywords and cipher strings. /usr/share/httpd/manual/mod/mod_ssl.html - Contains detailed descriptions of the directives available in the /etc/httpd/conf.d/ssl.conf configuration file used by the mod_ssl module for the Apache HTTP Server . /usr/share/httpd/manual/ssl/ssl_howto.html - Contains practical examples of real-world settings in the /etc/httpd/conf.d/ssl.conf configuration file used by the mod_ssl module for the Apache HTTP Server . Online Documentation Red Hat Enterprise Linux 6 Security-Enhanced Linux - The Security-Enhanced Linux guide for Red Hat Enterprise Linux 6 describes the basic principles of SELinux . http://tools.ietf.org/html/draft-ietf-uta-tls-bcp-00 - Recommendations for secure use of TLS and DTLS .
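As a small illustration of the ciphers (1) page, the openssl ciphers command expands a cipher string into the concrete cipher suites it matches; the cipher string below is only an example:
# List the cipher suites matched by a cipher string, with protocol and key-exchange details
openssl ciphers -v 'HIGH:!aNULL:!MD5'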
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-additional_information
|
5.175. lsof
|
5.175. lsof 5.175.1. RHBA-2012:0442 - lsof bug fix update An updated lsof package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The lsof (LiSt Open Files) package provides a utility to list information about files that are open and running on Linux and UNIX systems. Bug Fixes BZ# 747375 Previously, only the first "+e" or "-e" option was processed and the rest were ignored. Consequently it was not possible to exclude more than one file system from being subjected to kernel function calls. This update fixes this issue and lsof now functions as expected with multiple +e or -e options. BZ# 795799 Prior to this update, the lsof utility ignored the "-w" option if both the "-b" and the "-w" options were specified. As a consequence, lsof failed to suppress warning messages. Now, the -w option successfully suppresses warning messages. All users of lsof are advised to upgrade to this updated package, which fixes these bugs.
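The corrected behavior can be exercised with invocations such as the following; the mount points are examples only:
# Exclude more than one file system from kernel calls that might block; all -e options are now processed
lsof -e /nfs/projects -e /nfs/home
# Avoid blocking kernel calls and suppress the related warning messages; -w is no longer ignored when -b is present
lsof -b -w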
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/lsof
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_microsoft_azure/making-open-source-more-inclusive
|
Chapter 1. Overview
|
Chapter 1. Overview The atomic command-line tool provides a way to interact with and manage Atomic Host systems and containers. It provides a high-level, coherent entry point to the system and makes it easier to interact with special kinds of containers, such as super-privileged containers and debugging tools. The atomic command uses tools such as docker , ostree and skopeo to manage containers and container host systems. The atomic command also includes many features that are not available in the docker command, such as image signing, image verification, and the ability to install a container, which can involve mounting file systems and opening privileges. Understanding LABELs : Dockerfiles support storing default values for some commands that atomic can read and execute. These are called "LABEL" instructions; they make it easy to ship images with their own suggested values and simplify running complex docker commands. For example, if a Dockerfile contains the LABEL RUN instruction, running atomic run <image> executes its contents. The commands in atomic that use labels are install , uninstall , mount , unmount , run , and stop .
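A hypothetical sketch of the idea follows; the image name and the docker options stored in the label are assumptions, not taken from a real image:
# In the image's Dockerfile, a suggested run command is stored as a LABEL instruction, for example:
# LABEL RUN="docker run -d --name ${NAME} --privileged ${IMAGE}"
# atomic reads that label and executes its contents, substituting NAME and IMAGE for the image being run
atomic run myregistry/my-tools-image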
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/cli_reference/overview
|
2.3. The /proc Virtual File System
|
2.3. The /proc Virtual File System Unlike most file systems, /proc contains neither text nor binary files. Because it houses virtual files , the /proc is referred to as a virtual file system. These virtual files are typically zero bytes in size, even if they contain a large amount of information. The /proc file system is not used for storage. Its main purpose is to provide a file-based interface to hardware, memory, running processes, and other system components. Real-time information can be retrieved on many system components by viewing the corresponding /proc file. Some of the files within /proc can also be manipulated (by both users and applications) to configure the kernel. The following /proc files are relevant in managing and monitoring system storage: /proc/devices Displays various character and block devices that are currently configured. /proc/filesystems Lists all file system types currently supported by the kernel. /proc/mdstat Contains current information on multiple-disk or RAID configurations on the system, if they exist. /proc/mounts Lists all mounts currently used by the system. /proc/partitions Contains partition block allocation information. For more information about the /proc file system, see the Red Hat Enterprise Linux 7 Deployment Guide .
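For example, the storage-related files can be inspected directly with standard tools:
# Show all mounts currently used by the system
cat /proc/mounts
# Check software RAID status, if any MD devices are configured
cat /proc/mdstat
# List partition block allocation information
cat /proc/partitions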
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/proc-virt-fs
|
Chapter 5. Designing the application logic for a Red Hat build of Kogito microservice using DMN
|
Chapter 5. Designing the application logic for a Red Hat build of Kogito microservice using DMN After you create your project, you can create or import Decision Model and Notation (DMN) decision models and Drools Rule Language (DRL) business rules in the src/main/resources folder of your project. You can also include Java classes in the src/main/java folder of your project that act as Java services or provide implementations that you call from your decisions. The example for this procedure is a basic Red Hat build of Kogito microservice that provides a REST endpoint /persons . This endpoint is automatically generated based on an example PersonDecisions.dmn DMN model to make decisions based on the data being processed. The business decision contains the decision logic of the Red Hat Decision Manager service. You can define business rules and decisions in different ways, such as with DMN models or DRL rules. The example for this procedure uses a DMN model. Prerequisites You have created a project. For more information about creating a Maven project, see Chapter 3, Creating a Maven project for a Red Hat build of Kogito microservice . Procedure In the Maven project that you generated for your Red Hat Decision Manager service, navigate to the src/main/java/org/acme folder and add the following Person.java file: Example person Java object package org.acme; import java.io.Serializable; public class Person { private String name; private int age; private boolean adult; public String getName() { return name; } public void setName(String name) { this.name = name; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public boolean isAdult() { return adult; } public void setAdult(boolean adult) { this.adult = adult; } @Override public String toString() { return "Person [name=" + name + ", age=" + age + ", adult=" + adult + "]"; } } This example Java object sets and retrieves a person's name, age, and adult status. Navigate to the src/main/resources folder and add the following PersonDecisions.dmn DMN decision model: Figure 5.1. Example PersonDecisions DMN decision requirements diagram (DRD) Figure 5.2. Example DMN boxed expression for isAdult decision Figure 5.3. Example DMN data types This example DMN model consists of a basic DMN input node and a decision node defined by a DMN decision table with a custom structured data type. In VS Code, you can add the Red Hat Business Automation Bundle VS Code extension to design the decision requirements diagram (DRD), boxed expression, and data types with the DMN modeler. 
To create this example DMN model quickly, you can copy the following PersonDecisions.dmn file content: Example DMN file <dmn:definitions xmlns:dmn="http://www.omg.org/spec/DMN/20180521/MODEL/" xmlns="https://kiegroup.org/dmn/_52CEF9FD-9943-4A89-96D5-6F66810CA4C1" xmlns:di="http://www.omg.org/spec/DMN/20180521/DI/" xmlns:kie="http://www.drools.org/kie/dmn/1.2" xmlns:dmndi="http://www.omg.org/spec/DMN/20180521/DMNDI/" xmlns:dc="http://www.omg.org/spec/DMN/20180521/DC/" xmlns:feel="http://www.omg.org/spec/DMN/20180521/FEEL/" id="_84B432F5-87E7-43B1-9101-1BAFE3D18FC5" name="PersonDecisions" typeLanguage="http://www.omg.org/spec/DMN/20180521/FEEL/" namespace="https://kiegroup.org/dmn/_52CEF9FD-9943-4A89-96D5-6F66810CA4C1"> <dmn:extensionElements/> <dmn:itemDefinition id="_DEF2C3A7-F3A9-4ABA-8D0A-C823E4EB43AB" name="tPerson" isCollection="false"> <dmn:itemComponent id="_DB46DB27-0752-433F-ABE3-FC9E3BDECC97" name="Age" isCollection="false"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_8C6D865F-E9C8-43B0-AB4D-3F2075A4ECA6" name="Name" isCollection="false"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id="_9033704B-4E1C-42D3-AC5E-0D94107303A1" name="Adult" isCollection="false"> <dmn:typeRef>boolean</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:inputData id="_F9685B74-0C69-4982-B3B6-B04A14D79EDB" name="Person"> <dmn:extensionElements/> <dmn:variable id="_0E345A3C-BB1F-4FB2-B00F-C5691FD1D36C" name="Person" typeRef="tPerson"/> </dmn:inputData> <dmn:decision id="_0D2BD7A9-ACA1-49BE-97AD-19699E0C9852" name="isAdult"> <dmn:extensionElements/> <dmn:variable id="_54CD509F-452F-40E5-941C-AFB2667D4D45" name="isAdult" typeRef="boolean"/> <dmn:informationRequirement id="_2F819B03-36B7-4DEB-AED6-2B46AE3ADB75"> <dmn:requiredInput href="#_F9685B74-0C69-4982-B3B6-B04A14D79EDB"/> </dmn:informationRequirement> <dmn:decisionTable id="_58370567-05DE-4EC0-AC2D-A23803C1EAAE" hitPolicy="UNIQUE" preferredOrientation="Rule-as-Row"> <dmn:input id="_ADEF36CD-286A-454A-ABD8-9CF96014021B"> <dmn:inputExpression id="_4930C2E5-7401-46DD-8329-EAC523BFA492" typeRef="number"> <dmn:text>Person.Age</dmn:text> </dmn:inputExpression> </dmn:input> <dmn:output id="_9867E9A3-CBF6-4D66-9804-D2206F6B4F86" typeRef="boolean"/> <dmn:rule id="_59D6BFF0-35B4-4B7E-8D7B-E31CB0DB8242"> <dmn:inputEntry id="_7DC55D63-234F-497B-A12A-93DA358C0136"> <dmn:text>> 18</dmn:text> </dmn:inputEntry> <dmn:outputEntry id="_B3BB5B97-05B9-464A-AB39-58A33A9C7C00"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id="_8FCD63FE-8AD8-4F56-AD12-923E87AFD1B1"> <dmn:inputEntry id="_B4EF7F13-E486-46CB-B14E-1D21647258D9"> <dmn:text><= 18</dmn:text> </dmn:inputEntry> <dmn:outputEntry id="_F3A9EC8E-A96B-42A0-BF87-9FB1F2FDB15A"> <dmn:text>false</dmn:text> </dmn:outputEntry> </dmn:rule> </dmn:decisionTable> </dmn:decision> <dmndi:DMNDI> <dmndi:DMNDiagram> <di:extension> <kie:ComponentsWidthsExtension> <kie:ComponentWidths dmnElementRef="_58370567-05DE-4EC0-AC2D-A23803C1EAAE"> <kie:width>50</kie:width> <kie:width>100</kie:width> <kie:width>100</kie:width> <kie:width>100</kie:width> </kie:ComponentWidths> </kie:ComponentsWidthsExtension> </di:extension> <dmndi:DMNShape id="dmnshape-_F9685B74-0C69-4982-B3B6-B04A14D79EDB" dmnElementRef="_F9685B74-0C69-4982-B3B6-B04A14D79EDB" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds 
x="404" y="464" width="100" height="50"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id="dmnshape-_0D2BD7A9-ACA1-49BE-97AD-19699E0C9852" dmnElementRef="_0D2BD7A9-ACA1-49BE-97AD-19699E0C9852" isCollapsed="false"> <dmndi:DMNStyle> <dmndi:FillColor red="255" green="255" blue="255"/> <dmndi:StrokeColor red="0" green="0" blue="0"/> <dmndi:FontColor red="0" green="0" blue="0"/> </dmndi:DMNStyle> <dc:Bounds x="404" y="311" width="100" height="50"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNEdge id="dmnedge-_2F819B03-36B7-4DEB-AED6-2B46AE3ADB75" dmnElementRef="_2F819B03-36B7-4DEB-AED6-2B46AE3ADB75"> <di:waypoint x="504" y="489"/> <di:waypoint x="404" y="336"/> </dmndi:DMNEdge> </dmndi:DMNDiagram> </dmndi:DMNDI> </dmn:definitions> To create this example DMN model in VS Code using the DMN modeler, follow these steps: Open the empty PersonDecisions.dmn file, click the Properties icon in the upper-right corner of the DMN modeler, and confirm that the DMN model Name is set to PersonDecisions . In the left palette, select DMN Input Data , drag the node to the canvas, and double-click the node to name it Person . In the left palette, drag the DMN Decision node to the canvas, double-click the node to name it isAdult , and link to it from the input node. Select the decision node to display the node options and click the Edit icon to open the DMN boxed expression editor to define the decision logic for the node. Click the undefined expression field and select Decision Table . Click the upper-left corner of the decision table to set the hit policy to Unique . Set the input and output columns so that the input source Person.Age with type number determines the age limit and the output target isAdult with type boolean determines adult status: Figure 5.4. Example DMN decision table for isAdult decision In the upper tab options, select the Data Types tab and add the following tPerson structured data type and nested data types: Figure 5.5. Example DMN data types After you define the data types, select the Editor tab to return to the DMN modeler canvas. Select the Person input node, click the Properties icon, and under Information item , set the Data type to tPerson . Select the isAdult decision node, click the Properties icon, and under Information item , confirm that the Data type is still set to boolean . You previously set this data type when you created the decision table. Save the DMN decision file. 5.1. Using DRL rule units as an alternative decision service You can also use a Drools Rule Language (DRL) file implemented as a rule unit to define this example decision service, as an alternative to using Decision Model and Notation (DMN). A DRL rule unit is a module for rules and a unit of execution. A rule unit collects a set of rules with the declaration of the type of facts that the rules act on. A rule unit also serves as a unique namespace for each group of rules. A single rule base can contain multiple rule units. You typically store all the rules for a unit in the same file as the unit declaration so that the unit is self-contained. For more information about rule units, see Designing a decision service using DRL rules . Prerequisites You have created a project. For more information about creating a Maven project, see Chapter 3, Creating a Maven project for a Red Hat build of Kogito microservice . 
Procedure In the src/main/resources folder of your example project, instead of using a DMN file, add the following PersonRules.drl file: Example PersonRules DRL file This example rule determines that any person who is older than 18 is classified as an adult. The rule file also declares that the rule belongs to the rule unit PersonRules . When you build the project, the rule unit is generated and associated with the DRL file. The rule also defines the condition using OOPath notation. OOPath is an object-oriented syntax extension to XPath for navigating through related elements while handling collections and filtering constraints. You can also rewrite the same rule condition in a more explicit form using the traditional rule pattern syntax, as shown in the following example: Example PersonRules DRL file using traditional notation
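Once the application is built and running, the generated /persons REST endpoint can be exercised with a request such as the following; this is only a sketch, and the port and the exact JSON payload shape are assumptions that depend on the generated service:
# Send a person to the /persons endpoint; the isAdult decision is evaluated for the supplied data
curl -X POST http://localhost:8080/persons \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d '{"person": {"name": "John Quark", "age": 20}}'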
|
[
"package org.acme; import java.io.Serializable; public class Person { private String name; private int age; private boolean adult; public String getName() { return name; } public void setName(String name) { this.name = name; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public boolean isAdult() { return adult; } public void setAdult(boolean adult) { this.adult = adult; } @Override public String toString() { return \"Person [name=\" + name + \", age=\" + age + \", adult=\" + adult + \"]\"; } }",
"<dmn:definitions xmlns:dmn=\"http://www.omg.org/spec/DMN/20180521/MODEL/\" xmlns=\"https://kiegroup.org/dmn/_52CEF9FD-9943-4A89-96D5-6F66810CA4C1\" xmlns:di=\"http://www.omg.org/spec/DMN/20180521/DI/\" xmlns:kie=\"http://www.drools.org/kie/dmn/1.2\" xmlns:dmndi=\"http://www.omg.org/spec/DMN/20180521/DMNDI/\" xmlns:dc=\"http://www.omg.org/spec/DMN/20180521/DC/\" xmlns:feel=\"http://www.omg.org/spec/DMN/20180521/FEEL/\" id=\"_84B432F5-87E7-43B1-9101-1BAFE3D18FC5\" name=\"PersonDecisions\" typeLanguage=\"http://www.omg.org/spec/DMN/20180521/FEEL/\" namespace=\"https://kiegroup.org/dmn/_52CEF9FD-9943-4A89-96D5-6F66810CA4C1\"> <dmn:extensionElements/> <dmn:itemDefinition id=\"_DEF2C3A7-F3A9-4ABA-8D0A-C823E4EB43AB\" name=\"tPerson\" isCollection=\"false\"> <dmn:itemComponent id=\"_DB46DB27-0752-433F-ABE3-FC9E3BDECC97\" name=\"Age\" isCollection=\"false\"> <dmn:typeRef>number</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_8C6D865F-E9C8-43B0-AB4D-3F2075A4ECA6\" name=\"Name\" isCollection=\"false\"> <dmn:typeRef>string</dmn:typeRef> </dmn:itemComponent> <dmn:itemComponent id=\"_9033704B-4E1C-42D3-AC5E-0D94107303A1\" name=\"Adult\" isCollection=\"false\"> <dmn:typeRef>boolean</dmn:typeRef> </dmn:itemComponent> </dmn:itemDefinition> <dmn:inputData id=\"_F9685B74-0C69-4982-B3B6-B04A14D79EDB\" name=\"Person\"> <dmn:extensionElements/> <dmn:variable id=\"_0E345A3C-BB1F-4FB2-B00F-C5691FD1D36C\" name=\"Person\" typeRef=\"tPerson\"/> </dmn:inputData> <dmn:decision id=\"_0D2BD7A9-ACA1-49BE-97AD-19699E0C9852\" name=\"isAdult\"> <dmn:extensionElements/> <dmn:variable id=\"_54CD509F-452F-40E5-941C-AFB2667D4D45\" name=\"isAdult\" typeRef=\"boolean\"/> <dmn:informationRequirement id=\"_2F819B03-36B7-4DEB-AED6-2B46AE3ADB75\"> <dmn:requiredInput href=\"#_F9685B74-0C69-4982-B3B6-B04A14D79EDB\"/> </dmn:informationRequirement> <dmn:decisionTable id=\"_58370567-05DE-4EC0-AC2D-A23803C1EAAE\" hitPolicy=\"UNIQUE\" preferredOrientation=\"Rule-as-Row\"> <dmn:input id=\"_ADEF36CD-286A-454A-ABD8-9CF96014021B\"> <dmn:inputExpression id=\"_4930C2E5-7401-46DD-8329-EAC523BFA492\" typeRef=\"number\"> <dmn:text>Person.Age</dmn:text> </dmn:inputExpression> </dmn:input> <dmn:output id=\"_9867E9A3-CBF6-4D66-9804-D2206F6B4F86\" typeRef=\"boolean\"/> <dmn:rule id=\"_59D6BFF0-35B4-4B7E-8D7B-E31CB0DB8242\"> <dmn:inputEntry id=\"_7DC55D63-234F-497B-A12A-93DA358C0136\"> <dmn:text>> 18</dmn:text> </dmn:inputEntry> <dmn:outputEntry id=\"_B3BB5B97-05B9-464A-AB39-58A33A9C7C00\"> <dmn:text>true</dmn:text> </dmn:outputEntry> </dmn:rule> <dmn:rule id=\"_8FCD63FE-8AD8-4F56-AD12-923E87AFD1B1\"> <dmn:inputEntry id=\"_B4EF7F13-E486-46CB-B14E-1D21647258D9\"> <dmn:text><= 18</dmn:text> </dmn:inputEntry> <dmn:outputEntry id=\"_F3A9EC8E-A96B-42A0-BF87-9FB1F2FDB15A\"> <dmn:text>false</dmn:text> </dmn:outputEntry> </dmn:rule> </dmn:decisionTable> </dmn:decision> <dmndi:DMNDI> <dmndi:DMNDiagram> <di:extension> <kie:ComponentsWidthsExtension> <kie:ComponentWidths dmnElementRef=\"_58370567-05DE-4EC0-AC2D-A23803C1EAAE\"> <kie:width>50</kie:width> <kie:width>100</kie:width> <kie:width>100</kie:width> <kie:width>100</kie:width> </kie:ComponentWidths> </kie:ComponentsWidthsExtension> </di:extension> <dmndi:DMNShape id=\"dmnshape-_F9685B74-0C69-4982-B3B6-B04A14D79EDB\" dmnElementRef=\"_F9685B74-0C69-4982-B3B6-B04A14D79EDB\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> 
<dc:Bounds x=\"404\" y=\"464\" width=\"100\" height=\"50\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNShape id=\"dmnshape-_0D2BD7A9-ACA1-49BE-97AD-19699E0C9852\" dmnElementRef=\"_0D2BD7A9-ACA1-49BE-97AD-19699E0C9852\" isCollapsed=\"false\"> <dmndi:DMNStyle> <dmndi:FillColor red=\"255\" green=\"255\" blue=\"255\"/> <dmndi:StrokeColor red=\"0\" green=\"0\" blue=\"0\"/> <dmndi:FontColor red=\"0\" green=\"0\" blue=\"0\"/> </dmndi:DMNStyle> <dc:Bounds x=\"404\" y=\"311\" width=\"100\" height=\"50\"/> <dmndi:DMNLabel/> </dmndi:DMNShape> <dmndi:DMNEdge id=\"dmnedge-_2F819B03-36B7-4DEB-AED6-2B46AE3ADB75\" dmnElementRef=\"_2F819B03-36B7-4DEB-AED6-2B46AE3ADB75\"> <di:waypoint x=\"504\" y=\"489\"/> <di:waypoint x=\"404\" y=\"336\"/> </dmndi:DMNEdge> </dmndi:DMNDiagram> </dmndi:DMNDI> </dmn:definitions>",
"package org.acme unit PersonRules; import org.acme.Person; rule isAdult when USDperson: /person[ age > 18 ] then modify(USDperson) { setAdult(true) }; end query persons USDp : /person[ adult ] end",
"package org.acme unit PersonRules; import org.acme.Person; rule isAdult when USDperson: Person(age > 18) from person then modify(USDperson) { setAdult(true) }; end query persons USDp : /person[ adult ] end"
] |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/proc-kogito-designing-app-dmn_getting-started-kogito-microservices
|
Chapter 3. Using Fence Agents Remediation
|
Chapter 3. Using Fence Agents Remediation You can use the Fence Agents Remediation Operator to automatically remediate unhealthy nodes, similar to the Self Node Remediation Operator. FAR is designed to run an existing set of upstream fencing agents on environments with a traditional API endpoint, for example, IPMI, for power cycling cluster nodes, while their pods are quickly evicted based on the remediation strategy . 3.1. About the Fence Agents Remediation Operator The Fence Agents Remediation (FAR) Operator uses external tools to fence unhealthy nodes. These tools are a set of fence agents; each fence agent targets a different environment and fences a node by using a traditional Application Programming Interface (API) call that reboots the node. By doing so, FAR can minimize downtime for stateful applications, restore compute capacity if transient failures occur, and increase the availability of workloads. FAR not only fences a node when it becomes unhealthy, but also tries to return the node to a healthy state. It adds a taint to evict stateless pods, fences the node with a fence agent, and after a reboot, it completes the remediation with resource deletion to remove any remaining workloads (mostly stateful workloads). Adding the taint and deleting the workloads accelerates the workload rescheduling. The Operator watches for new or deleted custom resources (CRs) called FenceAgentsRemediation , which trigger a fence agent to remediate a node, based on the CR's name. FAR uses the NodeHealthCheck controller to detect the health of a node in the cluster. When a node is identified as unhealthy, the NodeHealthCheck resource creates the FenceAgentsRemediation CR, based on the FenceAgentsRemediationTemplate CR, which then triggers the Fence Agents Remediation Operator. FAR uses a fence agent to fence a Kubernetes node. Generally, fencing is the process of placing an unresponsive or unhealthy computer into a safe state and isolating it. A fence agent is software that uses a management interface to perform fencing, most commonly power-based fencing, which can power cycle, reset, or turn off the computer. An example fence agent is fence_ipmilan , which is used for Intelligent Platform Management Interface (IPMI) environments. apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediation metadata: name: node-name 1 namespace: openshift-workload-availability spec: remediationStrategy: <remediation_strategy> 2 1 The node-name should match the name of the unhealthy cluster node. 2 Specifies the remediation strategy for the nodes. For more information on the remediation strategies available, see the Understanding the Fence Agents Remediation Template configuration topic. The Operator includes a set of fence agents, also available in the Red Hat High Availability Add-On, that use a management interface, such as IPMI or an API, to provision or reboot a node on bare metal servers, virtual machines, and cloud platforms. 3.2. Installing the Fence Agents Remediation Operator by using the web console You can use the Red Hat OpenShift web console to install the Fence Agents Remediation Operator. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the Red Hat OpenShift web console, navigate to Operators OperatorHub . Select the Fence Agents Remediation Operator, or FAR, from the list of available Operators, and then click Install .
Keep the default selection of Installation mode and namespace to ensure that the Operator is installed to the openshift-workload-availability namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-workload-availability namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the log of the fence-agents-remediation-controller-manager pod for any reported issues. 3.3. Installing the Fence Agents Remediation Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the Fence Agents Remediation Operator. You can install the Fence Agents Remediation Operator in your own namespace or in the openshift-workload-availability namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a Namespace custom resource (CR) for the Fence Agents Remediation Operator: Define the Namespace CR and save the YAML file, for example, workload-availability-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability To create the Namespace CR, run the following command: USD oc create -f workload-availability-namespace.yaml Create an OperatorGroup CR: Define the OperatorGroup CR and save the YAML file, for example, workload-availability-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability To create the OperatorGroup CR, run the following command: USD oc create -f workload-availability-operator-group.yaml Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, fence-agents-remediation-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: fence-agents-remediation-subscription namespace: openshift-workload-availability 1 spec: channel: stable name: fence-agents-remediation source: redhat-operators sourceNamespace: openshift-marketplace package: fence-agents-remediation 1 Specify the Namespace where you want to install the Fence Agents Remediation Operator, for example, the openshift-workload-availability outlined earlier in this procedure. You can install the Subscription CR for the Fence Agents Remediation Operator in the openshift-workload-availability namespace where there is already a matching OperatorGroup CR. To create the Subscription CR, run the following command: USD oc create -f fence-agents-remediation-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-workload-availability Example output NAME DISPLAY VERSION REPLACES PHASE fence-agents-remediation.v0.3.0 Fence Agents Remediation Operator 0.3.0 fence-agents-remediation.v0.2.1 Succeeded Verify that the Fence Agents Remediation Operator is up and running: USD oc get deployment -n openshift-workload-availability Example output NAME READY UP-TO-DATE AVAILABLE AGE fence-agents-remediation-controller-manager 2/2 2 2 110m 3.4. Configuring the Fence Agents Remediation Operator You can use the Fence Agents Remediation Operator to create the FenceAgentsRemediationTemplate Custom Resource (CR), which is used by the Node Health Check Operator (NHC). 
This CR defines the fence agent to be used in the cluster with all the required parameters for remediating the nodes. There can be many FenceAgentsRemediationTemplate CRs, at most one for each fence agent. When NHC is being used, it can choose a FenceAgentsRemediationTemplate as the remediationTemplate to be used for power-cycling the node. The FenceAgentsRemediationTemplate CR resembles the following YAML file: apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediationTemplate metadata: name: fence-agents-remediation-template-fence-ipmilan namespace: openshift-workload-availability spec: template: spec: agent: fence_ipmilan 1 nodeparameters: 2 --ipport: master-0-0: '6230' master-0-1: '6231' master-0-2: '6232' worker-0-0: '6233' worker-0-1: '6234' worker-0-2: '6235' sharedparameters: 3 '--action': reboot '--ip': 192.168.123.1 '--lanplus': '' '--password': password '--username': admin retryCount: '5' 4 retryInterval: '5' 5 timeout: '60' 6 1 Displays the name of the fence agent to be executed, for example, fence_ipmilan . 2 Displays the node-specific parameters for executing the fence agent, for example, ipport . 3 Displays the cluster-wide parameters for executing the fence agent, for example, username . 4 Displays the number of times to retry the fence agent command in case of failure. The default number of attempts is 5. 5 Displays the interval between retries in seconds. The default is 5 seconds. 6 Displays the timeout for the fence agent command in seconds. The default is 60 seconds. 3.4.1. Understanding the Fence Agents Remediation Template configuration The Fence Agents Remediation Operator also creates the FenceAgentsRemediationTemplate Custom Resource Definition (CRD). This CRD defines the remediation strategy for the nodes, which aims to recover workloads faster. The following remediation strategies are available: ResourceDeletion This remediation strategy removes the pods on the node. This strategy recovers workloads faster. OutOfServiceTaint This remediation strategy implicitly causes the removal of the pods and associated volume attachments on the node. It achieves this by placing the OutOfServiceTaint taint on the node. The OutOfServiceTaint strategy also represents a non-graceful node shutdown. A non-graceful node shutdown occurs when a node is shut down without the shutdown being detected, rather than through an in-operating-system shutdown. This strategy has been available as a Technology Preview feature since OpenShift Container Platform version 4.13, and as a generally available feature since OpenShift Container Platform version 4.15. The FenceAgentsRemediationTemplate CR resembles the following YAML file: apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediationTemplate metadata: name: fence-agents-remediation-<remediation_object>-deletion-template 1 namespace: openshift-workload-availability spec: template: spec: remediationStrategy: <remediation_strategy> 2 1 Specifies the type of remediation template based on the remediation strategy. Replace <remediation_object> with either resource or taint ; for example, fence-agents-remediation-resource-deletion-template . 2 Specifies the remediation strategy. The remediation strategy can either be ResourceDeletion or OutOfServiceTaint . 3.5. Troubleshooting the Fence Agents Remediation Operator 3.5.1. General troubleshooting Issue You want to troubleshoot issues with the Fence Agents Remediation Operator. Resolution Check the Operator logs.
USD oc logs <fence-agents-remediation-controller-manager-name> -c manager -n <namespace-name> 3.5.2. Unsuccessful remediation Issue An unhealthy node was not remediated. Resolution Verify that the FenceAgentsRemediation CR was created by running the following command: USD oc get far -A If the NodeHealthCheck controller did not create the FenceAgentsRemediation CR when the node turned unhealthy, check the logs of the NodeHealthCheck controller. Additionally, ensure that the NodeHealthCheck CR includes the required specification to use the remediation template. If the FenceAgentsRemediation CR was created, ensure that its name matches the unhealthy node object. 3.5.3. Fence Agents Remediation Operator resources exist after uninstalling the Operator Issue The Fence Agents Remediation Operator resources, such as the remediation CR and the remediation template CR, exist after uninstalling the Operator. Resolution To remove the Fence Agents Remediation Operator resources, you can delete the resources by selecting the "Delete all operand instances for this operator" checkbox before uninstalling. This checkbox feature is only available in Red Hat OpenShift version 4.13 and later. For all versions of Red Hat OpenShift, you can delete the resources by running the following relevant command for each resource type: USD oc delete far <fence-agents-remediation> -n <namespace> USD oc delete fartemplate <fence-agents-remediation-template> -n <namespace> The remediation CR far must be created and deleted by the same entity, for example, NHC. If the remediation CR far is still present, it is deleted, together with the FAR operator. The remediation template CR fartemplate only exists if you use FAR with NHC. When the FAR operator is deleted using the web console, the remediation template CR fartemplate is also deleted. 3.6. Gathering data about the Fence Agents Remediation Operator To collect debugging information about the Fence Agents Remediation Operator, use the must-gather tool. For information about the must-gather image for the Fence Agents Remediation Operator, see Gathering data about specific features . 3.7. Agents supported by the Fence Agents Remediation Operator This section describes the agents currently supported by the Fence Agents Remediation Operator. Most of the supported agents can be grouped by the node's hardware vendor and usage, as follows: BareMetal Virtualization Intel HP IBM VMware Cisco APC Dell Other Table 3.1. BareMetal - Using the Redfish management interface is recommended, unless it is not supported. Agent Description fence_redfish An I/O Fencing agent that can be used with Out-of-Band controllers that support Redfish APIs. fence_ipmilan [a] An I/O Fencing agent that can be used with machines controlled by IPMI . [a] This description also applies for the agents fence_ilo3 , fence_ilo4 , fence_ilo5 , fence_imm , fence_idrac , and fence_ipmilanplus . Table 3.2. Virtualization Agent Description fence_rhevm An I/O Fencing agent that can be used with RHEV-M REST API to fence virtual machines. Table 3.3. Intel Agent Description fence_amt_ws An I/O Fencing agent that can be used with Intel AMT (WS). fence_intelmodular An I/O Fencing agent that can be used with Intel Modular device (tested on Intel MFSYS25, should also work with MFSYS35). Table 3.4. HP - agents for the iLO management interface or BladeSystem. Agent Description fence_ilo [a] An I/O Fencing agent that can be used for HP servers with the Integrated Light Out ( iLO ) PCI card.
[a] This description also applies for the agent fence_ilo2 . Table 3.5. IBM Agent Description fence_ibmblade An I/O Fencing agent that can be used with IBM BladeCenter chassis. fence_ipdu An I/O Fencing agent that can be used with the IBM iPDU network power switch. Table 3.6. VMware Agent Description fence_vmware_rest An I/O Fencing agent that can be used with VMware API to fence virtual machines. fence_vmware_soap An I/O Fencing agent that can be used with the virtual machines managed by VMWare products that have SOAP API v4.1+. Table 3.7. Cisco Agent Description fence_cisco_mds An I/O Fencing agent that can be used with any Cisco MDS 9000 series with SNMP enabled device. fence_cisco_ucs An I/O Fencing agent that can be used with Cisco UCS to fence machines. Table 3.8. APC Agent Description fence_apc_snmp [a] An I/O Fencing agent that can be used with the APC network power switch or Tripplite PDU devices. [a] This description also applies for the fence_tripplite_snmp agent. Table 3.9. Other - agents for usage not listed in the tables. Agent Description fence_compute A resource that can be used to tell Nova that compute nodes are down and to reschedule flagged instances. fence_eaton_snmp An I/O Fencing agent that can be used with the Eaton network power switch. fence_emerson An I/O Fencing agent that can be used with MPX and MPH2 managed rack PDU. fence_epsr2 [a] An I/O Fencing agent that can be used with the ePowerSwitch 8M+ power switch to fence connected machines. fence_evacuate A resource that can be used to reschedule flagged instances. fence_heuristics_ping A resource that can be used with ping-heuristics to control execution of another fence agent on the same fencing level. fence_ifmib An I/O Fencing agent that can be used with any SNMP IF-MIB capable device. fence_kdump An I/O Fencing agent that can be used with the kdump crash recovery service. fence_mpath An I/O Fencing agent that can be used with SCSI-3 persistent reservations to control access to multipath devices. fence_sbd An I/O Fencing agent that can be used in environments where sbd can be used (shared storage). fence_scsi An I/O Fencing agent that can be used with SCSI-3 persistent reservations to control access to shared storage devices. [a] This description also applies for the fence_eps agent. 3.8. Additional resources Using Operator Lifecycle Manager on restricted networks . Deleting Operators from a cluster
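For additional context, the FenceAgentsRemediation CR is normally created for you by the NodeHealthCheck controller rather than by hand. The following is a minimal, illustrative sketch of a NodeHealthCheck resource that delegates remediation to a FenceAgentsRemediationTemplate; the resource name, selector, and unhealthy conditions are assumptions and must be adapted to your cluster.

apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: nhc-far-example                 # illustrative name
spec:
  remediationTemplate:                  # hand remediation off to FAR
    apiVersion: fence-agents-remediation.medik8s.io/v1alpha1
    kind: FenceAgentsRemediationTemplate
    name: fence-agents-remediation-template-fence-ipmilan
    namespace: openshift-workload-availability
  selector:                             # which nodes this check watches
    matchExpressions:
    - key: node-role.kubernetes.io/worker
      operator: Exists
  unhealthyConditions:                  # when a node counts as unhealthy
  - type: Ready
    status: "False"
    duration: 300s

When a watched node matches an unhealthy condition for the configured duration, the controller creates a FenceAgentsRemediation CR named after that node, which is what triggers the fence agent described in this chapter.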
|
[
"apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediation metadata: name: node-name 1 namespace: openshift-workload-availability spec: remediationStrategy: <remediation_strategy> 2",
"apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability",
"oc create -f workload-availability-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability",
"oc create -f workload-availability-operator-group.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: fence-agents-remediation-subscription namespace: openshift-workload-availability 1 spec: channel: stable name: fence-agents-remediation source: redhat-operators sourceNamespace: openshift-marketplace package: fence-agents-remediation",
"oc create -f fence-agents-remediation-subscription.yaml",
"oc get csv -n openshift-workload-availability",
"NAME DISPLAY VERSION REPLACES PHASE fence-agents-remediation.v0.3.0 Fence Agents Remediation Operator 0.3.0 fence-agents-remediation.v0.2.1 Succeeded",
"oc get deployment -n openshift-workload-availability",
"NAME READY UP-TO-DATE AVAILABLE AGE fence-agents-remediation-controller-manager 2/2 2 2 110m",
"apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediationTemplate metadata: name: fence-agents-remediation-template-fence-ipmilan namespace: openshift-workload-availability spec: template: spec: agent: fence_ipmilan 1 nodeparameters: 2 --ipport: master-0-0: '6230' master-0-1: '6231' master-0-2: '6232' worker-0-0: '6233' worker-0-1: '6234' worker-0-2: '6235' sharedparameters: 3 '--action': reboot '--ip': 192.168.123.1 '--lanplus': '' '--password': password '--username': admin retryCount: '5' 4 retryInterval: '5' 5 timeout: '60' 6",
"apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediationTemplate metadata: name: fence-agents-remediation-<remediation_object>-deletion-template 1 namespace: openshift-workload-availability spec: template: spec: remediationStrategy: <remediation_strategy> 2",
"oc logs <fence-agents-remediation-controller-manager-name> -c manager -n <namespace-name>",
"oc get far -A",
"oc delete far <fence-agents-remediation> -n <namespace>",
"oc delete fartemplate <fence-agents-remediation-template> -n <namespace>"
] |
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.4/html/remediation_fencing_and_maintenance/fence-agents-remediation-operator-remediate-nodes
|
Red Hat JBoss EAP XP 4.0.0 Release Notes
|
Red Hat JBoss EAP XP 4.0.0 Release Notes Red Hat JBoss Enterprise Application Platform 7.4 For Use with JBoss EAP XP 4.0.0 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/red_hat_jboss_eap_xp_4.0.0_release_notes/index
|
13.4. Disabling Command-Line Access
|
13.4. Disabling Command-Line Access To disable command-line access for your desktop user, you need to make configuration changes in a number of different contexts. Bear in mind that the following steps do not remove the desktop user's permissions to access a command line, but rather remove the ways that the desktop user could access the command line. Set the org.gnome.desktop.lockdown.disable-command-line GSettings key, which prevents the user from accessing the terminal or specifying a command line to be executed (the Alt + F2 command prompt). Disable switching to virtual terminals (VTs) with the Ctrl + Alt + function key shortcuts by modifying the X server configuration. Remove Terminal and any other application that provides access to the terminal from the Applications menu and Activities Overview in GNOME Shell. This is done by removing menu items for those applications. For detailed information on how to remove a menu item, see Section 12.1.2, "Removing a Menu Item for All Users" . 13.4.1. Setting the org.gnome.desktop.lockdown.disable-command-line Key Create a local database for machine-wide settings in /etc/dconf/db/local.d/00-lockdown : Override the user's setting and prevent the user from changing it in /etc/dconf/db/local.d/locks/lockdown : Update the system databases: Users must log out and back in again before the system-wide settings take effect. 13.4.2. Disabling Virtual Terminal Switching Users can normally use the Ctrl + Alt + function key shortcuts (for example Ctrl + Alt + F2 ) to switch from the GNOME Desktop and X server to a virtual terminal. You can disable access to all virtual terminals by adding a DontVTSwitch option to the Serverflags section in an X configuration file in the /etc/X11/xorg.conf.d/ directory. Procedure 13.4. Disabling Access to Virtual Terminals Create or edit an X configuration file in the /etc/X11/xorg.conf.d/ directory: Note By convention, these host-specific configuration file names start with two digits and a hyphen and always have the .conf extension. Thus, the file name can be, for example, /etc/X11/xorg.conf.d/10-xorg.conf . Section "Serverflags" Option "DontVTSwitch" "yes" EndSection Restart the X server for your changes to take effect.
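For reference, the dconf settings and lock described in Section 13.4.1 are plain-text keyfiles. A minimal sketch of the two files might look as follows; note that the settings file needs the [org/gnome/desktop/lockdown] group header, and the lock file lists the key path to lock.

# /etc/dconf/db/local.d/00-lockdown
[org/gnome/desktop/lockdown]
# Disable command-line access
disable-command-line=true

# /etc/dconf/db/local.d/locks/lockdown
# Lock the disabled command-line access
/org/gnome/desktop/lockdown/disable-command-line

After creating or editing both files, run the dconf update command as root so that the changes are compiled into the system databases.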
|
[
"Disable command-line access disable-command-line=true",
"Lock the disabled command-line access /org/gnome/desktop/lockdown",
"dconf update",
"Section \"Serverflags\" Option \"DontVTSwitch\" \"yes\" EndSection"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/disable-command-line-access
|
Chapter 2. Understanding disconnected installation mirroring
|
Chapter 2. Understanding disconnected installation mirroring You can use a mirror registry for disconnected installations and to ensure that your clusters only use container images that satisfy your organization's controls on external content. Before you install a cluster on infrastructure that you provision in a disconnected environment, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 2.1. Mirroring images for a disconnected installation through the Agent-based Installer You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plugin 2.2. About mirroring the OpenShift Container Platform image repository for a disconnected registry To use mirror images for a disconnected installation with the Agent-based Installer, you must modify the install-config.yaml file. You can mirror the release image by using the output of either the oc adm release mirror or oc mirror command. This is dependent on which command you used to set up the mirror registry. The following example shows the output of the oc adm release mirror command. USD oc adm release mirror Example output To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release The following example shows part of the imageContentSourcePolicy.yaml file generated by the oc-mirror plugin. The file can be found in the results directory, for example oc-mirror-workspace/results-1682697932/ . Example imageContentSourcePolicy.yaml file spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release 2.2.1. Configuring the Agent-based Installer to use mirrored images You must use the output of either the oc adm release mirror command or the oc-mirror plugin to configure the Agent-based Installer to use mirrored images. Procedure If you used the oc-mirror plugin to mirror your release images: Open the imageContentSourcePolicy.yaml located in the results directory, for example oc-mirror-workspace/results-1682697932/ . Copy the text in the repositoryDigestMirrors section of the yaml file. If you used the oc adm release mirror command to mirror your release images: Copy the text in the imageContentSources section of the command output. Paste the copied text into the imageContentSources field of the install-config.yaml file. Add the certificate file used for the mirror registry to the additionalTrustBundle field of the yaml file. Important The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. 
Example install-config.yaml file additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- If you are using GitOps ZTP manifests, add the registries.conf and ca-bundle.crt files to the mirror path to include the mirror configuration in the agent ISO image. Note You can create the registries.conf file from the output of either the oc adm release mirror command or the oc-mirror plugin. The format of the /etc/containers/registries.conf file has changed. It is now version 2 and in TOML format. Example registries.conf file [[registry]] location = "registry.ci.openshift.org/ocp/release" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" [[registry]] location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image"
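Putting the pieces together, the mirror-related portion of a completed install-config.yaml file might resemble the following sketch. The registry host names are taken from the examples above, and the certificate contents are a placeholder for your own mirror registry certificate.

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <certificate_contents>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: registry.ci.openshift.org/ocp/release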
|
[
"oc adm release mirror",
"To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release",
"spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\""
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_an_on-premise_cluster_with_the_agent-based_installer/understanding-disconnected-installation-mirroring
|
Preface
|
Preface You can install Red Hat Developer Hub on Amazon Elastic Kubernetes Service (EKS) using one of the following methods: The Red Hat Developer Hub Operator The Red Hat Developer Hub Helm chart
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_amazon_elastic_kubernetes_service/pr01
|
Chapter 7. Creating pre-hardened images with RHEL image builder OpenSCAP integration
|
Chapter 7. Creating pre-hardened images with RHEL image builder OpenSCAP integration With the RHEL image builder on-premise support for the OpenSCAP integration, you can create customized blueprints with specific security profiles, and use the blueprints to build your pre-hardened images. You can then use this pre-hardened image to deploy systems that need to be compliant with a specific profile. You can add a set of packages or add-on files to customize your blueprints. With that, you can build a pre-hardened customized RHEL image ready to deploy compliant systems. During the image build process, an OSBuild oscap.remediation stage runs the OpenSCAP tool in the chroot environment, on the filesystem tree. The OpenSCAP tool runs the standard evaluation for the profile you choose and applies the remediations to the image. With this, you can build an image that can be configured according to the security profile requirements even before it boots for the first time. Red Hat provides regularly updated versions of the security hardening profiles that you can choose when you build your systems so that you can meet your current deployment guidelines. 7.1. The OpenSCAP blueprint customization With the OpenSCAP support for blueprint customization, you can generate blueprints from the scap-security-guide content for specific security profiles and then use the blueprints to build your own pre-hardened images. Creating a customized blueprint with OpenSCAP involves the following high-level steps: Modify the mount points and configure the file system layout according to your specific requirements. In the blueprint, select the OpenSCAP profile. This configures the image to trigger the remediation during the image build in accordance with the selected profile. Also during the image build, OpenSCAP applies a pre-first-boot remediation. To use the OpenSCAP blueprint customization in your image blueprints, you need to provide the following information: The data stream path to the data stream remediation instructions. The data stream files from the scap-security-guide package are located in the /usr/share/xml/scap/ssg/content/ directory. The profile_id of the required security profile. The value of the profile_id field accepts both the long and short forms, for example, the following are acceptable: cis or xccdf_org.ssgproject.content_profile_cis . See SCAP Security Guide profiles supported in RHEL 9 for more details. The following example is a snippet with the OpenSCAP remediation stage: You can find more details about the SCAP source data stream from the scap-security-guide package, including the list of security profiles it provides, by using the command: For your convenience, the OpenSCAP tool can generate the hardening blueprint for any profile available in scap-security-guide data streams. For example, the command: generates a blueprint for the CIS profile similar to: Note Do not use this exact blueprint snippet for image hardening. It does not reflect a complete profile. As Red Hat constantly updates and refines security requirements for each profile in the scap-security-guide package, it makes sense to always re-generate the initial template using the most up-to-date version of the data stream provided for your system. Now you can customize the blueprint or use it as it is to build an image. RHEL image builder generates the necessary configurations for the osbuild stage based on your blueprint customization. Additionally, RHEL image builder adds two packages to the image: openscap-scanner - the OpenSCAP tool.
scap-security-guide - the package which contains the remediation and evaluation instructions. Note The remediation stage uses the scap-security-guide package for the datastream because this package is installed on the image by default. If you want to use a different datastream, add the necessary package to the blueprint, and specify the path to the datastream in the oscap configuration. Additional resources SCAP Security Guide profiles supported in RHEL 9 7.2. Creating a pre-hardened image with RHEL image builder With the OpenSCAP and RHEL image builder integration, you can create images that are pre-hardened in compliance with a specific profile, and you can deploy them in a VM, or a bare-metal environment, for example. Prerequisites You are logged in as the root user or a user who is a member of the weldr group. The openscap and scap-security-guide packages are installed. Procedure Create a hardening blueprint in the TOML format, using the OpenSCAP tool and scap-security-guide content, and modify it if necessary: Replace <profileID> with the profile ID with which the system should comply, for example, cis . Push the blueprint to osbuild-composer by using the composer-cli tool: Start the build of the hardened image: Replace <image_type> with any image type, for example, qcow2 . After the image build is complete, you can use your pre-hardened image on your deployments. See Creating a virtual machine . Verification After you deploy your pre-hardened image, you can perform a configuration compliance scan to verify that the image is aligned with the selected security profile. Important Performing a configuration compliance scan does not guarantee the system is compliant. For more information, see Configuration compliance scanning . Additional resources Scanning the system for configuration compliance and vulnerabilities 7.3. Customizing a pre-hardened image with RHEL image builder You can customize a security profile by changing parameters in certain rules (for example, minimum password length), removing rules that you cover in a different way, and selecting additional rules to implement internal policies. You cannot define new rules by customizing a profile. When you build an image from that blueprint, it creates a tailoring file with a new tailoring profile ID and saves it to the image as /usr/share/xml/osbuild-oscap-tailoring/tailoring.xml . The new profile ID adds the _osbuild_tailoring suffix to the base ID. For example, if you tailor the CIS ( cis ) base profile, the profile ID is xccdf_org.ssgproject.content_profile_cis_osbuild_tailoring . Prerequisites You are logged in as the root user or a user who is a member of the weldr group. The openscap and scap-security-guide packages are installed. Procedure Create a hardening blueprint in the TOML format from a selected profile: Append the tailoring file to the blueprint. The tailoring customization affects only the default state of the selected or unselected rules in the profile on which the customization is based. It selects or unselects a rule in the profile, but does not change the state of other rules. Push the blueprint to osbuild-composer by using the composer-cli tool: Start the build of the hardened image: Replace <image_type> with any image type, for example, qcow2 . After the image build is complete, use your pre-hardened image on your deployments. Verification After you deploy your pre-hardened image, you can perform a configuration compliance scan to verify that the image is aligned with the selected security profile.
Important Performing a configuration compliance scan does not guarantee the system is compliant. For more information, see Configuration compliance scanning . Additional resources Scanning the system for configuration compliance and vulnerabilities
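As a sketch of the verification step, you can run an evaluation on the deployed system against the same profile that was used to harden the image. The data stream path and profile ID below assume the cis profile from the scap-security-guide package; for an image built from a tailored blueprint, add the --tailoring-file option and use the tailored profile ID instead.

# Run on the deployed system; requires the openscap-scanner and scap-security-guide packages
oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_cis \
    --results results.xml \
    --report report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml

# For an image built from a tailored blueprint (illustrative):
oscap xccdf eval \
    --tailoring-file /usr/share/xml/osbuild-oscap-tailoring/tailoring.xml \
    --profile xccdf_org.ssgproject.content_profile_cis_osbuild_tailoring \
    --report report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml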
|
[
"If you want to use the data stream from the 'scap-security-guide' package the 'datastream' key could be omitted. datastream = \"/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml\" profile_id = \"xccdf_org.ssgproject.content_profile_cis\"",
"oscap info /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml",
"oscap xccdf generate fix --profile=cis --fix-type=blueprint /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml",
"Blueprint for CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Server # Profile Description: This profile defines a baseline that aligns to the \"Level 2 - Server\" configuration from the Center for Internet Security(R) Red Hat Enterprise Linux 9 BenchmarkTM, v3.0.0, released 2023-10-30. This profile includes Center for Internet Security(R) Red Hat Enterprise Linux 9 CIS BenchmarksTM content. # Profile ID: xccdf_org.ssgproject.content_profile_cis Benchmark ID: xccdf_org.ssgproject.content_benchmark_RHEL-9 Benchmark Version: 0.1.74 XCCDF Version: 1.2 name = \"hardened_xccdf_org.ssgproject.content_profile_cis\" description = \"CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Server\" version = \"0.1.74\" [customizations.openscap] profile_id = \"xccdf_org.ssgproject.content_profile_cis\" If your hardening data stream is not part of the 'scap-security-guide' package provide the absolute path to it (from the root of the image filesystem). datastream = \"/usr/share/xml/scap/ssg/content/ssg-xxxxx-ds.xml\" [[customizations.filesystem]] mountpoint = \"/home\" size = 1073741824 [[customizations.filesystem]] mountpoint = \"/tmp\" size = 1073741824 [[customizations.filesystem]] mountpoint = \"/var\" size = 3221225472 [[customizations.filesystem]] mountpoint = \"/var/tmp\" size = 1073741824 [[packages]] name = \"aide\" version = \"*\" [[packages]] name = \"libselinux\" version = \"*\" [[packages]] name = \"audit\" version = \"*\" [customizations.kernel] append = \"audit_backlog_limit=8192 audit=1\" [customizations.services] enabled = [\"auditd\",\"crond\",\"firewalld\",\"systemd-journald\",\"rsyslog\"] disabled = [] masked = [\"nfs-server\",\"rpcbind\",\"autofs\",\"bluetooth\",\"nftables\"]",
"oscap xccdf generate fix --profile= <profileID> --fix-type= <blueprint_name> .toml /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml > cis.toml",
"composer-cli blueprints push <blueprint_name> .toml",
"composer-cli compose start <blueprint_name> <image_type>",
"oscap xccdf generate fix --profile= <profileID> --fix-type=blueprint /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml > <profileID>-tailored .toml",
"Blueprint for CIS Red Hat Enterprise Linux 9 Benchmark for Level 2 - Server [customizations.openscap.tailoring] selected = [ \"xccdf_org.ssgproject.content_bind_crypto_policy\" ] unselected = [ \"grub2_password\" ]",
"composer-cli blueprints push <blueprintProfileID> - tailored .toml",
"composer-cli compose start <blueprintProfileID> <image_type>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_a_customized_rhel_system_image/assembly_creating-pre-hardened-images-with-image-builder-openscap-integration_composing-a-customized-rhel-system-image
|
Assessing and Reporting Malware Signatures on RHEL Systems with FedRAMP
|
Assessing and Reporting Malware Signatures on RHEL Systems with FedRAMP Red Hat Insights 1-latest Know when systems in your RHEL infrastructure are exposed to malware risks Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_reporting_malware_signatures_on_rhel_systems_with_fedramp/index
|
Chapter 19. Configuring network plugins
|
Chapter 19. Configuring network plugins Director includes environment files that you can use when you configure third-party network plugins: 19.1. Fujitsu Converged Fabric (C-Fabric) You can enable the Fujitsu Converged Fabric (C-Fabric) plugin by using the environment file located at /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-cfab.yaml . Procedure Copy the environment file to your templates subdirectory: Edit the resource_registry to use an absolute path: Review the parameter_defaults in /home/stack/templates/neutron-ml2-fujitsu-cfab.yaml : NeutronFujitsuCfabAddress - The telnet IP address of the C-Fabric. (string) NeutronFujitsuCfabUserName - The C-Fabric username to use. (string) NeutronFujitsuCfabPassword - The password of the C-Fabric user account. (string) NeutronFujitsuCfabPhysicalNetworks - List of <physical_network>:<vfab_id> tuples that specify physical_network names and their corresponding vfab IDs. (comma_delimited_list) NeutronFujitsuCfabSharePprofile - Determines whether to share a C-Fabric pprofile among neutron ports that use the same VLAN ID. (boolean) NeutronFujitsuCfabPprofilePrefix - The prefix string for pprofile name. (string) NeutronFujitsuCfabSaveConfig - Determines whether to save the configuration. (boolean) To apply the template to your deployment, include the environment file in the openstack overcloud deploy command: 19.2. Fujitsu FOS Switch You can enable the Fujitsu FOS Switch plugin by using the environment file located at /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-fossw.yaml . Procedure Copy the environment file to your templates subdirectory: Edit the resource_registry to use an absolute path: Review the parameter_defaults in /home/stack/templates/neutron-ml2-fujitsu-fossw.yaml : NeutronFujitsuFosswIps - The IP addresses of all FOS switches. (comma_delimited_list) NeutronFujitsuFosswUserName - The FOS username to use. (string) NeutronFujitsuFosswPassword - The password of the FOS user account. (string) NeutronFujitsuFosswPort - The port number to use for the SSH connection. (number) NeutronFujitsuFosswTimeout - The timeout period of the SSH connection. (number) NeutronFujitsuFosswUdpDestPort - The port number of the VXLAN UDP destination on the FOS switches. (number) NeutronFujitsuFosswOvsdbVlanidRangeMin - The minimum VLAN ID in the range that is used for binding VNI and physical port. (number) NeutronFujitsuFosswOvsdbPort - The port number for the OVSDB server on the FOS switches. (number) To apply the template to your deployment, include the environment file in the openstack overcloud deploy command:
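As an illustration of the review step, the parameter_defaults section of the copied C-Fabric environment file might be filled in along the following lines. All values are placeholders for your own switch details, not defaults.

parameter_defaults:
  NeutronFujitsuCfabAddress: '192.0.2.10'           # telnet IP address of the C-Fabric
  NeutronFujitsuCfabUserName: 'admin'
  NeutronFujitsuCfabPassword: 'example-password'
  NeutronFujitsuCfabPhysicalNetworks: 'physnet1:1,physnet2:2'   # <physical_network>:<vfab_id> pairs
  NeutronFujitsuCfabSharePprofile: false
  NeutronFujitsuCfabPprofilePrefix: 'neutron-'
  NeutronFujitsuCfabSaveConfig: true

The FOS Switch environment file follows the same pattern, using the NeutronFujitsuFossw* parameters listed in Section 19.2.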
|
[
"cp /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-cfab.yaml /home/stack/templates/",
"resource_registry: OS::TripleO::Services::NeutronML2FujitsuCfab: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-plugin-ml2-fujitsu-cfab.yaml",
"openstack overcloud deploy --templates -e /home/stack/templates/neutron-ml2-fujitsu-cfab.yaml [OTHER OPTIONS]",
"cp /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-fossw.yaml /home/stack/templates/",
"resource_registry: OS::TripleO::Services::NeutronML2FujitsuFossw: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-plugin-ml2-fujitsu-fossw.yaml",
"openstack overcloud deploy --templates -e /home/stack/templates/neutron-ml2-fujitsu-fossw.yaml [OTHER OPTIONS]"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/assembly_configuring-network-plugins
|
Chapter 12. Provisioning cloud instances on Red Hat OpenStack Platform
|
Chapter 12. Provisioning cloud instances on Red Hat OpenStack Platform Red Hat OpenStack Platform provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud. It offers a massively scalable, fault-tolerant platform for the development of cloud-enabled workloads. In Satellite, you can interact with Red Hat OpenStack Platform REST API to create cloud instances and control their power management states. Prerequisites You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing content . Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing content . A Capsule Server managing a network in your OpenStack environment. For more information, see Configuring Networking in Provisioning hosts . An image added to OpenStack Image Storage (glance) service for image-based provisioning. For more information, see the Red Hat OpenStack Platform Instances and Images Guide . Additional resources You can configure Satellite to remove the associated virtual machine when you delete a host. For more information, see Section 2.22, "Removing a virtual machine upon host deletion" . 12.1. Adding a Red Hat OpenStack Platform connection to Satellite Server You can add Red Hat OpenStack Platform as a compute resource in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Click Create Compute Resource . In the Name field, enter a name for the new compute resource. From the Provider list, select RHEL OpenStack Platform . Optional: In the Description field, enter a description for the compute resource. In the URL field, enter the URL for the OpenStack Authentication keystone service's API at the tokens resource, such as http://openstack.example.com:5000/v2.0/tokens or http://openstack.example.com:5000/v3/auth/tokens . In the Username and Password fields, enter the user authentication for Satellite to access the environment. Optional: In the Project (Tenant) name field, enter the name of your tenant (v2) or project (v3) for Satellite Server to manage. In the User domain field, enter the user domain for v3 authentication. In the Project domain name field, enter the project domain name for v3 authentication. In the Project domain ID field, enter the project domain ID for v3 authentication. Optional: Select Allow external network as main network to use external networks as primary networks for hosts. Optional: Click Test Connection to verify that Satellite can connect to your compute resource. Click the Locations and Organizations tabs and verify that the location and organization that you want to use are set to your current context. Add any additional contexts that you want to these tabs. Click Submit to save the Red Hat OpenStack Platform connection. CLI procedure To create a compute resource, enter the hammer compute-resource create command: 12.2. Adding Red Hat OpenStack Platform images to Satellite Server To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Satellite Server. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click the name of the Red Hat OpenStack Platform connection. Click Create Image . 
In the Name field, enter a name for the image. From the Operating System list, select the base operating system of the image. From the Architecture list, select the operating system architecture. In the Username field, enter the SSH user name for image access. This is normally the root user. In the Password field, enter the SSH password for image access. From the Image list, select an image from the Red Hat OpenStack Platform compute resource. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the Red Hat OpenStack Platform server. 12.3. Adding Red Hat OpenStack Platform details to a compute profile Use this procedure to add Red Hat OpenStack Platform hardware settings to a compute profile. When you create a host on Red Hat OpenStack Platform using this compute profile, these settings are automatically populated. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles . In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile , enter a Name , and click Submit . Click the name of the Red Hat OpenStack Platform compute resource. From the Flavor list, select the hardware profile on Red Hat OpenStack Platform to use for the host. From the Availability zone list, select the target cluster to use within the Red Hat OpenStack Platform environment. From the Image list, select the image to use for image-based provisioning. From the Tenant list, select the tenant or project for the Red Hat OpenStack Platform instance. From the Security Group list, select the cloud-based access rules for ports and IP addresses. From the Internal network , select the private networks for the host to join. From the Floating IP network , select the external networks for the host to join and assign a floating IP address. From the Boot from volume , select whether a volume is created from the image. If not selected, the instance boots the image directly. In the New boot volume size (GB) field, enter the size, in GB, of the new boot volume. Click Submit to save the compute profile. CLI procedure Set Red Hat OpenStack Platform details to a compute profile: 12.4. Creating image-based hosts on Red Hat OpenStack Platform In Satellite, you can use Red Hat OpenStack Platform provisioning to create hosts from an existing image. The new host entry triggers the Red Hat OpenStack Platform server to create the instance using the pre-existing image as a basis for the new volume. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Optional: Click the Organization tab and change the organization context to match your requirement. Optional: Click the Location tab and change the location context to match your requirement. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form. From the Deploy on list, select the Red Hat OpenStack Platform connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. From the Lifecycle Environment list, select the environment. Click the Interfaces tab, and on the interface of the host, click Edit . 
Verify that the fields are populated with values. Note in particular: Satellite automatically assigns an IP address for the new host. Ensure that the MAC address field is blank. Red Hat OpenStack Platform assigns a MAC address to the host during provisioning. The Name from the Host tab becomes the DNS name . Ensure that Satellite automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab, and confirm that all fields automatically contain values. If you want to change the image that populates automatically from your compute profile, from the Images list, select a different image to base the new host's root volume on. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host entry. CLI procedure Create the host with the hammer host create command and include --provision-method image . Replace the values in the following example with the appropriate values for your environment. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.
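Assuming the example resource names used in this chapter, you can quickly confirm the compute resource, image, and compute profile from the Satellite CLI before creating hosts. The commands below are a sketch; adjust the names to match your environment.

# Confirm the OpenStack compute resource exists and inspect its details
hammer compute-resource list
hammer compute-resource info --name "My_OpenStack"

# List the images registered on the compute resource for image-based provisioning
hammer compute-resource image list --compute-resource "My_OpenStack"

# Review the compute profile that will populate the virtual machine settings
hammer compute-profile info --name "My_Compute_Profile"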
|
[
"hammer compute-resource create --name \" My_OpenStack \" --provider \"OpenStack\" --description \" My OpenStack environment at openstack.example.com \" --url \" http://openstack.example.com :5000/v3/auth/tokens\" --user \" My_Username \" --password \" My_Password \" --tenant \" My_Openstack \" --domain \" My_User_Domain \" --project-domain-id \" My_Project_Domain_ID \" --project-domain-name \" My_Project_Domain_Name \" --locations \"New York\" --organizations \" My_Organization \"",
"hammer compute-resource image create --name \"OpenStack Image\" --compute-resource \" My_OpenStack_Platform \" --operatingsystem \"RedHat version \" --architecture \"x86_64\" --username root --user-data true --uuid \" /path/to/OpenstackImage.qcow2 \"",
"hammer compute-profile values create --compute-resource \" My_Laptop \" --compute-profile \" My_Compute_Profile \" --compute-attributes \"availability_zone= My_Zone ,image_ref= My_Image ,flavor_ref=m1.small,tenant_id=openstack,security_groups=default,network= My_Network ,boot_from_volume=false\"",
"hammer host create --compute-attributes=\"flavor_ref=m1.small,tenant_id=openstack,security_groups=default,network=mynetwork\" --compute-resource \" My_OpenStack_Platform \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_OpenStack_Image \" --interface \"managed=true,primary=true,provision=true\" --location \" My_Location \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \" --provision-method image"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/provisioning_cloud_instances_openstack_openstack-provisioning
|
Chapter 2. Container security
|
Chapter 2. Container security 2.1. Understanding container security Securing a containerized application relies on multiple levels of security: Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline. Important Image streams by default do not automatically update. This default behavior might create a security issue because security updates to images referenced by an image stream do not automatically occur. For information about how to override this default behavior, see Configuring periodic importing of imagestreamtags . When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it. Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images. Beyond what a platform such as OpenShift Container Platform offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring OpenShift Container Platform into your data center. Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to OpenShift Container Platform, before it can meet your organization's security standards. This guide provides a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific OpenShift Container Platform documentation to help you achieve those security measures. This guide contains the following information: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. The goal of this guide is to understand the incredible security benefits of using OpenShift Container Platform for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It will also help you understand how you can engage with the OpenShift Container Platform to achieve your organization's security goals. 2.1.1. What are containers? Containers package an application and all its dependencies into a single image that can be promoted from development, to test, to production, without change. A container might be part of a larger application that works closely with other containers. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. 
Some of the benefits of using containers include: Infrastructure Applications Sandboxed application processes on a shared Linux operating system kernel Package my application and all of its dependencies Simpler, lighter, and denser than virtual machines Deploy to any environment in seconds and enable CI/CD Portable across different environments Easily access and share containerized components See Understanding Linux containers from the Red Hat Customer Portal to find out more about Linux containers. To learn about RHEL container tools, see Building, running, and managing containers in the RHEL product documentation. 2.1.2. What is OpenShift Container Platform? Automating how containerized applications are deployed, run, and managed is the job of a platform such as OpenShift Container Platform. At its core, OpenShift Container Platform relies on the Kubernetes project to provide the engine for orchestrating containers across many nodes in scalable data centers. Kubernetes is a project, which can run using different operating systems and add-on components that offer no guarantees of supportability from the project. As a result, the security of different Kubernetes platforms can vary. OpenShift Container Platform is designed to lock down Kubernetes security and integrate the platform with a variety of extended components. To do this, OpenShift Container Platform draws on the extensive Red Hat ecosystem of open source technologies that include the operating systems, authentication, storage, networking, development tools, base container images, and many other components. OpenShift Container Platform can leverage Red Hat's experience in uncovering and rapidly deploying fixes for vulnerabilities in the platform itself as well as the containerized applications running on the platform. Red Hat's experience also extends to efficiently integrating new components with OpenShift Container Platform as they become available and adapting technologies to individual customer needs. Additional resources OpenShift Container Platform architecture OpenShift Security Guide 2.2. Understanding host and VM security Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding RHCOS, which is the operating system used by OpenShift Container Platform, will help you see how the host systems protect containers and hosts from each other. 2.2.1. Securing containers on Red Hat Enterprise Linux CoreOS (RHCOS) Containers simplify the act of deploying many applications to run on the same host, using the same kernel and container runtime to spin up each container. The applications can be owned by many users and, because they are kept separate, can run different, and even incompatible, versions of those applications at the same time without issue. In Linux, containers are just a special type of process, so securing containers is similar in many ways to securing any other running process. An environment for running containers starts with an operating system that can secure the host kernel from containers and other processes running on the host, as well as secure containers from each other. Because OpenShift Container Platform 4.16 runs on RHCOS hosts, with the option of using Red Hat Enterprise Linux (RHEL) as worker nodes, the following concepts apply by default to any deployed OpenShift Container Platform cluster. 
These RHEL security features are at the core of what makes running containers in OpenShift Container Platform more secure: Linux namespaces enable creating an abstraction of a particular global system resource to make it appear as a separate instance to processes within a namespace. Consequently, several containers can use the same computing resource simultaneously without creating a conflict. Container namespaces that are separate from the host by default include mount table, process table, network interface, user, control group, UTS, and IPC namespaces. Those containers that need direct access to host namespaces need to have elevated permissions to request that access. See Building, running, and managing containers from the RHEL 9 container documentation for details on the types of namespaces. SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file. Warning Disabling SELinux on RHCOS is not supported. CGroups (control groups) limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are used to ensure that containers on the same host are not impacted by each other. Secure computing mode (seccomp) profiles can be associated with a container to restrict available system calls. See page 94 of the Red Hat OpenShift security guide for details about seccomp. Deploying containers using RHCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift Container Platform to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features. RHCOS is a version of Red Hat Enterprise Linux (RHEL) that is specially configured to work as control plane (master) and worker nodes on OpenShift Container Platform clusters. So RHCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift Container Platform services. To further protect RHCOS systems in OpenShift Container Platform clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OpenShift Container Platform clusters. Additional resources How nodes enforce resource constraints Managing security context constraints Supported platforms for OpenShift clusters Requirements for a cluster with user-provisioned infrastructure Choosing how to configure RHCOS Ignition Kernel arguments Kernel modules Disk encryption Chrony time service About the OpenShift Update Service FIPS cryptography 2.2.2. Comparing virtualization and containers Traditional virtualization provides another way to keep application environments separate on the same physical host. However, virtual machines work in a different way than containers. Virtualization relies on a hypervisor spinning up guest virtual machines (VMs), each of which has its own operating system (OS), represented by a running kernel, as well as the running application and its dependencies. With VMs, the hypervisor isolates the guests from each other and from the host kernel. 
Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. That said, security must still be monitored: one guest VM might be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs to be patched, it must be patched on all guest VMs using that OS. Containers can be run inside guest VMs, and there might be use cases where this is desirable. For example, you might be deploying a traditional application in a container, perhaps to lift-and-shift an application to the cloud. Container separation on a single host, however, provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications. Containers are generally much smaller than VMs and consume less memory and CPU. See Linux Containers Compared to KVM Virtualization in the RHEL 7 container documentation to learn about the differences between container and VMs. 2.2.3. Securing OpenShift Container Platform When you deploy OpenShift Container Platform, you have the choice of an installer-provisioned infrastructure (there are several available platforms) or your own user-provisioned infrastructure. Some low-level security-related configuration, such as enabling FIPS mode or adding kernel modules required at first boot, might benefit from a user-provisioned infrastructure. Likewise, user-provisioned infrastructure is appropriate for disconnected OpenShift Container Platform deployments. Keep in mind that, when it comes to making security enhancements and other configuration changes to OpenShift Container Platform, the goals should include: Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways. Managing modifications to nodes through OpenShift Container Platform as much as possible, rather than making direct, one-off changes to the nodes. In pursuit of those goals, most node changes should be done during installation through Ignition or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator. Examples of security-related configuration changes you can do in this way include: Adding kernel arguments Adding kernel modules Enabling support for FIPS cryptography Configuring disk encryption Configuring the chrony time service Besides the Machine Config Operator, there are several other Operators available to configure OpenShift Container Platform infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OpenShift Container Platform cluster updates. Additional resources FIPS cryptography 2.3. Hardening RHCOS RHCOS was created and tuned to be deployed in OpenShift Container Platform with few if any changes needed to RHCOS nodes. Every organization adopting OpenShift Container Platform has its own requirements for system hardening. As a RHEL system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), RHCOS can be hardened just as you would any RHEL system. Differences lie in the ways you manage the hardening. A key feature of OpenShift Container Platform and its Kubernetes engine is to be able to quickly scale applications and infrastructure up and down as needed. 
Unless it is unavoidable, you do not want to make direct changes to RHCOS by logging into a host and adding software or changing settings. You want to have the OpenShift Container Platform installer and control plane manage changes to RHCOS so new nodes can be spun up without manual intervention. So, if you are setting out to harden RHCOS nodes in OpenShift Container Platform to meet your security needs, you should consider both what to harden and how to go about doing that hardening. 2.3.1. Choosing what to harden in RHCOS The RHEL 9 Security Hardening guide describes how you should approach security for any RHEL system. Use this guide to learn how to approach cryptography, evaluate vulnerabilities, and assess threats to various services. Likewise, you can learn how to scan for compliance standards, check file integrity, perform auditing, and encrypt storage devices. With the knowledge of what features you want to harden, you can then decide how to harden them in RHCOS. 2.3.2. Choosing how to harden RHCOS Direct modification of RHCOS systems in OpenShift Container Platform is discouraged. Instead, you should think of modifying systems in pools of nodes, such as worker nodes and control plane nodes. When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an RHCOS image plus the modifications you created earlier. There are opportunities for modifying RHCOS before installation, during installation, and after the cluster is up and running. 2.3.2.1. Hardening before installation For bare metal installations, you can add hardening features to RHCOS before beginning the OpenShift Container Platform installation. For example, you can add kernel options when you boot the RHCOS installer to turn security features on or off, such as various SELinux booleans or low-level settings, such as symmetric multithreading. Warning Disabling SELinux on RHCOS nodes is not supported. Although bare metal RHCOS installations are more difficult, they offer the opportunity of getting operating system changes in place before starting the OpenShift Container Platform installation. This can be important when you need to ensure that certain features, such as disk encryption or special networking settings, be set up at the earliest possible moment. 2.3.2.2. Hardening during installation You can interrupt the OpenShift Container Platform installation process and change Ignition configs. Through Ignition configs, you can add your own files and systemd services to the RHCOS nodes. You can also make some basic security-related changes to the install-config.yaml file used for installation. Contents added in this way are available at each node's first boot. 2.3.2.3. Hardening after the cluster is running After the OpenShift Container Platform cluster is up and running, there are several ways to apply hardening features to RHCOS: Daemon set: If you need a service to run on every node, you can add that service with a Kubernetes DaemonSet object . Machine config: MachineConfig objects contain a subset of Ignition configs in the same format. By applying machine configs to all worker or control plane nodes, you can ensure that the node of the same type that is added to the cluster has the same changes applied. All of the features noted here are described in the OpenShift Container Platform product documentation. 
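To make the machine config approach above more concrete, the following is a minimal sketch of a MachineConfig object that adds a kernel argument to every node in the worker pool; the object name and the audit=1 argument are illustrative choices, not a Red Hat-recommended baseline.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-kernel-arg-audit # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker # target the worker pool
spec:
  kernelArguments:
    - audit=1 # example security-related kernel argument
Applying the object with oc apply -f <file> lets the Machine Config Operator roll the change out across the worker pool, typically rebooting each node in turn.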
Additional resources OpenShift Security Guide Choosing how to configure RHCOS Modifying Nodes Manually creating the installation configuration file Creating the Kubernetes manifest and Ignition config files Installing RHCOS by using an ISO image Customizing nodes Adding kernel arguments to nodes Optional configuration parameters Support for FIPS cryptography RHEL core crypto components 2.4. Container image signatures Red Hat delivers signatures for the images in the Red Hat Container Registries. Those signatures can be automatically verified when being pulled to OpenShift Container Platform 4 clusters by using the Machine Config Operator (MCO). Quay.io serves most of the images that make up OpenShift Container Platform, and only the release image is signed. Release images refer to the approved OpenShift Container Platform images, offering a degree of protection against supply chain attacks. However, some extensions to OpenShift Container Platform, such as logging, monitoring, and service mesh, are shipped as Operators from the Operator Lifecycle Manager (OLM). Those images ship from the Red Hat Ecosystem Catalog Container images registry. To verify the integrity of those images between Red Hat registries and your infrastructure, enable signature verification. 2.4.1. Enabling signature verification for Red Hat Container Registries Enabling container signature validation for Red Hat Container Registries requires writing a signature verification policy file specifying the keys to verify images from these registries. For RHEL8 nodes, the registries are already defined in /etc/containers/registries.d by default. Procedure Create a Butane config file, 51-worker-rh-registry-trust.bu , containing the necessary configuration for the worker nodes. Note See "Creating machine configs with Butane" for information about Butane. 
variant: openshift version: 4.16.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Use Butane to generate a machine config YAML file, 51-worker-rh-registry-trust.yaml , containing the file to be written to disk on the worker nodes: USD butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml Apply the created machine config: USD oc apply -f 51-worker-rh-registry-trust.yaml Check that the worker machine config pool has rolled out with the new machine config: Check that the new machine config was created: USD oc get mc Sample output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2 1 New machine config 2 New rendered machine config Check that the worker machine config pool is updating with the new machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1 1 When the UPDATING field is True , the machine config pool is updating with the new machine config. When the field becomes False , the worker machine config pool has rolled out to the new machine config. If your cluster uses any RHEL7 worker nodes, when the worker machine config pool is updated, create YAML files on those nodes in the /etc/containers/registries.d directory, which specify the location of the detached signatures for a given registry server. The following example works only for images hosted in registry.access.redhat.com and registry.redhat.io . 
Start a debug session to each RHEL7 worker node: USD oc debug node/<node_name> Change your root directory to /host : sh-4.2# chroot /host Create a /etc/containers/registries.d/registry.redhat.io.yaml file that contains the following: docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Create a /etc/containers/registries.d/registry.access.redhat.com.yaml file that contains the following: docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore Exit the debug session. 2.4.2. Verifying the signature verification configuration After you apply the machine configs to the cluster, the Machine Config Controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Prerequisites You enabled signature verification by using a machine config file. Procedure On the command line, run the following command to display information about a desired worker: USD oc describe machineconfigpool/worker Example output of initial worker monitoring Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated 
Machine Count: 0 Events: <none> Run the oc describe command again: USD oc describe machineconfigpool/worker Example output after the worker is updated ... Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 ... Note The Observed Generation parameter shows an increased count based on the generation of the controller-produced configuration. This controller updates this value even if it fails to process the specification and generate a revision. The Configuration Source value points to the 51-worker-rh-registry-trust configuration. Confirm that the policy.json file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/policy.json Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Confirm that the registry.redhat.io.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Confirm that the registry.access.redhat.com.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore 2.4.3. Understanding the verification of container images lacking verifiable signatures Each OpenShift Container Platform release image is immutable and signed with a Red Hat production key. During an OpenShift Container Platform update or installation, a release image might deploy container images that do not have verifiable signatures. Each signed release image digest is immutable. Each reference in the release image is to the immutable digest of another image, so the contents can be trusted transitively. In other words, the signature on the release image validates all release contents. 
For example, the image references lacking a verifiable signature are contained in the signed OpenShift Container Platform release image: Example release info output USD oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2 1 Signed release image SHA. 2 Container image lacking a verifiable signature included in the release. 2.4.3.1. Automated verification during updates Verification of signatures is automatic. The OpenShift Cluster Version Operator (CVO) verifies signatures on the release images during an OpenShift Container Platform update. This is an internal process. An OpenShift Container Platform installation or update fails if the automated verification fails. Verification of signatures can also be done manually using the skopeo command-line utility. Additional resources Introduction to OpenShift Updates 2.4.3.2. Using skopeo to verify signatures of Red Hat container images You can verify the signatures for container images included in an OpenShift Container Platform release image by pulling those signatures from the OCP release mirror site . Because the signatures on the mirror site are not in a format readily understood by Podman or CRI-O, you can use the skopeo standalone-verify command to verify that your release images are signed by Red Hat. Prerequisites You have installed the skopeo command-line utility. Procedure Get the full SHA for your release by running the following command: USD oc adm release info <release_version> \ 1 1 Substitute <release_version> with your release number, for example, 4.14.3 . Example output snippet --- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 --- Pull down the Red Hat release key by running the following command: USD curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt Get the signature file for the specific release that you want to verify by running the following command: USD curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \ 1 1 Replace <sha_from_version> with the SHA value from the full link to the mirror site that matches the SHA of your release. For example, the link to the signature for the 4.12.23 release is https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55/signature-1 , and the SHA value is e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Get the manifest for the release image by running the following command: USD skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \ 1 1 Replace <quay_link_to_release> with the output of the oc adm release info command. For example, quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Use skopeo to verify the signature: USD skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key where: <release_number> Specifies the release number, for example 4.14.3 . <arch> Specifies the architecture, for example x86_64 .
Example output Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 2.4.4. Additional resources Machine Config Overview 2.5. Understanding compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. 2.5.1. Understanding compliance and risk management FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book . Additional resources Installing a cluster in FIPS mode 2.6. Securing container content To ensure the security of the content inside your containers you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images. 2.6.1. Securing inside the container Applications and infrastructures are composed of readily available components, many of which are open source packages such as, the Linux operating system, JBoss Web Server, PostgreSQL, and Node.js. Containerized versions of these packages are also available. However, you need to know where the packages originally came from, what versions are used, who built them, and whether there is any malicious code inside them. Some questions to answer include: Will what is inside the containers compromise your infrastructure? Are there known vulnerabilities in the application layer? Are the runtime and operating system layers current? By building your containers from Red Hat Universal Base Images (UBI) you are assured of a foundation for your container images that consists of the same RPM-packaged software that is included in Red Hat Enterprise Linux. No subscriptions are required to either use or redistribute UBI images. To assure ongoing security of the containers themselves, security scanning features, used directly from RHEL or added to OpenShift Container Platform, can alert you when an image you are using has vulnerabilities. OpenSCAP image scanning is available in RHEL and the Red Hat Quay Container Security Operator can be added to check container images used in OpenShift Container Platform. 2.6.2. Creating redistributable images with UBI To create containerized applications, you typically start with a trusted base image that offers the components that are usually provided by the operating system. 
These include the libraries, utilities, and other features the application expects to see in the operating system's file system. Red Hat Universal Base Images (UBI) were created to encourage anyone building their own containers to start with one that is made entirely from Red Hat Enterprise Linux rpm packages and other content. These UBI images are updated regularly to keep up with security patches and are free to use and redistribute with container images built to include your own software. Search the Red Hat Ecosystem Catalog to both find and check the health of different UBI images. As someone creating secure container images, you might be interested in these two general types of UBI images: UBI : There are standard UBI images for RHEL 7, 8, and 9 ( ubi7/ubi , ubi8/ubi , and ubi9/ubi ), as well as minimal images based on those systems ( ubi7/ubi-minimal , ubi8/ubi-minimal , and ubi9/ubi-minimal). All of these images are preconfigured to point to free repositories of RHEL software that you can add to the container images you build, using standard yum and dnf commands. Red Hat encourages people to use these images on other distributions, such as Fedora and Ubuntu. Red Hat Software Collections : Search the Red Hat Ecosystem Catalog for rhscl/ to find images created to use as base images for specific types of applications. For example, there are Apache httpd ( rhscl/httpd-* ), Python ( rhscl/python-* ), Ruby ( rhscl/ruby-* ), Node.js ( rhscl/nodejs-* ) and Perl ( rhscl/perl-* ) rhscl images. Keep in mind that while UBI images are freely available and redistributable, Red Hat support for these images is only available through Red Hat product subscriptions. See Using Red Hat Universal Base Images in the Red Hat Enterprise Linux documentation for information on how to use and build on standard, minimal and init UBI images. 2.6.3. Security scanning in RHEL For Red Hat Enterprise Linux (RHEL) systems, OpenSCAP scanning is available from the openscap-utils package. In RHEL, you can use the openscap-podman command to scan images for vulnerabilities. See Scanning containers and container images for vulnerabilities in the Red Hat Enterprise Linux documentation. OpenShift Container Platform enables you to leverage RHEL scanners with your CI/CD process. For example, you can integrate static code analysis tools that test for security flaws in your source code and software composition analysis tools that identify open source libraries to provide metadata on those libraries such as known vulnerabilities. 2.6.3.1. Scanning OpenShift images For the container images that are running in OpenShift Container Platform and are pulled from Red Hat Quay registries, you can use an Operator to list the vulnerabilities of those images. The Red Hat Quay Container Security Operator can be added to OpenShift Container Platform to provide vulnerability reporting for images added to selected namespaces. Container image scanning for Red Hat Quay is performed by Clair . In Red Hat Quay, Clair can search for and report vulnerabilities in images built from RHEL, CentOS, Oracle, Alpine, Debian, and Ubuntu operating system software. 2.6.4. Integrating external scanning OpenShift Container Platform makes use of object annotations to extend functionality. External tools, such as vulnerability scanners, can annotate image objects with metadata to summarize results and control pod execution. This section describes the recognized format of this annotation so it can be reliably used in consoles to display useful data to users.
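Before moving on to the annotation format, here is a brief illustration of the UBI-based build approach described in section 2.6.2; the application file app.py, the python3 package choice, and the non-root UID are hypothetical details, not part of the product documentation.
FROM registry.access.redhat.com/ubi9/ubi-minimal:latest
# ubi-minimal images ship microdnf rather than the full dnf stack
RUN microdnf install -y python3 && microdnf clean all
COPY app.py /opt/app/app.py
# run as a non-root user, in line with the least-privilege guidance earlier in this chapter
USER 1001
CMD ["python3", "/opt/app/app.py"]
Because the base image comes from the Red Hat registry and is refreshed regularly, rebuilding this image periodically picks up the security patches described above, and the result remains freely redistributable.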
2.6.4.1. Image metadata There are different types of image quality data, including package vulnerabilities and open source software (OSS) license compliance. Additionally, there may be more than one provider of this metadata. To that end, the following annotation format has been reserved: Table 2.1. Annotation key format Component Description Acceptable values qualityType Metadata type vulnerability license operations policy providerId Provider ID string openscap redhatcatalog redhatinsights blackduck jfrog 2.6.4.1.1. Example annotation keys The value of the image quality annotation is structured data that must adhere to the following format: Table 2.2. Annotation value format Field Required? Description Type name Yes Provider display name String timestamp Yes Scan timestamp String description No Short description String reference Yes URL of information source or more details. Required so user may validate the data. String scannerVersion No Scanner version String compliant No Compliance pass or fail Boolean summary No Summary of issues found List (see table below) The summary field must adhere to the following format: Table 2.3. Summary field value format Field Description Type label Display label for component (for example, "critical," "important," "moderate," "low," or "health") String data Data for this component (for example, count of vulnerabilities found or score) String severityIndex Component index allowing for ordering and assigning graphical representation. The value is range 0..3 where 0 = low. Integer reference URL of information source or more details. Optional. String 2.6.4.1.2. Example annotation values This example shows an OpenSCAP annotation for an image with vulnerability summary data and a compliance boolean: OpenSCAP annotation { "name": "OpenSCAP", "description": "OpenSCAP vulnerability score", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://www.open-scap.org/930492", "compliant": true, "scannerVersion": "1.2", "summary": [ { "label": "critical", "data": "4", "severityIndex": 3, "reference": null }, { "label": "important", "data": "12", "severityIndex": 2, "reference": null }, { "label": "moderate", "data": "8", "severityIndex": 1, "reference": null }, { "label": "low", "data": "26", "severityIndex": 0, "reference": null } ] } This example shows the Container images section of the Red Hat Ecosystem Catalog annotation for an image with health index data with an external URL for additional details: Red Hat Ecosystem Catalog annotation { "name": "Red Hat Ecosystem Catalog", "description": "Container health index", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://access.redhat.com/errata/RHBA-2016:1566", "compliant": null, "scannerVersion": "1.2", "summary": [ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ] } 2.6.4.2. Annotating image objects While image stream objects are what an end user of OpenShift Container Platform operates against, image objects are annotated with security metadata. Image objects are cluster-scoped, pointing to a single image that may be referenced by many image streams and tags. 2.6.4.2.1. 
Example annotate CLI command Replace <image> with an image digest, for example sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2 : USD oc annotate image <image> \ quality.images.openshift.io/vulnerability.redhatcatalog='{ \ "name": "Red Hat Ecosystem Catalog", \ "description": "Container health index", \ "timestamp": "2020-06-01T05:04:46Z", \ "compliant": null, \ "scannerVersion": "1.2", \ "reference": "https://access.redhat.com/errata/RHBA-2020:2347", \ "summary": "[ \ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ]" }' 2.6.4.3. Controlling pod execution Use the images.openshift.io/deny-execution image policy to programmatically control if an image can be run. 2.6.4.3.1. Example annotation annotations: images.openshift.io/deny-execution: true 2.6.4.4. Integration reference In most cases, external tools such as vulnerability scanners develop a script or plugin that watches for image updates, performs scanning, and annotates the associated image object with the results. Typically this automation calls the OpenShift Container Platform 4.16 REST APIs to write the annotation. See OpenShift Container Platform REST APIs for general information on the REST APIs. 2.6.4.4.1. Example REST API call The following example call using curl overrides the value of the annotation. Be sure to replace the values for <token> , <openshift_server> , <image_id> , and <image_annotation> . Patch API call USD curl -X PATCH \ -H "Authorization: Bearer <token>" \ -H "Content-Type: application/merge-patch+json" \ https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> \ --data '{ <image_annotation> }' The following is an example of PATCH payload data: Patch call data { "metadata": { "annotations": { "quality.images.openshift.io/vulnerability.redhatcatalog": "{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }" } } } Additional resources Image stream objects 2.7. Using container registries securely Container registries store container images to: Make images accessible to others Organize images into repositories that can include multiple versions of an image Optionally limit access to images, based on different authentication methods, or make them publicly available There are public container registries, such as Quay.io and Docker Hub where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay . From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images. 2.7.1. Knowing where containers come from? There are tools you can use to scan and track the contents of your downloaded and deployed container images. However, there are many public sources of container images. 
When using public container registries, you can add a layer of protection by using trusted sources. 2.7.2. Immutable and certified containers Consuming security updates is particularly important when managing immutable containers . Immutable containers are containers that will never be changed while running. When you deploy immutable containers, you do not step into the running container to replace one or more binaries. From an operational standpoint, you rebuild and redeploy an updated container image to replace a container instead of changing it. Red Hat certified images are: Free of known vulnerabilities in the platform components or layers Compatible across the RHEL platforms, from bare metal to cloud Supported by Red Hat The list of known vulnerabilities is constantly evolving, so you must track the contents of your deployed container images, as well as newly downloaded images, over time. You can use Red Hat Security Advisories (RHSAs) to alert you to any newly discovered issues in Red Hat certified container images, and direct you to the updated image. Alternatively, you can go to the Red Hat Ecosystem Catalog to look up that and other security-related issues for each Red Hat image. 2.7.3. Getting containers from Red Hat Registry and Ecosystem Catalog Red Hat lists certified container images for Red Hat products and partner offerings from the Container Images section of the Red Hat Ecosystem Catalog. From that catalog, you can see details of each image, including CVE, software packages listings, and health scores. Red Hat images are actually stored in what is referred to as the Red Hat Registry , which is represented by a public container registry ( registry.access.redhat.com ) and an authenticated registry ( registry.redhat.io ). Both include basically the same set of container images, with registry.redhat.io including some additional images that require authentication with Red Hat subscription credentials. Container content is monitored for vulnerabilities by Red Hat and updated regularly. When Red Hat releases security updates, such as fixes to glibc , DROWN , or Dirty Cow , any affected container images are also rebuilt and pushed to the Red Hat Registry. Red Hat uses a health index to reflect the security risk for each container provided through the Red Hat Ecosystem Catalog. Because containers consume software provided by Red Hat and the errata process, old, stale containers are insecure whereas new, fresh containers are more secure. To illustrate the age of containers, the Red Hat Ecosystem Catalog uses a grading system. A freshness grade is a measure of the oldest and most severe security errata available for an image. "A" is more up to date than "F". See Container Health Index grades as used inside the Red Hat Ecosystem Catalog for more details on this grading system. See the Red Hat Product Security Center for details on security updates and vulnerabilities related to Red Hat software. Check out Red Hat Security Advisories to search for specific advisories and CVEs. 2.7.4. OpenShift Container Registry OpenShift Container Platform includes the OpenShift Container Registry , a private registry running as an integrated component of the platform that you can use to manage your container images. The OpenShift Container Registry provides role-based access controls that allow you to manage who can pull and push which container images. OpenShift Container Platform also supports integration with other private registries that you might already be using, such as Red Hat Quay. 
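As a small example of the role-based access controls mentioned for the integrated registry, the following command grants pull-only access on one project's images to a service account from another project; project-a, project-b, and the default service account are illustrative names.
oc policy add-role-to-user system:image-puller \
  system:serviceaccount:project-b:default \
  --namespace=project-a
With this binding in place, pods in project-b that run as the default service account can pull images that were pushed to the integrated registry under project-a.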
Additional resources Integrated OpenShift image registry 2.7.5. Storing containers using Red Hat Quay Red Hat Quay is an enterprise-quality container registry product from Red Hat. Development for Red Hat Quay is done through the upstream Project Quay . Red Hat Quay is available to deploy on-premise or through the hosted version of Red Hat Quay at Quay.io . Security-related features of Red Hat Quay include: Time machine : Allows images with older tags to expire after a set period of time or based on a user-selected expiration time. Repository mirroring : Lets you mirror other registries for security reasons, such as hosting a public repository on Red Hat Quay behind a company firewall, or for performance reasons, to keep registries closer to where they are used. Action log storage : Save Red Hat Quay logging output to Elasticsearch storage or Splunk to allow for later search and analysis. Clair : Scan images against a variety of Linux vulnerability databases, based on the origins of each container image. Internal authentication : Use the default local database to handle RBAC authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token authentication. External authorization (OAuth) : Allow authorization to Red Hat Quay from GitHub, GitHub Enterprise, or Google Authentication. Access settings : Generate tokens to allow access to Red Hat Quay from docker, rkt, anonymous access, user-created accounts, encrypted client passwords, or prefix username autocompletion. Ongoing integration of Red Hat Quay with OpenShift Container Platform continues, with several OpenShift Container Platform Operators of particular interest. The Quay Bridge Operator lets you replace the internal OpenShift image registry with Red Hat Quay. The Red Hat Quay Container Security Operator lets you check vulnerabilities of images running in OpenShift Container Platform that were pulled from Red Hat Quay registries. 2.8. Securing the build process In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack. 2.8.1. Building once, deploying everywhere Using OpenShift Container Platform as the standard platform for container builds enables you to guarantee the security of the build environment. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It is also important to maintain the immutability of your containers. You should not patch running containers, but rebuild and redeploy them. As your software moves through the stages of building, testing, and production, it is important that the tools making up your software supply chain be trusted. The process and tools that could be incorporated into a trusted software supply chain for containerized software include the following: OpenShift Container Platform can be integrated with trusted code repositories (such as GitHub) and development platforms (such as Che) for creating and managing secure code. Unit testing could rely on Cucumber and JUnit . You could inspect your containers for vulnerabilities and compliance issues with Anchore or Twistlock, and use image scanning tools such as AtomicScan or Clair. Tools such as Sysdig could provide ongoing monitoring of your containerized applications. 2.8.2.
Managing builds You can use Source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. With Red Hat S2I images available as Universal Base Image (UBI) images, you can now freely redistribute your software with base images built from real RHEL RPM packages. Red Hat has removed subscription restrictions to allow this. When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions: Trigger, either by using webhooks on the code repository or other automated continuous integration (CI) process, to automatically assemble a new image from available artifacts, the S2I builder image, and the newly committed code. Automatically deploy the newly built image for testing. Promote the tested image to production where it can be automatically deployed using a CI process. You can use the integrated OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to your OpenShift Container Registry. In addition to the included Jenkins for CI, you can also integrate your own build and CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry. 2.8.3. Securing inputs during builds In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose. For example, when building a Node.js application, you can set up your private mirror for Node.js modules. To download modules from that private mirror, you must supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image. Using this example scenario, you can add an input secret to a new BuildConfig object: Create the secret, if it does not exist: USD oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc This creates a new secret named secret-npmrc , which contains the base64 encoded content of the ~/.npmrc file. Add the secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc To include the secret in a new BuildConfig object, run the following command: USD oc new-build \ openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \ --build-secret secret-npmrc 2.8.4. Designing your build process You can design your container image management and build process to use container layers so that you can separate control. For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on application layers and focus on writing code. Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example: SAST / DAST - Static and Dynamic security testing tools. Scanners for real-time checking against known vulnerabilities. 
Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages. Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment. Using GitOps methodology, you can use the same CI/CD mechanisms to manage not only your application configurations, but also your OpenShift Container Platform infrastructure. 2.8.5. Building Knative serverless applications Relying on Kubernetes and Kourier, you can build, deploy, and manage serverless applications by using OpenShift Serverless in OpenShift Container Platform. As with other builds, you can use S2I images to build your containers, then serve them using Knative services. View Knative application builds through the Topology view of the OpenShift Container Platform web console. 2.8.6. Additional resources Understanding image builds Triggering and modifying builds Creating build inputs Input secrets and config maps OpenShift Serverless overview Viewing application composition using the Topology view 2.9. Deploying containers You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified. 2.9.1. Controlling container deployments with triggers If something happens during the build process, or if a vulnerability is discovered after an image has been deployed, you can use tooling for automated, policy-based deployment to remediate. You can use triggers to rebuild and replace images, ensuring the immutable containers process, instead of patching running containers, which is not recommended. For example, you build an application using three container image layers: core, middleware, and applications. An issue is discovered in the core image and that image is rebuilt. After the build is complete, the image is pushed to your OpenShift Container Registry. OpenShift Container Platform detects that the image has changed and automatically rebuilds and deploys the application image, based on the defined triggers. This change incorporates the fixed libraries and ensures that the production code is identical to the most current image. You can use the oc set triggers command to set a deployment trigger. For example, to set a trigger for a deployment called deployment-example: USD oc set triggers deploy/deployment-example \ --from-image=example:latest \ --containers=web 2.9.2. Controlling what image sources can be deployed It is important that the intended images are actually being deployed, that the images including the contained content are from trusted sources, and they have not been altered. Cryptographic signing provides this assurance. OpenShift Container Platform enables cluster administrators to apply security policy that is broad or narrow, reflecting deployment environment and security requirements. 
Two parameters define this policy: one or more registries, with optional project namespace trust type, such as accept, reject, or require public key(s) You can use these policy parameters to allow, deny, or require a trust relationship for entire registries, parts of registries, or individual images. Using trusted public keys, you can ensure that the source is cryptographically verified. The policy rules apply to nodes. Policy may be applied uniformly across all nodes or targeted for different node workloads (for example, build, zone, or environment). Example image signature policy file { "default": [{"type": "reject"}], "transports": { "docker": { "access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "atomic": { "172.30.1.1:5000/openshift": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "172.30.1.1:5000/production": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/example.com/pubkey" } ], "172.30.1.1:5000": [{"type": "reject"}] } } } The policy can be saved onto a node as /etc/containers/policy.json . Saving this file to a node is best accomplished using a new MachineConfig object. This example enforces the following rules: Require images from the Red Hat Registry ( registry.access.redhat.com ) to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the openshift namespace to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the production namespace to be signed by the public key for example.com . Reject all other registries not specified by the global default definition. 2.9.3. Using signature transports A signature transport is a way to store and retrieve the binary signature blob. There are two types of signature transports. atomic : Managed by the OpenShift Container Platform API. docker : Served as a local file or by a web server. The OpenShift Container Platform API manages signatures that use the atomic transport type. You must store the images that use this signature type in your OpenShift Container Registry. Because the docker/distribution extensions API auto-discovers the image signature endpoint, no additional configuration is required. Signatures that use the docker transport type are served by local file or web server. These signatures are more flexible; you can serve images from any container image registry and use an independent server to deliver binary signatures. However, the docker transport type requires additional configuration. You must configure the nodes with the URI of the signature server by placing arbitrarily-named YAML files into a directory on the host system, /etc/containers/registries.d by default. The YAML configuration files contain a registry URI and a signature server URI, or sigstore : Example registries.d file docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore In this example, the Red Hat Registry, access.redhat.com , is the signature server that provides signatures for the docker transport type. Its URI is defined in the sigstore parameter. You might name this file /etc/containers/registries.d/redhat.com.yaml and use the Machine Config Operator to automatically place the file on each node in your cluster. No service restart is required since policy and registries.d files are dynamically loaded by the container runtime. 2.9.4. 
Creating secrets and config maps The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, and private source repository credentials. Secrets decouple sensitive content from pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. For example, to add a secret to your deployment configuration so that it can access a private image repository, do the following: Procedure Log in to the OpenShift Container Platform web console. Create a new project. Navigate to Resources Secrets and create a new secret. Set Secret Type to Image Secret and Authentication Type to Image Registry Credentials to enter credentials for accessing a private image repository. When creating a deployment configuration (for example, from the Add to Project Deploy Image page), set the Pull Secret to your new secret. Config maps are similar to secrets, but are designed to support working with strings that do not contain sensitive information. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. 2.9.5. Automating continuous deployment You can integrate your own continuous deployment (CD) tooling with OpenShift Container Platform. By leveraging CI/CD and OpenShift Container Platform, you can automate the process of rebuilding the application to incorporate the latest fixes, testing, and ensuring that it is deployed everywhere within the environment. Additional resources Input secrets and config maps 2.10. Securing the container platform OpenShift Container Platform and Kubernetes APIs are key to automating container management at scale. APIs are used to: Validate and configure the data for pods, services, and replication controllers. Perform project validation on incoming requests and invoke triggers on other major system components. Security-related features in OpenShift Container Platform that are based on Kubernetes include: Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels. Admission plugins, which form boundaries between an API and those making requests to the API. OpenShift Container Platform uses Operators to automate and simplify the management of Kubernetes-level security features. 2.10.1. Isolating containers with multitenancy Multitenancy allows applications on an OpenShift Container Platform cluster that are owned by multiple users, and run across multiple hosts and namespaces, to remain isolated from each other and from outside attacks. You obtain multitenancy by applying role-based access control (RBAC) to Kubernetes namespaces. In Kubernetes, namespaces are areas where applications can run in ways that are separate from other applications. OpenShift Container Platform uses and extends namespaces by adding extra annotations, including MCS labeling in SELinux, and identifying these extended namespaces as projects . Within the scope of a project, users can maintain their own cluster resources, including service accounts, policies, constraints, and various other objects. RBAC objects are assigned to projects to authorize selected users to have access to those projects. That authorization takes the form of rules, roles, and bindings: Rules define what a user can create or access in a project. 
Roles are collections of rules that you can bind to selected users or groups. Bindings define the association between users or groups and roles. Local RBAC roles and bindings attach a user or group to a particular project. Cluster RBAC can attach cluster-wide roles and bindings to all projects in a cluster. There are default cluster roles that can be assigned to provide admin , basic-user , cluster-admin , and cluster-status access. 2.10.2. Protecting control plane with admission plugins While RBAC controls access rules between users and groups and available projects, admission plugins define access to the OpenShift Container Platform master API. Admission plugins form a chain of rules that consist of: Default admissions plugins: These implement a default set of policies and resources limits that are applied to components of the OpenShift Container Platform control plane. Mutating admission plugins: These plugins dynamically extend the admission chain. They call out to a webhook server and can both authenticate a request and modify the selected resource. Validating admission plugins: These validate requests for a selected resource and can both validate the request and ensure that the resource does not change again. API requests go through admissions plugins in a chain, with any failure along the way causing the request to be rejected. Each admission plugin is associated with particular resources and only responds to requests for those resources. 2.10.2.1. Security context constraints (SCCs) You can use security context constraints (SCCs) to define a set of conditions that a pod must run with to be accepted into the system. Some aspects that can be managed by SCCs include: Running of privileged containers Capabilities a container can request to be added Use of host directories as volumes SELinux context of the container Container user ID If you have the required permissions, you can adjust the default SCC policies to be more permissive, if required. 2.10.2.2. Granting roles to service accounts You can assign roles to service accounts, in the same way that users are assigned role-based access. There are three default service accounts created for each project. A service account: is limited in scope to a particular project derives its name from its project is automatically assigned an API token and credentials to access the OpenShift Container Registry Service accounts associated with platform components automatically have their keys rotated. 2.10.3. Authentication and authorization 2.10.3.1. Controlling access using OAuth You can use API access control via authentication and authorization for securing your container platform. The OpenShift Container Platform master includes a built-in OAuth server. Users can obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to authenticate using an identity provider , such as LDAP, GitHub, or Google. The identity provider is used by default for new OpenShift Container Platform deployments, but you can configure this at initial installation time or postinstallation. 2.10.3.2. API access control and management Applications can have multiple, independent API services which have different endpoints that require management. OpenShift Container Platform includes a containerized version of the 3scale API gateway so that you can manage your APIs and control access. 
3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0. You can restrict access to specific endpoints, methods, and services and apply access policy for groups of users. Application plans allow you to set rate limits for API usage and control traffic flow for groups of developers. For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see Running APIcast on Red Hat OpenShift in the 3scale documentation. 2.10.3.3. Red Hat Single Sign-On The Red Hat Single Sign-On server enables you to secure your applications by providing web single sign-on capabilities based on standards, including SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID Connect-based identity provider (IdP), mediating with your enterprise user directory or third-party identity provider for identity information and your applications using standards-based tokens. You can integrate Red Hat Single Sign-On with LDAP-based directory services including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management. 2.10.3.4. Secure self-service web console OpenShift Container Platform provides a self-service web console to ensure that teams do not access other environments without authorization. OpenShift Container Platform ensures a secure multitenant master by providing the following: Access to the master uses Transport Layer Security (TLS) Access to the API Server uses X.509 certificates or OAuth access tokens Project quota limits the damage that a rogue token could do The etcd service is not exposed directly to the cluster 2.10.4. Managing certificates for the platform OpenShift Container Platform has multiple components within its framework that use REST-based HTTPS communication leveraging encryption via TLS certificates. OpenShift Container Platform's installer configures these certificates during installation. There are some primary components that generate this traffic: masters (API server and controllers) etcd nodes registry router 2.10.4.1. Configuring custom certificates You can configure custom serving certificates for the public hostnames of the API server and web console during initial installation or when redeploying certificates. You can also use a custom CA. Additional resources Introduction to OpenShift Container Platform Using RBAC to define and apply permissions About admission plugins Managing security context constraints SCC reference commands Examples of granting roles to service accounts Configuring the internal OAuth server Understanding identity provider configuration Certificate types and descriptions Proxy certificates 2.11. Securing networks Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications. 2.11.1. Using network namespaces OpenShift Container Platform uses software-defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster. Network policy mode, by default, makes all pods in a project accessible from other pods and network endpoints. 
To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Using multitenant mode, you can provide project-level isolation for pods and services. 2.11.2. Isolating pods with network policies Using network policies , you can isolate pods from each other in the same project. Network policies can deny all network access to a pod, only allow connections for the Ingress Controller, reject connections from pods in other projects, or set similar rules for how networks behave. Additional resources About network policy 2.11.3. Using multiple pod networks Each running container has only one network interface by default. The Multus CNI plugin lets you create multiple CNI networks, and then attach any of those networks to your pods. In that way, you can do things like separate private data onto a more restricted network and have multiple network interfaces on each node. Additional resources Using multiple networks 2.11.4. Isolating applications OpenShift Container Platform enables you to segment network traffic on a single cluster to make multitenant clusters that isolate users, teams, applications, and environments from non-global resources. Additional resources Configuring network isolation using OpenShiftSDN 2.11.5. Securing ingress traffic There are many security implications related to how you configure access to your Kubernetes services from outside of your OpenShift Container Platform cluster. Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up NodePort or LoadBalancer ingress types. NodePort exposes an application's service API object from each cluster worker. LoadBalancer lets you assign an external load balancer to an associated service API object in your OpenShift Container Platform cluster. Additional resources Configuring ingress cluster traffic 2.11.6. Securing egress traffic OpenShift Container Platform provides the ability to control egress traffic using either a router or firewall method. For example, you can use IP whitelisting to control database access. A cluster administrator can assign one or more egress IP addresses to a project in an OpenShift Container Platform SDN network provider. Likewise, a cluster administrator can prevent egress traffic from going outside of an OpenShift Container Platform cluster using an egress firewall. By assigning a fixed egress IP address, you can have all outgoing traffic assigned to that IP address for a particular project. With the egress firewall, you can prevent a pod from connecting to an external network, prevent a pod from connecting to an internal network, or limit a pod's access to specific internal subnets. Additional resources Configuring an egress firewall to control access to external IP addresses Configuring egress IPs for a project 2.12. Securing attached storage OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. In particular, OpenShift Container Platform can use storage types that support the Container Storage Interface. 2.12.1. Persistent volume plugins Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Using the Container Storage Interface (CSI), OpenShift Container Platform can incorporate storage from any storage back end that supports the CSI interface. 
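For illustration, storage from a CSI back end is consumed through an ordinary persistent volume claim. The following is a minimal sketch; the claim name, project, and gp3-csi storage class are hypothetical and depend on which CSI drivers and storage classes are available in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical claim name
  namespace: my-project     # hypothetical project
spec:
  accessModes:
    - ReadWriteOnce         # must be an access mode the back end supports
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3-csi # assumed CSI-backed storage class

A pod then references the claim by name in a volume definition, and the CSI driver provisions and attaches the backing volume.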
OpenShift Container Platform provides plugins for multiple types of storage, including: Red Hat OpenShift Data Foundation * AWS Elastic Block Stores (EBS) * AWS Elastic File System (EFS) * Azure Disk * Azure File * OpenStack Cinder * GCE Persistent Disks * VMware vSphere * Network File System (NFS) FlexVolume Fibre Channel iSCSI Plugins for those storage types with dynamic provisioning are marked with an asterisk (*). Data in transit is encrypted via HTTPS for all OpenShift Container Platform components communicating with each other. You can mount a persistent volume (PV) on a host in any way supported by your storage type. Different types of storage have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV has its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce , ReadOnlyMany , and ReadWriteMany . 2.12.2. Shared storage For shared storage providers like NFS, the PV registers its group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed by the pod, the annotated GID is added to the supplemental groups of the pod, giving that pod access to the contents of the shared storage. 2.12.3. Block storage For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI, OpenShift Container Platform uses SELinux capabilities to secure the root of the mounted volume for non-privileged pods, making the mounted volume owned by and only visible to the container with which it is associated. Additional resources Understanding persistent storage Configuring CSI volumes Dynamic provisioning Persistent storage using NFS Persistent storage using AWS Elastic Block Store Persistent storage using GCE Persistent Disk 2.13. Monitoring cluster events and logs The ability to monitor and audit an OpenShift Container Platform cluster is an important part of safeguarding the cluster and its users against inappropriate usage. There are two main sources of cluster-level information that are useful for this purpose: events and logging. 2.13.1. Watching cluster events Cluster administrators are encouraged to familiarize themselves with the Event resource type and review the list of system events to determine which events are of interest. Events are associated with a namespace, either the namespace of the resource they are related to or, for cluster events, the default namespace. The default namespace holds relevant events for monitoring or auditing a cluster, such as node events and resource events related to infrastructure components. The master API and oc command do not provide parameters to scope a listing of events to only those related to nodes. A simple approach would be to use grep : USD oc get event -n default | grep Node Example output 1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure ... A more flexible approach is to output the events in a form that other tools can process. 
For example, the following command uses the jq tool against JSON output to extract only NodeHasDiskPressure events: USD oc get events -n default -o json \ | jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")' Example output { "apiVersion": "v1", "count": 3, "involvedObject": { "kind": "Node", "name": "origin-node-1.example.local", "uid": "origin-node-1.example.local" }, "kind": "Event", "reason": "NodeHasDiskPressure", ... } Events related to resource creation, modification, or deletion can also be good candidates for detecting misuse of the cluster. The following query, for example, can be used to look for excessive pulling of images: USD oc get events --all-namespaces -o json \ | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length' Example output 4 Note When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent filling up etcd storage. Events are not stored as a permanent record and frequent polling is necessary to capture statistics over time. 2.13.2. Logging Using the oc logs command, you can view container logs, build configs and deployments in real time. Different users have different levels of access to logs: Users who have access to a project are able to see the logs for that project by default. Users with admin roles can access all container logs. To save your logs for further audit and analysis, you can enable the cluster-logging add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. 2.13.3. Audit logs With audit logs , you can follow a sequence of activities associated with how a user, administrator, or other OpenShift Container Platform component is behaving. API audit logging is done on each server. A short example of inspecting these logs with jq follows the resource list below. Additional resources List of system events Understanding OpenShift Logging Viewing audit logs
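As a sketch of how audit activity can be reviewed, the API server audit logs on control plane nodes can be listed and filtered with oc adm node-logs and jq. The node name and the username being filtered for below are hypothetical:

# List the available audit log files on the control plane nodes
oc adm node-logs --role=master --path=kube-apiserver/

# Stream one audit log and keep only entries produced by a particular user
oc adm node-logs master-0.example.com --path=kube-apiserver/audit.log \
  | jq 'select(.user.username == "system:admin")'

Each line of the audit log is a JSON document, so the same jq filtering approach shown for events also applies here.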
|
[
"variant: openshift version: 4.16.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }",
"butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml",
"oc apply -f 51-worker-rh-registry-trust.yaml",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1",
"oc debug node/<node_name>",
"sh-4.2# chroot /host",
"docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore",
"docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>",
"oc describe machineconfigpool/worker",
"Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3",
"oc debug node/<node> -- chroot /host cat /etc/containers/policy.json",
"Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }",
"oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml",
"Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore",
"oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml",
"Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore",
"oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2",
"oc adm release info <release_version> \\ 1",
"--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---",
"curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt",
"curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1",
"skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1",
"skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key",
"Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55",
"quality.images.openshift.io/<qualityType>.<providerId>: {}",
"quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}",
"{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }",
"{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }",
"oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'",
"annotations: images.openshift.io/deny-execution: true",
"curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'",
"{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }",
"oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc",
"source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc",
"oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc",
"oc set triggers deploy/deployment-example --from-image=example:latest --containers=web",
"{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }",
"docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore",
"oc get event -n default | grep Node",
"1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure",
"oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'",
"{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }",
"oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'",
"4"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/container-security-1
|
probe::netdev.change_mac
|
probe::netdev.change_mac Name probe::netdev.change_mac - Called when the netdev_name device has its MAC address changed Synopsis Values dev_name The device that will have the MAC address changed new_mac The new MAC address mac_len The MAC address length old_mac The current MAC address
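A minimal SystemTap script that uses this probe point might look like the following sketch; it simply prints a line each time a device's MAC address changes:

#! /usr/bin/env stap
# Report every MAC address change seen by the netdev.change_mac probe.
probe netdev.change_mac
{
  printf("%s: MAC changed from %s to %s (length %d)\n",
         dev_name, old_mac, new_mac, mac_len)
}

Run it with stap (for example, stap change_mac.stp) and trigger it by changing a device's MAC address, for example with ip link set.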
|
[
"netdev.change_mac"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-netdev-change-mac
|
Chapter 4. Resolved issues
|
Chapter 4. Resolved issues There are no resolved issues for this release. For details of any security fixes in this release, see the errata links in Advisories related to this release .
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_1_release_notes/resolved_issues
|
Acknowledgements
|
Acknowledgements Thank you to everyone who provided feedback as part of the RHEL 8 Readiness Challenge. The top 3 winners are: Sterling Alexander John Pittman Jake Hunsaker
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.0_release_notes/con_acknowledgements
|
8.4. Configuration Tools
|
8.4. Configuration Tools Red Hat Enterprise Linux provides a number of tools to assist administrators in configuring the storage and file systems. This section outlines the available tools and provides examples of how they can be used to solve I/O and file system related performance problems in Red Hat Enterprise Linux 7. 8.4.1. Configuring Tuning Profiles for Storage Performance The Tuned service provides a number of profiles designed to improve performance for specific use cases. The following profiles are particularly useful for improving storage performance. latency-performance throughput-performance (the default) To configure a profile on your system, run the following command, replacing name with the name of the profile you want to use. The tuned-adm recommend command recommends an appropriate profile for your system. For further details about these profiles or additional configuration options, see Section A.5, "tuned-adm" . 8.4.2. Setting the Default I/O Scheduler The default I/O scheduler is the scheduler that is used if no other scheduler is explicitly specified for the device. If no default scheduler is specified, the cfq scheduler is used for SATA drives, and the deadline scheduler is used for all other drives. If you specify a default scheduler by following the instructions in this section, that default scheduler is applied to all devices. To set the default I/O scheduler, you can use the Tuned tool, or modify the /etc/default/grub file manually. Red Hat recommends using the Tuned tool to specify the default I/O scheduler on a booted system. To set the elevator parameter, enable the disk plug-in. For information on the disk plug-in, see Section 3.1.1, "Plug-ins" in the Tuned chapter. To modify the default scheduler by using GRUB 2, append the elevator parameter to the kernel command line, either at boot time, or when the system is booted. You can use the Tuned tool, or modify the /etc/default/grub file manually, as described in Procedure 8.1, "Setting the Default I/O Scheduler by Using GRUB 2" . Procedure 8.1. Setting the Default I/O Scheduler by Using GRUB 2 To set the default I/O Scheduler on a booted system and make the configuration persist after reboot: Add the elevator parameter to the GRUB_CMDLINE_LINUX line in the /etc/default/grub file. In Red Hat Enterprise Linux 7, the available schedulers are deadline , noop , and cfq . For more information, see the cfq-iosched.txt and deadline-iosched.txt files in the documentation for your kernel, available after installing the kernel-doc package. Create a new configuration with the elevator parameter added. The location of the GRUB 2 configuration file is different on systems with the BIOS firmware and on systems with UEFI. Use one of the following commands to recreate the GRUB 2 configuration file. On a system with the BIOS firmware, use: On a system with the UEFI firmware, use: Reboot the system for the change to take effect. For more information on version 2 of the GNU GRand Unified Bootloader (GRUB 2), see the Working with the GRUB 2 Boot Loader chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide . 8.4.3. Generic Block Device Tuning Parameters The generic tuning parameters listed in this section are available within the /sys/block/sd X /queue/ directory. The listed tuning parameters are separate from I/O scheduler tuning, and are applicable to all I/O schedulers. add_random Some I/O events contribute to the entropy pool for /dev/random . 
This parameter can be set to 0 if the overhead of these contributions becomes measurable. iostats The default value is 1 ( enabled ). Setting iostats to 0 disables the gathering of I/O statistics for the device, which removes a small amount of overhead with the I/O path. Setting iostats to 0 might slightly improve performance for very high performance devices, such as certain NVMe solid-state storage devices. It is recommended to leave iostats enabled unless otherwise specified for the given storage model by the vendor. If you disable iostats , the I/O statistics for the device are no longer present within the /proc/diskstats file. The content of /sys/diskstats is the source of I/O information for monitoring I/O tools, such as sar or iostats . Therefore, if you disable the iostats parameter for a device, the device is no longer present in the output of I/O monitoring tools. max_sectors_kb Specifies the maximum size of an I/O request in kilobytes. The default value is 512 KB. The minimum value for this parameter is determined by the logical block size of the storage device. The maximum value for this parameter is determined by the value of max_hw_sectors_kb . Certain solid-state disks perform poorly when the I/O requests are larger than the internal erase block size. To determine if this is the case of the solid-state disk model attached to the system, check with the hardware vendor, and follow their recommendations. Red Hat recommends max_sectors_kb to always be a multiple of the optimal I/O size and the internal erase block size. Use a value of logical_block_size for either parameter if they are zero or not specified by the storage device. nomerges Most workloads benefit from request merging. However, disabling merges can be useful for debugging purposes. By default, the nomerges parameter is set to 0 , which enables merging. To disable simple one-hit merging, set nomerges to 1 . To disable all types of merging, set nomerges to 2 . nr_requests Specifies the maximum number of read and write requests that can be queued at one time. The default value is 128 , which means that 128 read requests and 128 write requests can be queued before the process to request a read or write is put to sleep. For latency-sensitive applications, lower the value of this parameter and limit the command queue depth on the storage so that write-back I/O cannot fill the device queue with write requests. When the device queue fills, other processes attempting to perform I/O operations are put to sleep until queue space becomes available. Requests are then allocated in a round-robin manner, which prevents one process from continuously consuming all spots in the queue. The maximum number of I/O operations within the I/O scheduler is nr_requests*2 . As stated, nr_requests is applied separately for reads and writes. Note that nr_requests only applies to the I/O operations within the I/O scheduler and not to I/O operations already dispatched to the underlying device. Therefore, the maximum outstanding limit of I/O operations against a device is (nr_requests*2)+(queue_depth) where queue_depth is /sys/block/sdN/device/queue_depth , sometimes also referred to as the LUN queue depth. You can see this total outstanding number of I/O operations in, for example, the output of iostat in the avgqu-sz column. optimal_io_size Some storage devices report an optimal I/O size through this parameter. 
If this value is reported, Red Hat recommends that applications issue I/O aligned to and in multiples of the optimal I/O size wherever possible. read_ahead_kb Defines the maximum number of kilobytes that the operating system may read ahead during a sequential read operation. As a result, the likely-needed information is already present within the kernel page cache for the sequential read, which improves read I/O performance. Device mappers often benefit from a high read_ahead_kb value. 128 KB for each device to be mapped is a good starting point, but increasing the read_ahead_kb value up to 4-8 MB might improve performance in application environments where sequential reading of large files takes place. rotational Some solid-state disks do not correctly advertise their solid-state status, and are mounted as traditional rotational disks. If your solid-state device does not set this to 0 automatically, set it manually to disable unnecessary seek-reducing logic in the scheduler. rq_affinity By default, I/O completions can be processed on a different processor than the processor that issued the I/O request. Set rq_affinity to 1 to disable this ability and perform completions only on the processor that issued the I/O request. This can improve the effectiveness of processor data caching. scheduler To set the scheduler or scheduler preference order for a particular storage device, edit the /sys/block/ devname /queue/scheduler file, where devname is the name of the device you want to configure. 8.4.4. Tuning the Deadline Scheduler When deadline is in use, queued I/O requests are sorted into a read or write batch and then scheduled for execution in increasing LBA order. Read batches take precedence over write batches by default, as applications are more likely to block on read I/O. After a batch is processed, deadline checks how long write operations have been starved of processor time and schedules the read or write batch as appropriate. The following parameters affect the behavior of the deadline scheduler. fifo_batch The number of read or write operations to issue in a single batch. The default value is 16 . A higher value can increase throughput, but will also increase latency. front_merges If your workload will never generate front merges, this tunable can be set to 0 . However, unless you have measured the overhead of this check, Red Hat recommends the default value of 1 . read_expire The number of milliseconds in which a read request should be scheduled for service. The default value is 500 (0.5 seconds). write_expire The number of milliseconds in which a write request should be scheduled for service. The default value is 5000 (5 seconds). writes_starved The number of read batches that can be processed before processing a write batch. The higher this value is set, the greater the preference given to read batches. 8.4.5. Tuning the CFQ Scheduler When CFQ is in use, processes are placed into three classes: real time, best effort, and idle. All real time processes are scheduled before any best effort processes, which are scheduled before any idle processes. By default, processes are classed as best effort. You can manually adjust the class of a process with the ionice command. You can further adjust the behavior of the CFQ scheduler with the following parameters. These parameters are set on a per-device basis by altering the specified files under the /sys/block/ devname /queue/iosched directory. back_seek_max The maximum distance in kilobytes that CFQ will perform a backward seek.
The default value is 16 KB. Backward seeks typically damage performance, so large values are not recommended. back_seek_penalty The multiplier applied to backward seeks when the disk head is deciding whether to move forward or backward. The default value is 2 . If the disk head position is at 1024 KB, and there are equidistant requests in the system (1008 KB and 1040 KB, for example), the back_seek_penalty is applied to backward seek distances and the disk moves forward. fifo_expire_async The length of time in milliseconds that an asynchronous (buffered write) request can remain unserviced. After this amount of time expires, a single starved asynchronous request is moved to the dispatch list. The default value is 250 milliseconds. fifo_expire_sync The length of time in milliseconds that a synchronous (read or O_DIRECT write) request can remain unserviced. After this amount of time expires, a single starved synchronous request is moved to the dispatch list. The default value is 125 milliseconds. group_idle This parameter is set to 0 (disabled) by default. When set to 1 (enabled), the cfq scheduler idles on the last process that is issuing I/O in a control group. This is useful when using proportional weight I/O control groups and when slice_idle is set to 0 (on fast storage). group_isolation This parameter is set to 0 (disabled) by default. When set to 1 (enabled), it provides stronger isolation between groups, but reduces throughput, as fairness is applied to both random and sequential workloads. When group_isolation is disabled (set to 0 ), fairness is provided to sequential workloads only. For more information, see the installed documentation in /usr/share/doc/kernel-doc-version/Documentation/cgroups/blkio-controller.txt . low_latency This parameter is set to 1 (enabled) by default. When enabled, cfq favors fairness over throughput by providing a maximum wait time of 300 ms for each process issuing I/O on a device. When this parameter is set to 0 (disabled), target latency is ignored and each process receives a full time slice. quantum This parameter defines the number of I/O requests that cfq sends to one device at one time, essentially limiting queue depth. The default value is 8 requests. The device being used may support greater queue depth, but increasing the value of quantum will also increase latency, especially for large sequential write work loads. slice_async This parameter defines the length of the time slice (in milliseconds) allotted to each process issuing asynchronous I/O requests. The default value is 40 milliseconds. slice_idle This parameter specifies the length of time in milliseconds that cfq idles while waiting for further requests. The default value is 0 (no idling at the queue or service tree level). The default value is ideal for throughput on external RAID storage, but can degrade throughput on internal non-RAID storage as it increases the overall number of seek operations. slice_sync This parameter defines the length of the time slice (in milliseconds) allotted to each process issuing synchronous I/O requests. The default value is 100 ms. 8.4.5.1. Tuning CFQ for Fast Storage The cfq scheduler is not recommended for hardware that does not suffer a large seek penalty, such as fast external storage arrays or solid-state disks. 
If your use case requires cfq to be used on this storage, you will need to edit the following configuration files: Set /sys/block/ devname /queue/iosched/slice_idle to 0 Set /sys/block/ devname /queue/iosched/quantum to 64 Set /sys/block/ devname /queue/iosched/group_idle to 1 8.4.6. Tuning the noop Scheduler The noop I/O scheduler is primarily useful for CPU-bound systems that use fast storage. Also, the noop I/O scheduler is commonly, but not exclusively, used on virtual machines when they are performing I/O operations to virtual disks. There are no tunable parameters specific to the noop I/O scheduler. 8.4.7. Configuring File Systems for Performance This section covers the tuning parameters specific to each file system supported in Red Hat Enterprise Linux 7. Parameters are divided according to whether their values should be configured when you format the storage device, or when you mount the formatted device. Where loss in performance is caused by file fragmentation or resource contention, performance can generally be improved by reconfiguring the file system. However, in some cases the application may need to be altered. In this case, Red Hat recommends contacting Customer Support for assistance. 8.4.7.1. Tuning XFS This section covers some of the tuning parameters available to XFS file systems at format and at mount time. The default formatting and mount settings for XFS are suitable for most workloads. Red Hat recommends changing them only if specific configuration changes are expected to benefit your workload. 8.4.7.1.1. Formatting Options For further details about any of these formatting options, see the man page: Directory block size The directory block size affects the amount of directory information that can be retrieved or modified per I/O operation. The minimum value for directory block size is the file system block size (4 KB by default). The maximum value for directory block size is 64 KB. At a given directory block size, a larger directory requires more I/O than a smaller directory. A system with a larger directory block size also consumes more processing power per I/O operation than a system with a smaller directory block size. It is therefore recommended to have as small a directory and directory block size as possible for your workload. Red Hat recommends the directory block sizes listed in Table 8.1, "Recommended Maximum Directory Entries for Directory Block Sizes" for file systems with no more than the listed number of entries for write-heavy and read-heavy workloads. Table 8.1. Recommended Maximum Directory Entries for Directory Block Sizes Directory block size Max. entries (read-heavy) Max. entries (write-heavy) 4 KB 100,000-200,000 1,000,000-2,000,000 16 KB 100,000-1,000,000 1,000,000-10,000,000 64 KB >1,000,000 >10,000,000 For detailed information about the effect of directory block size on read and write workloads in file systems of different sizes, see the XFS documentation. To configure directory block size, use the mkfs.xfs -l option. See the mkfs.xfs man page for details. Allocation groups An allocation group is an independent structure that indexes free space and allocated inodes across a section of the file system. Each allocation group can be modified independently, allowing XFS to perform allocation and deallocation operations concurrently as long as concurrent operations affect different allocation groups. The number of concurrent operations that can be performed in the file system is therefore equal to the number of allocation groups. 
However, since the ability to perform concurrent operations is also limited by the number of processors able to perform the operations, Red Hat recommends that the number of allocation groups be greater than or equal to the number of processors in the system. A single directory cannot be modified by multiple allocation groups simultaneously. Therefore, Red Hat recommends that applications that create and remove large numbers of files do not store all files in a single directory. To configure allocation groups, use the mkfs.xfs -d option. See the mkfs.xfs man page for details. Growth constraints If you may need to increase the size of your file system after formatting time (either by adding more hardware or through thin-provisioning), you must carefully consider initial file layout, as allocation group size cannot be changed after formatting is complete. Allocation groups must be sized according to the eventual capacity of the file system, not the initial capacity. The number of allocation groups in the fully-grown file system should not exceed several hundred, unless allocation groups are at their maximum size (1 TB). Therefore for most file systems, the recommended maximum growth to allow for a file system is ten times the initial size. Additional care must be taken when growing a file system on a RAID array, as the device size must be aligned to an exact multiple of the allocation group size so that new allocation group headers are correctly aligned on the newly added storage. The new storage must also have the same geometry as the existing storage, since geometry cannot be changed after formatting time, and therefore cannot be optimized for storage of a different geometry on the same block device. Inode size and inline attributes If the inode has sufficient space available, XFS can write attribute names and values directly into the inode. These inline attributes can be retrieved and modified up to an order of magnitude faster than retrieving separate attribute blocks, as additional I/O is not required. The default inode size is 256 bytes. Only around 100 bytes of this is available for attribute storage, depending on the number of data extent pointers stored in the inode. Increasing inode size when you format the file system can increase the amount of space available for storing attributes. Both attribute names and attribute values are limited to a maximum size of 254 bytes. If either name or value exceeds 254 bytes in length, the attribute is pushed to a separate attribute block instead of being stored inline. To configure inode parameters, use the mkfs.xfs -i option. See the mkfs.xfs man page for details. RAID If software RAID is in use, mkfs.xfs automatically configures the underlying hardware with an appropriate stripe unit and width. However, stripe unit and width may need to be manually configured if hardware RAID is in use, as not all hardware RAID devices export this information. To configure stripe unit and width, use the mkfs.xfs -d option. See the mkfs.xfs man page for details. Log size Pending changes are aggregated in memory until a synchronization event is triggered, at which point they are written to the log. The size of the log determines the number of concurrent modifications that can be in-progress at one time. It also determines the maximum amount of change that can be aggregated in memory, and therefore how often logged data is written to disk. A smaller log forces data to be written back to disk more frequently than a larger log. 
However, a larger log uses more memory to record pending modifications, so a system with limited memory will not benefit from a larger log. Logs perform better when they are aligned to the underlying stripe unit; that is, they start and end at stripe unit boundaries. To align logs to the stripe unit, use the mkfs.xfs -d option. See the mkfs.xfs man page for details. To configure the log size, use the following mkfs.xfs option, replacing logsize with the size of the log: For further details, see the mkfs.xfs man page: Log stripe unit Log writes on storage devices that use RAID5 or RAID6 layouts may perform better when they start and end at stripe unit boundaries (are aligned to the underlying stripe unit). mkfs.xfs attempts to set an appropriate log stripe unit automatically, but this depends on the RAID device exporting this information. Setting a large log stripe unit can harm performance if your workload triggers synchronization events very frequently, because smaller writes need to be padded to the size of the log stripe unit, which can increase latency. If your workload is bound by log write latency, Red Hat recommends setting the log stripe unit to 1 block so that unaligned log writes are triggered as possible. The maximum supported log stripe unit is the size of the maximum log buffer size (256 KB). It is therefore possible that the underlying storage may have a larger stripe unit than can be configured on the log. In this case, mkfs.xfs issues a warning and sets a log stripe unit of 32 KB. To configure the log stripe unit, use one of the following options, where N is the number of blocks to use as the stripe unit, and size is the size of the stripe unit in KB. For further details, see the mkfs.xfs man page: 8.4.7.1.2. Mount Options Inode allocation Highly recommended for file systems greater than 1 TB in size. The inode64 parameter configures XFS to allocate inodes and data across the entire file system. This ensures that inodes are not allocated largely at the beginning of the file system, and data is not largely allocated at the end of the file system, improving performance on large file systems. Log buffer size and number The larger the log buffer, the fewer I/O operations it takes to write all changes to the log. A larger log buffer can improve performance on systems with I/O-intensive workloads that do not have a non-volatile write cache. The log buffer size is configured with the logbsize mount option, and defines the maximum amount of information that can be stored in the log buffer; if a log stripe unit is not set, buffer writes can be shorter than the maximum, and therefore there is no need to reduce the log buffer size for synchronization-heavy workloads. The default size of the log buffer is 32 KB. The maximum size is 256 KB and other supported sizes are 64 KB, 128 KB or power of 2 multiples of the log stripe unit between 32 KB and 256 KB. The number of log buffers is defined by the logbufs mount option. The default value is 8 log buffers (the maximum), but as few as two log buffers can be configured. It is usually not necessary to reduce the number of log buffers, except on memory-bound systems that cannot afford to allocate memory to additional log buffers. Reducing the number of log buffers tends to reduce log performance, especially on workloads sensitive to log I/O latency. Delay change logging XFS has the option to aggregate changes in memory before writing them to the log. 
The delaylog parameter allows frequently modified metadata to be written to the log periodically instead of every time it changes. This option increases the potential number of operations lost in a crash and increases the amount of memory used to track metadata. However, it can also increase metadata modification speed and scalability by an order of magnitude, and does not reduce data or metadata integrity when fsync , fdatasync , or sync are used to ensure data and metadata is written to disk. For more information on mount options, see man xfs 8.4.7.2. Tuning ext4 This section covers some of the tuning parameters available to ext4 file systems at format and at mount time. 8.4.7.2.1. Formatting Options Inode table initialization Initializing all inodes in the file system can take a very long time on very large file systems. By default, the initialization process is deferred (lazy inode table initialization is enabled). However, if your system does not have an ext4 driver, lazy inode table initialization is disabled by default. It can be enabled by setting lazy_itable_init to 1). In this case, kernel processes continue to initialize the file system after it is mounted. This section describes only some of the options available at format time. For further formatting parameters, see the mkfs.ext4 man page: 8.4.7.2.2. Mount Options Inode table initialization rate When lazy inode table initialization is enabled, you can control the rate at which initialization occurs by specifying a value for the init_itable parameter. The amount of time spent performing background initialization is approximately equal to 1 divided by the value of this parameter. The default value is 10 . Automatic file synchronization Some applications do not correctly perform an fsync after renaming an existing file, or after truncating and rewriting. By default, ext4 automatically synchronizes files after each of these operations. However, this can be time consuming. If this level of synchronization is not required, you can disable this behavior by specifying the noauto_da_alloc option at mount time. If noauto_da_alloc is set, applications must explicitly use fsync to ensure data persistence. Journal I/O priority By default, journal I/O has a priority of 3 , which is slightly higher than the priority of normal I/O. You can control the priority of journal I/O with the journal_ioprio parameter at mount time. Valid values for journal_ioprio range from 0 to 7 , with 0 being the highest priority I/O. This section describes only some of the options available at mount time. For further mount options, see the mount man page: 8.4.7.3. Tuning Btrfs Starting with Red Hat Enterprise Linux 7.0, Btrfs is provided as a Technology Preview. Tuning should always be done to optimize the system based on its current workload. For information on creation and mounting options, see the chapter on Btrfs in the Red Hat Enterprise Linux 7 Storage Administration Guide . Data Compression The default compression algorithm is zlib, but a specific workload can give a reason to change the compression algorithm. For example, if you have a single thread with heavy file I/O, using the lzo algorithm can be more preferable. Options at mount time are: compress=zlib - the default option with a high compression ratio, safe for older kernels. compress=lzo - compression faster, but lower, than zlib. compress=no - disables compression. compress-force= method - enables compression even for files that do not compress well, such as videos and disk images. 
The available methods are zlib and lzo . Only files created or changed after the mount option is added will be compressed. To compress existing files, run the following command after you replace method with either zlib or lzo : To re-compress the file using lzo , run: 8.4.7.4. Tuning GFS2 This section covers some of the tuning parameters available to GFS2 file systems at format and at mount time. Directory spacing All directories created in the top-level directory of the GFS2 mount point are automatically spaced to reduce fragmentation and increase write speed in those directories. To space another directory like a top-level directory, mark that directory with the T attribute, as shown, replacing dirname with the path to the directory you wish to space: chattr is provided as part of the e2fsprogs package. Reduce contention GFS2 uses a global locking mechanism that can require communication between the nodes of a cluster. Contention for files and directories between multiple nodes lowers performance. You can minimize the risk of cross-cache invalidation by minimizing the areas of the file system that are shared between multiple nodes.
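As a brief illustration of the GFS2 directory spacing step described above, the following shell commands mark a directory with the T attribute and then confirm the attribute with lsattr. The /mnt/gfs2/projects path is a hypothetical example; substitute a directory on your own GFS2 mount point.

    # Mark the directory so that GFS2 spaces it like a top-level directory
    chattr +T /mnt/gfs2/projects
    # Verify that the T attribute is set on the directory itself (-d lists the directory, not its contents)
    lsattr -d /mnt/gfs2/projects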
|
[
"tuned-adm profile name",
"cat /etc/default/grub GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=vg00/lvroot rd.lvm.lv=vg00/lvswap elevator=noop \"",
"grub2-mkconfig -o /etc/grub2.cfg",
"grub2-mkconfig -o /etc/grub2-efi.cfg",
"echo cfq > /sys/block/hda/queue/scheduler",
"man mkfs.xfs",
"mkfs.xfs -l size= logsize",
"man mkfs.xfs",
"mkfs.xfs -l sunit= N b mkfs.xfs -l su= size",
"man mkfs.xfs",
"man mkfs.ext4",
"man mount",
"btrfs filesystem defragment -c method",
"btrfs filesystem defragment -r -v -clzo /",
"chattr +T dirname"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Storage_and_File_Systems-Configuration_tools
|
Chapter 24. Registering the system by using RHEL system roles
|
Chapter 24. Registering the system by using RHEL system roles The rhc RHEL system role enables administrators to automate the registration of multiple systems with Red Hat Subscription Management (RHSM) and Satellite servers. The role also supports Insights-related configuration and management tasks by using Ansible. 24.1. Introduction to the rhc RHEL system role RHEL system role is a set of roles that provides a consistent configuration interface to remotely manage multiple systems. The remote host configuration ( rhc ) RHEL system role enables administrators to easily register RHEL systems to Red Hat Subscription Management (RHSM) and Satellite servers. By default, when you register a system by using the rhc RHEL system role, the system is connected to Insights. Additionally, with the rhc RHEL system role, you can: Configure connections to Red Hat Insights Enable and disable repositories Configure the proxy to use for the connection Configure insights remediations and, auto updates Set the release of the system Configure insights tags 24.2. Registering a system by using the rhc RHEL system role You can register your system to Red Hat by using the rhc RHEL system role. By default, the rhc RHEL system role connects the system to Red Hat Insights when you register it. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: activationKey: <activation_key> username: <username> password: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: To register by using an activation key and organization ID (recommended), use the following playbook: --- - name: Registering system using activation key and organization ID hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: activation_keys: keys: - "{{ activationKey }}" rhc_organization: organizationID To register by using a username and password, use the following playbook: --- - name: Registering system with username and password hosts: managed-node-01.example.com vars_files: - vault.yml vars: rhc_auth: login: username: "{{ username }}" password: "{{ password }}" roles: - role: rhel-system-roles.rhc Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory Ansible Vault 24.3. Registering a system with Satellite by using the rhc RHEL system role When organizations use Satellite to manage systems, it is necessary to register the system through Satellite. You can remotely register your system with Satellite by using the rhc RHEL system role. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: activationKey: <activation_key> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Register to the custom registration server and CDN hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: activation_keys: keys: - "{{ activationKey }}" rhc_organization: organizationID rhc_server: hostname: example.com port: 443 prefix: /rhsm rhc_baseurl: http://example.com/pulp/content Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory Ansible Vault 24.4. Disabling the connection to Insights after the registration by using the rhc RHEL system role When you register a system by using the rhc RHEL system role, the role by default, enables the connection to Red Hat Insights. You can disable it by using the rhc RHEL system role, if not required. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You have registered the system. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Disable Insights connection hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_insights: state: absent Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory 24.5. Enabling repositories by using the rhc RHEL system role You can remotely enable or disable repositories on managed nodes by using the rhc RHEL system role. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You have details of the repositories which you want to enable or disable on the managed nodes. You have registered the system. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: To enable a repository: --- - name: Enable repository hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_repositories: - {name: "RepositoryName", state: enabled} To disable a repository: --- - name: Disable repository hosts: managed-node-01.example.com vars: rhc_repositories: - {name: "RepositoryName", state: disabled} roles: - role: rhel-system-roles.rhc Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory 24.6. 
Setting release versions by using the rhc RHEL system role You can limit the system to use only repositories for a particular minor RHEL version instead of the latest one. This way, you can lock your system to a specific minor RHEL version. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You know the minor RHEL version to which you want to lock the system. Note that you can only lock the system to the RHEL minor version that the host currently runs or a later minor version. You have registered the system. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Set Release hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_release: "8.6" Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory 24.7. Using a proxy server when registering the host by using the rhc RHEL system role If your security restrictions allow access to the Internet only through a proxy server, you can specify the proxy's settings in the playbook when you register the system using the rhc RHEL system role. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: username: <username> password: <password> proxy_username: <proxyusernme> proxy_password: <proxypassword> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: To register to the Red Hat Customer Portal by using a proxy: --- - name: Register using proxy hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: username: "{{ username }}" password: "{{ password }}" rhc_proxy: hostname: proxy.example.com port: 3128 username: "{{ proxy_username }}" password: "{{ proxy_password }}" To remove the proxy server from the configuration of the Red Hat Subscription Manager service: --- - name: To stop using proxy server for registration hosts: managed-node-01.example.com vars_files: - vault.yml vars: rhc_auth: login: username: "{{ username }}" password: "{{ password }}" rhc_proxy: {"state":"absent"} roles: - role: rhel-system-roles.rhc Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory Ansible Vault 24.8. Disabling auto updates of Insights rules by using the rhc RHEL system role You can disable the automatic collection rule updates for Red Hat Insights by using the rhc RHEL system role. By default, when you connect your system to Red Hat Insights, this option is enabled. 
You can disable it by using the rhc RHEL system role. Note If you disable this feature, you risk using outdated rule definition files and not getting the most recent validation updates. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You have registered the system. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: username: <username> password: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Disable Red Hat Insights autoupdates hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: username: "{{ username }}" password: "{{ password }}" rhc_insights: autoupdate: false state: present Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory Ansible Vault 24.9. Disabling Insights remediations by using the rhc RHEL system role You can configure systems to automatically update the dynamic configuration by using the rhc RHEL system role. When you connect your system to Red hat Insights, it is enabled by default. You can disable it, if not required. Note Enabling remediation with the rhc RHEL system role ensures your system is ready to be remediated when connected directly to Red Hat. For systems connected to a Satellite, or Capsule, enabling remediation must be achieved differently. For more information about Red Hat Insights remediations, see Red Hat Insights Remediations Guide . Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You have Insights remediations enabled. You have registered the system. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Disable remediation hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_insights: remediation: absent state: present Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory 24.10. Configuring Insights tags by using the rhc RHEL system role You can use tags for system filtering and grouping. You can also customize tags based on the requirements. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: username: <username> password: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Creating tags hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: username: "{{ username }}" password: "{{ password }}" rhc_insights: tags: group: group-name-value location: location-name-value description: - RHEL8 - SAP sample_key:value state: present Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory Ansible Vault 24.11. Unregistering a system by using the rhc RHEL system role You can unregister the system from Red Hat if you no longer need the subscription service. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The system is already registered. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Unregister the system hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_state: absent Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.rhc/README.md file /usr/share/doc/rhel-system-roles/rhc/ directory
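As an informal verification step, separate from the role itself, you can log in to a managed node after a playbook run and check its registration state with the commands below. This assumes the subscription-manager and insights-client utilities are installed on the managed node, which is normally the case on registered RHEL systems.

    # Confirm whether the host is currently registered and show its identity
    subscription-manager identity
    # Summarize the overall subscription status of the host
    subscription-manager status
    # Check whether the host is registered with Red Hat Insights
    insights-client --status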
|
[
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"activationKey: <activation_key> username: <username> password: <password>",
"--- - name: Registering system using activation key and organization ID hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: activation_keys: keys: - \"{{ activationKey }}\" rhc_organization: organizationID",
"--- - name: Registering system with username and password hosts: managed-node-01.example.com vars_files: - vault.yml vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" roles: - role: rhel-system-roles.rhc",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"activationKey: <activation_key>",
"--- - name: Register to the custom registration server and CDN hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: activation_keys: keys: - \"{{ activationKey }}\" rhc_organization: organizationID rhc_server: hostname: example.com port: 443 prefix: /rhsm rhc_baseurl: http://example.com/pulp/content",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Disable Insights connection hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_insights: state: absent",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Enable repository hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_repositories: - {name: \"RepositoryName\", state: enabled}",
"--- - name: Disable repository hosts: managed-node-01.example.com vars: rhc_repositories: - {name: \"RepositoryName\", state: disabled} roles: - role: rhel-system-roles.rhc",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Set Release hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_release: \"8.6\"",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"username: <username> password: <password> proxy_username: <proxyusernme> proxy_password: <proxypassword>",
"--- - name: Register using proxy hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" rhc_proxy: hostname: proxy.example.com port: 3128 username: \"{{ proxy_username }}\" password: \"{{ proxy_password }}\"",
"--- - name: To stop using proxy server for registration hosts: managed-node-01.example.com vars_files: - vault.yml vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" rhc_proxy: {\"state\":\"absent\"} roles: - role: rhel-system-roles.rhc",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"username: <username> password: <password>",
"--- - name: Disable Red Hat Insights autoupdates hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" rhc_insights: autoupdate: false state: present",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Disable remediation hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_insights: remediation: absent state: present",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <password> Confirm New Vault password: <vault_password>",
"username: <username> password: <password>",
"--- - name: Creating tags hosts: managed-node-01.example.com vars_files: - vault.yml roles: - role: rhel-system-roles.rhc vars: rhc_auth: login: username: \"{{ username }}\" password: \"{{ password }}\" rhc_insights: tags: group: group-name-value location: location-name-value description: - RHEL8 - SAP sample_key:value state: present",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Unregister the system hosts: managed-node-01.example.com roles: - role: rhel-system-roles.rhc vars: rhc_state: absent",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/using-the-rhc-system-role-to-register-the-system_automating-system-administration-by-using-rhel-system-roles
|
Chapter 20. Manipulating the Domain XML
|
Chapter 20. Manipulating the Domain XML This section describes the XML format used to represent domains. Here the term domain refers to the root <domain> element required for all guest virtual machines. The domain XML has two attributes: type specifies the hypervisor used for running the domain. The allowed values are driver-specific, but include KVM and others. id is a unique integer identifier for the running guest virtual machine. Inactive machines have no id value. The sections in this chapter describe the components of the domain XML. Additional chapters in this manual may refer to this chapter when manipulation of the domain XML is required. Note This chapter is based on the libvirt upstream documentation. 20.1. General Information and Metadata This information is in this part of the domain XML: <domain type='xen' id='3'> <name>fv0</name> <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid> <title>A short description - title - of the domain</title> <description>Some human readable description</description> <metadata> <app1:foo xmlns:app1="http://app1.org/app1/">..</app1:foo> <app2:bar xmlns:app2="http://app1.org/app2/">..</app2:bar> </metadata> ... </domain> Figure 20.1. Domain XML metadata The components of this section of the domain XML are as follows: Table 20.1. General metadata elements Element Description <name> Assigns a name for the virtual machine. This name should consist only of alphanumeric characters and must be unique within the scope of a single host physical machine. It is often used to form the filename for storing the persistent configuration files. <uuid> Assigns a globally unique identifier for the virtual machine. The format must be RFC 4122-compliant, for example 3e3fce45-4f53-4fa7-bb32-11f34168b82b. If omitted when defining or creating a new machine, a random UUID is generated. It is also possible to provide the UUID with a sysinfo specification. <title> Provides space for a short description of the domain. The title must not contain any newlines. <description> Unlike the title, this data is not used by libvirt in any way; it can contain any information the user wants to display. <metadata> Can be used by applications to store custom metadata in the form of XML nodes/trees. Applications must use custom namespaces on their XML nodes/trees, with only one top-level element per namespace (if an application needs structure, it should use sub-elements of its namespace element).
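As a quick way to inspect or modify these elements on a running system, the virsh commands below dump and edit the domain XML. They assume a defined guest named fv0, matching the example above; replace the name with one of your own domains.

    # Print the current domain XML, including the name, uuid, title, description, and metadata elements
    virsh dumpxml fv0
    # Open the persistent domain XML in an editor; libvirt validates the XML when the file is saved
    virsh edit fv0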
|
[
"<domain type='xen' id='3'> <name>fv0</name> <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid> <title>A short description - title - of the domain</title> <description>Some human readable description</description> <metadata> <app1:foo xmlns:app1=\"http://app1.org/app1/\">..</app1:foo> <app2:bar xmlns:app2=\"http://app1.org/app2/\">..</app2:bar> </metadata> </domain>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/ch-lib-dom-xml
|
Chapter 3. Knative Serving
|
Chapter 3. Knative Serving Knative Serving supports developers who want to create, deploy, and manage cloud-native applications. It provides a set of objects as Kubernetes custom resource definitions (CRDs) that define and control the behavior of serverless workloads on an OpenShift Container Platform cluster. Developers use these CRDs to create custom resource (CR) instances that can be used as building blocks to address complex use cases. For example: Rapidly deploying serverless containers. Automatically scaling pods. 3.1. Knative Serving resources Service The service.serving.knative.dev CRD automatically manages the life cycle of your workload to ensure that the application is deployed and reachable through the network. It creates a route, a configuration, and a new revision for each change to a user-created service or custom resource. Most developer interactions in Knative are carried out by modifying services. Revision The revision.serving.knative.dev CRD is a point-in-time snapshot of the code and configuration for each modification made to the workload. Revisions are immutable objects and can be retained for as long as necessary. Route The route.serving.knative.dev CRD maps a network endpoint to one or more revisions. You can manage the traffic in several ways, including fractional traffic and named routes. Configuration The configuration.serving.knative.dev CRD maintains the desired state for your deployment. It provides a clean separation between code and configuration. Modifying a configuration creates a new revision.
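As a minimal sketch of how these resources fit together, the kn CLI commands below create a Knative service and then list the revision and route that Knative Serving generates for it. The service name and container image are illustrative only, and the example assumes the kn CLI is installed and logged in to a cluster with OpenShift Serverless.

    # Create a Knative service; Serving creates a configuration, a revision, and a route for it
    kn service create greeter --image gcr.io/knative-samples/helloworld-go
    # List the revision and route generated for the new service
    kn revision list
    kn route list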
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/about_openshift_serverless/about-knative-serving
|
Chapter 19. Using the Eclipse IDE for C and C++ Application Development
|
Chapter 19. Using the Eclipse IDE for C and C++ Application Development Some developers prefer using an IDE instead of an array of command-line tools. Red Hat provides the Eclipse IDE with support for developing C and C++ applications. Using Eclipse to Develop C and C++ Applications A detailed description of the Eclipse IDE and its use for developing C and C++ applications is beyond the scope of this document. See the resources linked below. Additional Resources Using Eclipse Eclipse documentation - C/C++ Development User Guide
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/developer_guide/creating_c_cpp_applications_using-eclipse-c-cpp
|
5.3.10. Removing Volume Groups
|
5.3.10. Removing Volume Groups To remove a volume group that contains no logical volumes, use the vgremove command.
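For example, assuming a volume group named officevg, you might first confirm that no logical volumes remain in it and then remove it; if any logical volumes are still listed, remove them with lvremove before running vgremove.

    # List any logical volumes still present in the volume group
    lvs officevg
    # Remove the empty volume group
    vgremove officevg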
|
[
"vgremove officevg Volume group \"officevg\" successfully removed"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_remove
|
Chapter 15. Cruise Control for cluster rebalancing
|
Chapter 15. Cruise Control for cluster rebalancing Important Cruise Control for cluster rebalancing is a Technology Preview only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can deploy Cruise Control to your AMQ Streams cluster and use it to rebalance the load across the Kafka brokers. Cruise Control is an open source system for automating Kafka operations, such as monitoring cluster workload, rebalancing a cluster based on predefined constraints, and detecting and fixing anomalies. It consists of four components (Load Monitor, Analyzer, Anomaly Detector, and Executor) and a REST API. When AMQ Streams and Cruise Control are both deployed to Red Hat Enterprise Linux, you can access Cruise Control features through the Cruise Control REST API. The following features are supported: Configuring optimization goals and capacity limits Using the /rebalance endpoint to: Generate optimization proposals , as dry runs, based on the configured optimization goals or user-provided goals supplied as request parameters Initiate an optimization proposal to rebalance the Kafka cluster Checking the progress of an active rebalance operation using the /user_tasks endpoint Stopping an active rebalance operation using the /stop_proposal_execution endpoint All other Cruise Control features are not currently supported, including anomaly detection, notifications, write-your-own goals, and changing the topic replication factor. The web UI component (Cruise Control Frontend) is not supported. Cruise Control for AMQ Streams on Red Hat Enterprise Linux is provided as a separate zipped distribution. For more information, see Section 15.2, "Downloading a Cruise Control archive" . 15.1. Why use Cruise Control? Cruise Control reduces the time and effort involved in running an efficient Kafka cluster, with a more evenly balanced workload across the brokers. A typical cluster can become unevenly loaded over time. Partitions that handle large amounts of message traffic might be unevenly distributed across the available brokers. To rebalance the cluster, administrators must monitor the load on brokers and manually reassign busy partitions to brokers with spare capacity. Cruise Control automates this cluster rebalancing process. It constructs a workload model of resource utilization, based on CPU, disk, and network load. Using a set of configurable optimization goals, you can instruct Cruise Control to generate dry run optimization proposals for more balanced partition assignments. After you have reviewed a dry run optimization proposal, you can instruct Cruise Control to initiate a cluster rebalance based on that proposal, or generate a new proposal. When a cluster rebalancing operation is complete, the brokers are used more effectively and the load on the Kafka cluster is more evenly balanced. Additional resources Cruise Control Wiki Section 15.5, "Optimization goals overview" Section 15.6, "Optimization proposals overview" Capacity configuration 15.2. 
Downloading a Cruise Control archive A zipped distribution of Cruise Control for AMQ Streams on Red Hat Enterprise Linux is available for download from the Red Hat Customer Portal . Procedure Download the latest version of the Red Hat AMQ Streams Cruise Control archive from the Red Hat Customer Portal . Create the /opt/cruise-control directory: sudo mkdir /opt/cruise-control Extract the contents of the Cruise Control ZIP file to the new directory: unzip amq-streams-y.y.y-cruise-control-bin.zip -d /opt/cruise-control Change the ownership of the /opt/cruise-control directory to the kafka user: sudo chown -R kafka:kafka /opt/cruise-control 15.3. Deploying the Cruise Control Metrics Reporter Before starting Cruise Control, you must configure the Kafka brokers to use the provided Cruise Control Metrics Reporter. When loaded at runtime, the Metrics Reporter sends metrics to the __CruiseControlMetrics topic, one of three auto-created topics . Cruise Control uses these metrics to create and update the workload model and to calculate optimization proposals. Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. Kafka and ZooKeeper are running. Section 15.2, "Downloading a Cruise Control archive" . Procedure For each broker in the Kafka cluster and one at a time: Stop the Kafka broker: /opt/kafka/bin/kafka-server-stop.sh Copy the Cruise Control Metrics Reporter .jar file to the Kafka libraries directory: cp /opt/cruise-control/libs/ cruise-control-metrics-reporter-y.y.yyy.redhat-0000x.jar /opt/kafka/libs In the Kafka configuration file ( /opt/kafka/config/server.properties ) configure the Cruise Control Metrics Reporter: Add the CruiseControlMetricsReporter class to the metric.reporters configuration option. Do not remove any existing Metrics Reporters. Add the following configuration options and values to the Kafka configuration file: These options enable the Cruise Control Metrics Reporter to create the __CruiseControlMetrics topic with a log cleanup policy of DELETE . For more information, see Auto-created topics and Log cleanup policy for Cruise Control Metrics topic . Configure SSL, if required. In the Kafka configuration file ( /opt/kafka/config/server.properties ) configure SSL between the Cruise Control Metrics Reporter and the Kafka broker by setting the relevant client configuration properties. The Metrics Reporter accepts all standard producer-specific configuration properties with the cruise.control.metrics.reporter prefix. For example: cruise.control.metrics.reporter.ssl.truststore.password . In the Cruise Control properties file ( /opt/cruise-control/config/cruisecontrol.properties ) configure SSL between the Kafka broker and the Cruise Control server by setting the relevant client configuration properties. Cruise Control inherits SSL client property options from Kafka and uses those properties for all Cruise Control server clients. Restart the Kafka broker: /opt/kafka/bin/kafka-server-start.sh Repeat steps 1-5 for the remaining brokers. 15.4. Configuring and starting Cruise Control Configure the properties used by Cruise Control and then start the Cruise Control server using the cruise-control-start.sh script. The server is hosted on a single machine for the whole Kafka cluster. Three topics are auto-created when Cruise Control starts. For more information, see Auto-created topics . Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. 
Section 15.2, "Downloading a Cruise Control archive" Section 15.3, "Deploying the Cruise Control Metrics Reporter" Procedure Edit the Cruise Control properties file ( /opt/cruise-control/config/cruisecontrol.properties ). Configure the properties shown in the following example configuration: # The Kafka cluster to control. bootstrap.servers=localhost:9092 1 # The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 # The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 # The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 # The list of supported goals goals={list of master optimization goals} 5 # The list of supported hard goals hard.goals={List of hard goals} 6 # How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 # The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8 1 Host and port numbers of the Kafka broker (always port 9092). 2 Replication factor of the Kafka metric sample store topic. If you are evaluating Cruise Control in a single-node Kafka and ZooKeeper cluster, set this property to 1. For production use, set this property to 2 or more. 3 The configuration file that sets the maximum capacity limits for broker resources. Use the file that applies to your Kafka deployment configuration. For more information, see Capacity configuration . 4 Comma-separated list of default optimization goals, using fully-qualified domain names (FQDNs). A number of master optimization goals (see 5) are already set as default optimization goals; you can add or remove goals if desired. For more information, see Section 15.5, "Optimization goals overview" . 5 Comma-separated list of master optimization goals, using FQDNs. To completely exclude goals from being used to generate optimization proposals, remove them from the list. For more information, see Section 15.5, "Optimization goals overview" . 6 Comma-separated list of hard goals, using FQDNs. Seven of the master optimization goals are already set as hard goals; you can add or remove goals if desired. For more information, see Section 15.5, "Optimization goals overview" . 7 The interval, in milliseconds, for refreshing the cached optimization proposal that is generated from the default optimization goals. For more information, see Section 15.6, "Optimization proposals overview" . 8 Host and port numbers of the ZooKeeper connection (always port 2181). Start the Cruise Control server. The server starts on port 9092 by default; optionally, specify a different port. cd /opt/cruise-control/ ./bin/cruise-control-start.sh config/cruisecontrol.properties PORT To verify that Cruise Control is running, send a GET request to the /state endpoint of the Cruise Control server: curl 'http://HOST:PORT/kafkacruisecontrol/state' Auto-created topics The following table shows the three topics that are automatically created when Cruise Control starts. These topics are required for Cruise Control to work properly and must not be deleted or changed. Table 15.1. Auto-created topics Auto-created topic Created by Function __CruiseControlMetrics Cruise Control Metrics Reporter Stores the raw metrics from the Metrics Reporter in each Kafka broker. 
__KafkaCruiseControlPartitionMetricSamples Cruise Control Stores the derived metrics for each partition. These are created by the Metric Sample Aggregator . __KafkaCruiseControlModelTrainingSamples Cruise Control Stores the metrics samples used to create the Cluster Workload Model . To ensure that log compaction is disabled in the auto-created topics, make sure that you configure the Cruise Control Metrics Reporter as described in Section 15.3, "Deploying the Cruise Control Metrics Reporter" . Log compaction can remove records that are needed by Cruise Control and prevent it from working properly. Additional resources Log cleanup policy for Cruise Control Metrics topic 15.5. Optimization goals overview To rebalance a Kafka cluster, Cruise Control uses optimization goals to generate optimization proposals . Optimization goals are constraints on workload redistribution and resource utilization across a Kafka cluster. AMQ Streams on Red Hat Enterprise Linux supports all the optimization goals developed in the Cruise Control project. The supported goals, in the default descending order of priority, are as follows: Rack-awareness Minimum number of leader replicas per broker for a set of topics Replica capacity Capacity: Disk capacity, Network inbound capacity, Network outbound capacity CPU capacity Replica distribution Potential network output Resource distribution: Disk utilization distribution, Network inbound utilization distribution, Network outbound utilization distribution Leader bytes-in rate distribution Topic replica distribution CPU usage distribution Leader replica distribution Preferred leader election Kafka Assigner disk usage distribution Intra-broker disk capacity Intra-broker disk usage For more information on each optimization goal, see Goals in the Cruise Control Wiki . Goals configuration in the Cruise Control properties file You configure optimization goals in the cruisecontrol.properties file in the cruise-control/config/ directory. There are configurations for hard optimization goals that must be satisfied, as well as master and default optimization goals. Optional, user-provided optimization goals are set at runtime as parameters in requests to the /rebalance endpoint. Optimization goals are subject to any capacity limits on broker resources. The following sections describe each goal configuration in more detail. Master optimization goals The master optimization goals are available to all users. Goals that are not listed in the master optimization goals are not available for use in Cruise Control operations. The following master optimization goals are preset in the cruisecontrol.properties file, in the goals property, in descending priority order: For simplicity, we recommend that you do not change the preset master optimization goals, unless you need to completely exclude one or more goals from being used to generate optimization proposals. The priority order of the master optimization goals can be modified, if desired, in the configuration for default optimization goals. If you need to modify the preset master optimization goals, specify a list of goals, in descending priority order, in the goals property. Use fully-qualified domain names as shown in the cruisecontrol.properties file. You must specify at least one master goal, or Cruise Control will crash. Note If you change the preset master optimization goals, you must ensure that the configured hard.goals are a subset of the master optimization goals that you configured. 
Otherwise, errors will occur when generating optimization proposals. Hard goals and soft goals Hard goals are goals that must be satisfied in optimization proposals. Goals that are not configured as hard goals are known as soft goals . You can think of soft goals as best effort goals: they do not need to be satisfied in optimization proposals, but are included in optimization calculations. Cruise Control will calculate optimization proposals that satisfy all the hard goals and as many soft goals as possible (in their priority order). An optimization proposal that does not satisfy all the hard goals is rejected by the Analyzer and is not sent to the user. Note For example, you might have a soft goal to distribute a topic's replicas evenly across the cluster (the topic replica distribution goal). Cruise Control will ignore this goal if doing so enables all the configured hard goals to be met. The following master optimization goals are preset as hard goals in the cruisecontrol.properties file, in the hard.goals property: To change the hard goals, edit the hard.goals property and specify the desired goals, using their fully-qualified domain names. Increasing the number of hard goals reduces the likelihood that Cruise Control will calculate and generate valid optimization proposals. Default optimization goals Cruise Control uses the default optimization goals list to generate the cached optimization proposal . For more information, see Section 15.6, "Optimization proposals overview" . You can override the default optimization goals at runtime by setting user-provided optimization goals . The following default optimization goals are preset in the cruisecontrol.properties file, in the default.goals property, in descending priority order: You must specify at least one default goal, or Cruise Control will crash. To modify the default optimization goals, specify a list of goals, in descending priority order, in the default.goals property. Default goals must be a subset of the master optimization goals; use fully-qualified domain names. User-provided optimization goals User-provided optimization goals narrow down the configured default goals for a particular optimization proposal. You can set them, as required, as parameters in HTTP requests to the /rebalance endpoint. For more information, see Section 15.9, "Generating optimization proposals" . User-provided optimization goals can generate optimization proposals for different scenarios. For example, you might want to optimize leader replica distribution across the Kafka cluster without considering disk capacity or disk utilization. So, you send a request to the /rebalance endpoint containing a single goal for leader replica distribution. User-provided optimization goals must: Include all configured hard goals , or an error occurs Be a subset of the master optimization goals To ignore the configured hard goals in an optimization proposal, add the skip_hard_goals_check=true parameter to the request. Additional resources Section 15.8, "Cruise Control configuration" Configurations in the Cruise Control Wiki. 15.6. Optimization proposals overview An optimization proposal is a summary of proposed changes that, if applied, will produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers. Each optimization proposal is based on the set of optimization goals that was used to generate it, subject to any configured capacity limits on broker resources. 
When you make a POST request to the /rebalance endpoint, an optimization proposal is returned in response. Use the information in the proposal to decide whether to initiate a cluster rebalance based on the proposal. Alternatively, you can change the optimization goals and then generate another proposal. By default, optimization proposals are generated as dry runs that must be initiated separately. There is no limit to the number of optimization proposals that can be generated. Cached optimization proposal Cruise Control maintains a cached optimization proposal based on the configured default optimization goals . Generated from the workload model, the cached optimization proposal is updated every 15 minutes to reflect the current state of the Kafka cluster. The most recent cached optimization proposal is returned when the following goal configurations are used: The default optimization goals User-provided optimization goals that can be met by the current cached proposal To change the cached optimization proposal refresh interval, edit the proposal.expiration.ms setting in the cruisecontrol.properties file. Consider a shorter interval for fast changing clusters, although this increases the load on the Cruise Control server. Contents of optimization proposals The following table describes the properties contained in an optimization proposal. Table 15.2. Properties contained in an optimization proposal Property Description n inter-broker replica (y MB) moves n : The number of partition replicas that will be moved between separate brokers. Performance impact during rebalance operation : Relatively high. y MB : The sum of the size of each partition replica that will be moved to a separate broker. Performance impact during rebalance operation : Variable. The larger the number of MBs, the longer the cluster rebalance will take to complete. n intra-broker replica (y MB) moves n : The total number of partition replicas that will be transferred between the disks of the cluster's brokers. Performance impact during rebalance operation : Relatively high, but less than inter-broker replica moves . y MB : The sum of the size of each partition replica that will be moved between disks on the same broker. Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. Moving a large amount of data between disks on the same broker has less impact than between separate brokers (see inter-broker replica moves ). n excluded topics The number of topics excluded from the calculation of partition replica/leader movements in the optimization proposal. You can exclude topics in one of the following ways: In the cruisecontrol.properties file, specify a regular expression in the topics.excluded.from.partition.movement property. In a POST request to the /rebalance endpoint, specify a regular expression in the excluded_topics parameter. Topics that match the regular expression are listed in the response and will be excluded from the cluster rebalance. n leadership moves n : The number of partitions whose leaders will be switched to different replicas. This involves a change to ZooKeeper configuration. Performance impact during rebalance operation : Relatively low. n recent windows n : The number of metrics windows upon which the optimization proposal is based. n% of the partitions covered n% : The percentage of partitions in the Kafka cluster covered by the optimization proposal. 
On-demand Balancedness Score Before (nn.yyy) After (nn.yyy) Measurements of the overall balance of a Kafka Cluster. Cruise Control assigns a Balancedness Score to every optimization goal based on several factors, including priority (the goal's position in the list of default.goals or user-provided goals). The On-demand Balancedness Score is calculated by subtracting the sum of the Balancedness Score of each violated soft goal from 100. The Before score is based on the current configuration of the Kafka cluster. The After score is based on the generated optimization proposal. Additional resources Section 15.5, "Optimization goals overview" . Section 15.9, "Generating optimization proposals" Section 15.10, "Initiating a cluster rebalance" 15.7. Rebalance performance tuning overview You can adjust several performance tuning options for cluster rebalances. These options control how partition replica and leadership movements in a rebalance are executed, as well as the bandwidth that is allocated to a rebalance operation. Partition reassignment commands Optimization proposals are composed of separate partition reassignment commands. When you initiate a proposal, the Cruise Control server applies these commands to the Kafka cluster. A partition reassignment command consists of either of the following types of operations: Partition movement : Involves transferring the partition replica and its data to a new location. Partition movements can take one of two forms: Inter-broker movement: The partition replica is moved to a log directory on a different broker. Intra-broker movement: The partition replica is moved to a different log directory on the same broker. Leadership movement : Involves switching the leader of the partition's replicas. Cruise Control issues partition reassignment commands to the Kafka cluster in batches. The performance of the cluster during the rebalance is affected by the number of each type of movement contained in each batch. To configure partition reassignment commands, see Rebalance tuning options . Replica movement strategies Cluster rebalance performance is also influenced by the replica movement strategy that is applied to the batches of partition reassignment commands. By default, Cruise Control uses the BaseReplicaMovementStrategy , which applies the commands in the order in which they were generated. However, if there are some very large partition reassignments early in the proposal, this strategy can slow down the application of the other reassignments. Cruise Control provides three alternative replica movement strategies that can be applied to optimization proposals: PrioritizeSmallReplicaMovementStrategy : Order reassignments in ascending size. PrioritizeLargeReplicaMovementStrategy : Order reassignments in descending size. PostponeUrpReplicaMovementStrategy : Prioritize reassignments for replicas of partitions which have no out-of-sync replicas. These strategies can be configured as a sequence. The first strategy attempts to compare two partition reassignments using its internal logic. If the reassignments are equivalent, then it passes them to the strategy in the sequence to decide the order, and so on. To configure replica movement strategies, see Rebalance tuning options . Rebalance tuning options Cruise Control provides several configuration options for tuning rebalance parameters. 
These options are set in the following ways: As properties, in the default Cruise Control configuration, in the cruisecontrol.properties file As parameters in POST requests to the /rebalance endpoint The relevant configurations for both methods are summarized in the following table. Table 15.3. Rebalance performance tuning configuration Property and request parameter configurations Description Default Value num.concurrent.partition.movements.per.broker The maximum number of inter-broker partition movements in each partition reassignment batch 5 concurrent_partition_movements_per_broker num.concurrent.intra.broker.partition.movements The maximum number of intra-broker partition movements in each partition reassignment batch 2 concurrent_intra_broker_partition_movements num.concurrent.leader.movements The maximum number of partition leadership changes in each partition reassignment batch 1000 concurrent_leader_movements default.replication.throttle The bandwidth (in bytes per second) to assign to partition reassignment Null (no limit) replication_throttle default.replica.movement.strategies The list of strategies (in priority order) used to determine the order in which partition reassignment commands are executed for generated proposals. There are three strategies: PrioritizeSmallReplicaMovementStrategy , PrioritizeLargeReplicaMovementStrategy , and PostponeUrpReplicaMovementStrategy . For the property, use a comma-separated list of the fully qualified names of the strategy classes (add com.linkedin.kafka.cruisecontrol.executor.strategy. to the start of each class name). For the parameter, use a comma-separated list of the class names of the replica movement strategies. BaseReplicaMovementStrategy replica_movement_strategies Changing the default settings affects the length of time that the rebalance takes to complete, as well as the load placed on the Kafka cluster during the rebalance. Using lower values reduces the load but increases the amount of time taken, and vice versa. Additional resources Configurations in the Cruise Control Wiki. REST APIs in the Cruise Control Wiki. 15.8. Cruise Control configuration The config/cruisecontrol.properties file contains the configuration for Cruise Control. The file consists of properties in one of the following types: String Number Boolean You can specify and configure all the properties listed in the Configurations section of the Cruise Control Wiki. Capacity configuration Cruise Control uses capacity limits to determine if certain resource-based optimization goals are being broken. An attempted optimization fails if one or more of these resource-based goals is set as a hard goal and then broken. This prevents the optimization from being used to generate an optimization proposal. You specify capacity limits for Kafka broker resources in one of the following three .json files in cruise-control/config : capacityJBOD.json : For use in JBOD Kafka deployments (the default file). capacity.json : For use in non-JBOD Kafka deployments where each broker has the same number of CPU cores. capacityCores.json : For use in non-JBOD Kafka deployments where each broker has varying numbers of CPU cores. Set the file in the capacity.config.file property in cruisecontrol.properties . The selected file will be used for broker capacity resolution. 
For example: Capacity limits can be set for the following broker resources in the described units: DISK : Disk storage in MB CPU : CPU utilization as a percentage (0-100) or as a number of cores NW_IN : Inbound network throughput in KB per second NW_OUT : Outbound network throughput in KB per second To apply the same capacity limits to every broker monitored by Cruise Control, set capacity limits for broker ID -1 . To set different capacity limits for individual brokers, specify each broker ID and its capacity configuration. Example capacity limits configuration { "brokerCapacities":[ { "brokerId": "-1", "capacity": { "DISK": "100000", "CPU": "100", "NW_IN": "10000", "NW_OUT": "10000" }, "doc": "This is the default capacity. Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB." }, { "brokerId": "0", "capacity": { "DISK": "500000", "CPU": "100", "NW_IN": "50000", "NW_OUT": "50000" }, "doc": "This overrides the capacity for broker 0." } ] } For more information, see Populating the Capacity Configuration File in the Cruise Control Wiki. Log cleanup policy for Cruise Control Metrics topic It is important that the auto-created __CruiseControlMetrics topic (see auto-created topics ) has a log cleanup policy of DELETE rather than COMPACT . Otherwise, records that are needed by Cruise Control might be removed. As described in Section 15.3, "Deploying the Cruise Control Metrics Reporter" , setting the following options in the Kafka configuration file ensures that the DELETE log cleanup policy is correctly set: cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1 If topic auto-creation is disabled in the Cruise Control Metrics Reporter ( cruise.control.metrics.topic.auto.create=false ), but enabled in the Kafka cluster, then the __CruiseControlMetrics topic is still automatically created by the broker. In this case, you must change the log cleanup policy of the __CruiseControlMetrics topic to DELETE using the kafka-configs.sh tool. Get the current configuration of the __CruiseControlMetrics topic: bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name __CruiseControlMetrics --describe Change the log cleanup policy in the topic configuration: bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete If topic auto-creation is disabled in both the Cruise Control Metrics Reporter and the Kafka cluster, you must create the __CruiseControlMetrics topic manually and then configure it to use the DELETE log cleanup policy using the kafka-configs.sh tool. For more information, see Section 5.9, "Modifying a topic configuration" . Logging configuration Cruise Control uses log4j1 for all server logging. To change the default configuration, edit the log4j.properties file in /opt/cruise-control/config/log4j.properties . You must restart the Cruise Control server before the changes take effect. 15.9. Generating optimization proposals When you make a POST request to the /rebalance endpoint, Cruise Control generates an optimization proposal to rebalance the Kafka cluster, based on the provided optimization goals. The optimization proposal is generated as a dry run , unless the dryrun parameter is supplied and set to false . You can then analyze the information in the dry run optimization proposal and decide whether to initiate it.
Following are the key parameters that you can include in requests to the /rebalance endpoint. For information about all the available parameters, see REST APIs in the Cruise Control Wiki. dryrun type: boolean, default: true Informs Cruise Control whether you want to generate an optimization proposal only ( true ), or generate an optimization proposal and perform a cluster rebalance ( false ). excluded_topics type: regex A regular expression that matches the topics to exclude from the calculation of the optimization proposal. goals type: list of strings, default: the configured default.goals list List of user-provided optimization goals to use to prepare the optimization proposal. If goals are not supplied, the configured default.goals list in the cruisecontrol.properties file is used. skip_hard_goal_check type: boolean, default: false By default, Cruise Control checks that the user-provided optimization goals (in the goals parameter) contain all the configured hard goals (in hard.goals ). A request fails if you supply goals that do not include all of the configured hard.goals . Set skip_hard_goal_check to true if you want to generate an optimization proposal with user-provided optimization goals that do not include all the configured hard.goals . json type: boolean, default: false Controls the type of response returned by the Cruise Control server. If not supplied, or set to false , then Cruise Control returns text formatted for display on the command line. If you want to extract elements of the returned information programmatically, set json=true . This will return JSON formatted text that can be piped to tools such as jq , or parsed in scripts and programs. verbose type: boolean, default: false Controls the level of detail in responses that are returned by the Cruise Control server. Prerequisites Kafka and ZooKeeper are running Cruise Control is running Procedure To generate an optimization proposal formatted for the console, send a POST request to the /rebalance endpoint. To use the configured default.goals : curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance' The cached optimization proposal is immediately returned. Note If NotEnoughValidWindows is returned, Cruise Control has not yet recorded enough metrics data to generate an optimization proposal. Wait a few minutes and then resend the request. To specify user-provided optimization goals instead of the configured default.goals , supply one or more goals in the goals parameter: curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal' If it satisfies the supplied goals, the cached optimization proposal is immediately returned. Otherwise, a new optimization proposal is generated using the supplied goals; this takes longer to calculate. You can enforce this behavior by adding the ignore_proposal_cache=true parameter to the request. To specify user-provided optimization goals that do not include all the configured hard goals, add the skip_hard_goal_check=true parameter to the request: curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true' Review the optimization proposal contained in the response. The properties describe the pending cluster rebalance operation. The proposal contains a high level summary of the proposed optimization, followed by summaries for each default optimization goal, and the expected cluster state after the proposal has executed.
Pay particular attention to the following information: The Cluster load after rebalance summary. If it meets your requirements, you should assess the impact of the proposed changes using the high level summary. n inter-broker replica (y MB) moves indicates how much data will be moved across the network between brokers. The higher the value, the greater the potential performance impact on the Kafka cluster during the rebalance. n intra-broker replica (y MB) moves indicates how much data will be moved within the brokers themselves (between disks). The higher the value, the greater the potential performance impact on individual brokers (although less than that of n inter-broker replica (y MB) moves ). The number of leadership moves. This has a negligible impact on the performance of the cluster during the rebalance. Asynchronous responses The Cruise Control REST API endpoints timeout after 10 seconds by default, although proposal generation continues on the server. A timeout might occur if the most recent cached optimization proposal is not ready, or if user-provided optimization goals were specified with ignore_proposal_cache=true . To allow you to retrieve the optimization proposal at a later time, take note of the request's unique identifier, which is given in the header of responses from the /rebalance endpoint. To obtain the response using curl , specify the verbose ( -v ) option: Here is an example header: * Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: / > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2020 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001) If an optimization proposal is not ready within the timeout, you can re-submit the POST request, this time including the User-Task-ID of the original request in the header: curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance' What to do Section 15.10, "Initiating a cluster rebalance" 15.10. Initiating a cluster rebalance If you are satisfied with an optimization proposal, you can instruct Cruise Control to initiate the cluster rebalance and begin reassigning partitions, as summarized in the proposal. Leave as little time as possible between generating an optimization proposal and initiating the cluster rebalance. If some time has passed since you generated the original optimization proposal, the cluster state might have changed. Therefore, the cluster rebalance that is initiated might be different to the one you reviewed. If in doubt, first generate a new optimization proposal. Only one cluster rebalance, with a status of "Active", can be in progress at a time. Prerequisites You have generated an optimization proposal from Cruise Control. Procedure To execute the most recently generated optimization proposal, send a POST request to the /rebalance endpoint, with the dryrun=false parameter: curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false' Cruise Control initiates the cluster rebalance and returns the optimization proposal. 
Check the changes that are summarized in the optimization proposal. If the changes are not what you expect, you can stop the rebalance . Check the progress of the cluster rebalance using the /user_tasks endpoint. The cluster rebalance in progress has a status of "Active". To view all cluster rebalance tasks executed on the Cruise Control server: curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC To view the status of a particular cluster rebalance task, supply the user-task-ids parameter and the task ID: 15.11. Stopping an active cluster rebalance You can stop the cluster rebalance that is currently in progress. This instructs Cruise Control to finish the current batch of partition reassignments and then stop the rebalance. When the rebalance has stopped, completed partition reassignments have already been applied; therefore, the state of the Kafka cluster is different when compared to before the start of the rebalance operation. If further rebalancing is required, you should generate a new optimization proposal. Note The performance of the Kafka cluster in the intermediate (stopped) state might be worse than in the initial state. Prerequisites A cluster rebalance is in progress (indicated by a status of "Active"). Procedure Send a POST request to the /stop_proposal_execution endpoint: Additional resources Section 15.9, "Generating optimization proposals"
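The following command-line sketch ties the /rebalance and /user_tasks requests from this chapter together by capturing the User-Task-ID header and reusing it to retrieve and track a proposal. It is illustrative only: the CC_HOST value is a placeholder for your own Cruise Control server address, and the grep/awk header parsing is one possible approach rather than part of the product.
# Placeholder address; replace with your Cruise Control host and port.
CC_HOST=cruise-control-server:9090

# Request a dry-run optimization proposal and capture the User-Task-ID response header.
TASK_ID=$(curl -s -o /dev/null -D - -X POST "http://${CC_HOST}/kafkacruisecontrol/rebalance" \
  | grep -i '^User-Task-ID:' | awk '{print $2}' | tr -d '\r')
echo "User-Task-ID: ${TASK_ID}"

# If the proposal was not ready before the REST API timeout, re-submit the request
# with the same task ID to retrieve the proposal once it has been calculated.
curl -v -X POST -H "User-Task-ID: ${TASK_ID}" "http://${CC_HOST}/kafkacruisecontrol/rebalance"

# Check the status of the task at any time.
curl "http://${CC_HOST}/kafkacruisecontrol/user_tasks?user_task_ids=${TASK_ID}"
If you later execute a proposal with dryrun=false, the same /user_tasks request shows the rebalance task with a status of "Active" until the rebalance completes.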
|
[
"sudo mkdir /opt/cruise-control",
"unzip amq-streams-y.y.y-cruise-control-bin.zip -d /opt/cruise-control",
"sudo chown -R kafka:kafka /opt/cruise-control",
"/opt/kafka/bin/kafka-server-stop.sh",
"cp /opt/cruise-control/libs/ cruise-control-metrics-reporter-y.y.yyy.redhat-0000x.jar /opt/kafka/libs",
"metric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter",
"cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1",
"/opt/kafka/bin/kafka-server-start.sh",
"The Kafka cluster to control. bootstrap.servers=localhost:9092 1 The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 The list of supported goals goals={list of master optimization goals} 5 The list of supported hard goals hard.goals={List of hard goals} 6 How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8",
"cd /opt/cruise-control/ ./bin/cruise-control-start.sh config/cruisecontrol.properties PORT",
"curl 'http://HOST:PORT/kafkacruisecontrol/state'",
"RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal; PreferredLeaderElectionGoal",
"RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal",
"RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal",
"capacity.config.file=config/capacityJBOD.json",
"{ \"brokerCapacities\":[ { \"brokerId\": \"-1\", \"capacity\": { \"DISK\": \"100000\", \"CPU\": \"100\", \"NW_IN\": \"10000\", \"NW_OUT\": \"10000\" }, \"doc\": \"This is the default capacity. Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB.\" }, { \"brokerId\": \"0\", \"capacity\": { \"DISK\": \"500000\", \"CPU\": \"100\", \"NW_IN\": \"50000\", \"NW_OUT\": \"50000\" }, \"doc\": \"This overrides the capacity for broker 0.\" } ] }",
"bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name __CruiseControlMetrics --describe",
"bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true'",
"curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'",
"* Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: / > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2020 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001)",
"curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance'",
"curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false'",
"curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC",
"curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks?user_task_ids=c459316f-9eb5-482f-9d2d-97b5a4cd294d'",
"curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/stop_proposal_execution'"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/assembly-cc-cluster-rebalancing-str
|
Chapter 2. Scaling
|
Chapter 2. Scaling After starting Red Hat build of Keycloak, consider adapting your instance to the required load using these scaling and tuning guidelines: minimize resource utilization achieve target response times minimize database pool contention resolve out of memory errors, or excessive garbage collection overhead provide higher availability via horizontal scaling 2.1. Vertical Scaling As you monitor your Red Hat build of Keycloak workload, check to see if the CPU or memory is under or over utilized. Consult Concepts for sizing CPU and memory resources to better tune the resources available to the Java Virtual Machine (JVM). Before increasing the amount of memory available to the JVM, in particular when experiencing an out of memory error, it is best to determine what is contributing to the increased footprint using a heap dump. Excessive response times may also indicate the HTTP work queue is too large and tuning for load shedding would be better than simply providing more memory. See the following section. 2.1.1. Common Tuning Options Red Hat build of Keycloak automatically adjusts the number of used threads based upon how many cores you make available. Manually changing the thread count can improve overall throughput. For more details, see Concepts for configuring thread pools . However, changing the thread count must be done in conjunction with other JVM resources, such as database connections; otherwise, you may be moving a bottleneck somewhere else. For more details, see Concepts for database connection pools . To limit memory utilization of queued work and to provide for load shedding, see Concepts for configuring thread pools . If you are experiencing timeouts in obtaining database connections, you should consider increasing the number of connections available. For more details, see Concepts for database connection pools . 2.1.2. Vertical Autoscaling Some platforms, such as Kubernetes, provide mechanisms to vertically autoscale. Vertical autoscaling is not recommended for Red Hat build of Keycloak if it requires restarting the server instance, which is currently the case for Java on Kubernetes. You can consider instead providing higher CPU and/or memory limits to allow your JVM to adapt within those limits as needed. 2.2. Horizontal Scaling A single Red Hat build of Keycloak instance is susceptible to availability issues. If the instance goes down, you experience a full outage until another instance comes up. By running two or more cluster members on different machines, you greatly increase the availability of Red Hat build of Keycloak. A single JVM has a limit on how many concurrent requests it can handle. Additional server instances can provide roughly linear scaling of throughput until associated resources, such as the database or distributed caching, limit that scaling. In general, consider allowing the Red Hat build of Keycloak Operator to handle horizontal scaling concerns. When using the Operator, set the Keycloak custom resource spec.instances as desired to horizontally scale. For more details, see Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator . If you are not using the Operator, please review the following: Higher availability is possible if your instances are on separate machines. On Kubernetes, use Pod anti-affinity to enforce this. Use distributed caching; for multi-site clusters, use external caching for cluster members to share the same state. For details on the relevant configuration, see Configuring distributed caches .
The embedded Infinispan cache has horizontal scaling considerations, including: Your instances need a way to discover each other. For more information, see discovery in Configuring distributed caches . This cache is not optimal for clusters that span multiple availability zones, which are also called stretch clusters. For the embedded Infinispan cache, keep all instances in one availability zone. The goal is to avoid unnecessary communication round-trips, which would amplify response times. On Kubernetes, use Pod affinity to enforce this grouping of Pods. This cache does not gracefully handle multiple members joining or leaving concurrently. In particular, members leaving at the same time can lead to data loss. On Kubernetes, you can use a StatefulSet with the default serial handling to ensure Pods are started and stopped sequentially. To avoid losing service availability when a whole site is unavailable, see the high availability guide for more information on a multi-site deployment. See Multi-site deployments . 2.2.1. Horizontal Autoscaling Horizontal autoscaling allows for adding or removing Red Hat build of Keycloak instances on demand. Keep in mind that startup times will not be instantaneous and that optimized images should be used to minimize the start time. When using the embedded Infinispan cache cluster, dynamically adding or removing cluster members requires Infinispan to perform a rebalancing of the Infinispan caches, which can get expensive if many entries exist in those caches. To minimize this time, the number of entries in session-related caches is limited to 10000 by default. Note that this optimization is possible only if the persistent-user-sessions feature is not explicitly disabled in your configuration. On Kubernetes, the Keycloak custom resource is scalable, meaning that it can be targeted by the built-in autoscaler.
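As a rough sketch of the Operator-based approach described above, the following Keycloak custom resource fragment scales a deployment to three instances by setting spec.instances. Apart from the spec.instances field, the values shown here (the apiVersion, the resource name, and any other fields your CR requires) are assumptions that depend on your environment and Operator version.
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-keycloak
spec:
  instances: 3   # number of Red Hat build of Keycloak cluster members
Because the resource is scalable, an autoscaler, or a manual kubectl scale command, can adjust the instance count in the same way, assuming the scale subresource is available in your installation.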
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/getting_started_guide/getting-started-scaling-and-tuning-
|
Chapter 3. Viewing your subscription inventory
|
Chapter 3. Viewing your subscription inventory The All subscriptions table on the Subscription Inventory page contains detailed information about all of the subscriptions that your organization owns for an account. The account number is displayed in the table title. For example, if your account number is 12345, the All subscriptions table title displays as All subscriptions for account 12345 . The table columns show the following information for each subscription in your account: Name The name of the subscription and the product it applies to. SKU The combination of letters and numbers that represents the subscription stock keeping unit (SKU). Quantity The number of active subscriptions for a SKU. Service level The type of service level agreement (SLA) for the subscription, as defined within the terms of the contract. Examples include Premium, Standard, Self-Support or Layered. Prerequisites You are logged in to your Red Hat Hybrid Cloud Console account at console.redhat.com. You have the Subscriptions user role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To view your subscription inventory, perform the following steps: From the Home page, locate Red Hat Insights and click RHEL . From the Red Hat Insights menu, click Business > Subscriptions > Subscription Inventory . From the Subscription Inventory page, scroll to the All subscriptions table. Note The table displays up to 10 rows per page by default. You can customize the table to show up to 100 rows per page. Optional: Click the rows per page arrow to select the number of rows that you want to view on each page. Note Your subscription inventory might span more than one page. Use the previous and next arrows to move between pages, if applicable. 3.1. Filtering by subscription status You can use the tiles to filter the All subscriptions table by subscription status (Active, Expired, Expiring soon, or Future dated). To filter your subscriptions by status, click a tile. For example, clicking the Active tile displays only active subscriptions in the table. The applied filter displays in the table header. To clear the filter, click Clear filters or close the applied filter in the table header. Note Currently, you can apply only one status filter at a time. However, you can use a status filter in conjunction with the search filter in the All subscriptions table. 3.2. Filtering by subscription name or SKU You can use the search bar in the table header to filter the subscriptions in your inventory by name or SKU. For example, if you type "A" into the search bar, the table shows only the subscriptions in your inventory that contain the letter "A" in either the Name or SKU field. Note The search is not case sensitive. Optionally, you can use the search filter in conjunction with the tile filter to filter only subscriptions of the selected status by name or SKU. For example, you can apply the Active tile filter and then type a character into the search bar to view only active subscriptions that contain the typed character in the name, the SKU, or both.
| null |
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/viewing_and_managing_your_subscription_inventory_on_the_hybrid_cloud_console/proc-viewing-sub-inventory
|
6.3. Colocation of Resources
|
6.3. Colocation of Resources A colocation constraint determines that the location of one resource depends on the location of another resource. There is an important side effect of creating a colocation constraint between two resources: it affects the order in which resources are assigned to a node. This is because you cannot place resource A relative to resource B unless you know where resource B is. So when you are creating colocation constraints, it is important to consider whether you should colocate resource A with resource B or resource B with resource A. Another thing to keep in mind when creating colocation constraints is that, assuming resource A is colocated with resource B, the cluster will also take into account resource A's preferences when deciding which node to choose for resource B. The following command creates a colocation constraint. For information on master and slave resources, see Section 8.2, "Multi-State Resources: Resources That Have Multiple Modes" . Table 6.3, "Properties of a Colocation Constraint" summarizes the properties and options for configuring colocation constraints. Table 6.3. Properties of a Colocation Constraint Field Description source_resource The colocation source. If the constraint cannot be satisfied, the cluster may decide not to allow the resource to run at all. target_resource The colocation target. The cluster will decide where to put this resource first and then decide where to put the source resource. score Positive values indicate the resource should run on the same node. Negative values indicate the resources should not run on the same node. A value of + INFINITY , the default value, indicates that the source_resource must run on the same node as the target_resource . A value of - INFINITY indicates that the source_resource must not run on the same node as the target_resource . 6.3.1. Mandatory Placement Mandatory placement occurs any time the constraint's score is +INFINITY or -INFINITY . In such cases, if the constraint cannot be satisfied, then the source_resource is not permitted to run. For score=INFINITY , this includes cases where the target_resource is not active. If you need myresource1 to always run on the same machine as myresource2 , you would add the following constraint: Because INFINITY was used, if myresource2 cannot run on any of the cluster nodes (for whatever reason), then myresource1 will not be allowed to run. Alternatively, you may want to configure the opposite, a cluster in which myresource1 cannot run on the same machine as myresource2 . In this case, use score=-INFINITY . Again, by specifying -INFINITY , the constraint is binding. So if the only place left to run is where myresource2 already is, then myresource1 may not run anywhere.
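For contrast with the mandatory placement examples above, the following commands are an illustrative sketch of an advisory colocation constraint, where a finite score expresses a preference that the cluster weighs against other constraints rather than an absolute requirement. The resource names and the score value of 200 are examples only.
# Prefer, but do not require, running myresource1 on the same node as myresource2.
pcs constraint colocation add myresource1 with myresource2 score=200

# List the constraints that are currently configured.
pcs constraint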
|
[
"pcs constraint colocation add [master|slave] source_resource with [master|slave] target_resource [ score ] [ options ]",
"pcs constraint colocation add myresource1 with myresource2 score=INFINITY",
"pcs constraint colocation add myresource1 with myresource2 score=-INFINITY"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-colocationconstraints-haar
|
function::print_ubacktrace_brief
|
function::print_ubacktrace_brief Name function::print_ubacktrace_brief - Print stack back trace for current user-space task. Synopsis Arguments None Description Equivalent to print_ubacktrace , but output for each symbol is shorter (just name and offset, or just the hex address if no symbol could be found). Note To get (full) backtraces for user space applications and shared libraries not mentioned in the current script, run stap with -d /path/to/exe-or-so and/or add --ldd to load all needed unwind data.
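The following one-liner is a minimal sketch of how the function might be used; the choice of /bin/ls and the open system call is arbitrary, and the probe points available depend on your kernel and tapset versions. The -d and --ldd options are included as described in the Note above.
# Print a brief user-space backtrace the first time the target process enters open().
stap --ldd -d /bin/ls -e '
probe syscall.open {
  if (pid() == target()) {
    print_ubacktrace_brief()
    exit()
  }
}' -c "ls /"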
|
[
"print_ubacktrace_brief()"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-print-ubacktrace-brief
|
Chapter 6. Jobs
|
Chapter 6. Jobs Troubleshoot issues with jobs. 6.1. Issue - Jobs are failing with "ERROR! couldn't resolve module/action" error message Jobs are failing with the error message "ERROR! couldn't resolve module/action 'module name'. This often indicates a misspelling, missing collection, or incorrect module path". This error can happen when the collection associated with the module is missing from the execution environment. The recommended resolution is to create a custom execution environment and add the required collections inside of that execution environment. For more information about creating an execution environment, see Using Ansible Builder in Creating and using execution environments . Alternatively, you can complete the following steps: Procedure Create a collections folder inside of the project repository. Add a requirements.yml file inside of the collections folder and add the collection: collections: - <collection_name> 6.2. Issue - Jobs are failing with "Timeout (12s) waiting for privilege escalation prompt" error message This error can happen when the timeout value is too small, causing the job to stop before completion. The default timeout value for connection plugins is 10 . To resolve the issue, increase the timeout value by completing one of the following procedures. Note The following changes will affect all of the jobs in automation controller. To use a timeout value for a specific project, add an ansible.cfg file in the root of the project directory and add the timeout parameter value to that ansible.cfg file. Add ANSIBLE_TIMEOUT as an environment variable in the automation controller UI Go to automation controller. From the navigation panel, select Settings Jobs settings . Under Extra Environment Variables add the following: { "ANSIBLE_TIMEOUT": 60 } Add a timeout value in the [defaults] section of the ansible.cfg file by using the CLI Edit the /etc/ansible/ansible.cfg file and add the following: [defaults] timeout = 60 Running ad hoc commands with a timeout To run an ad hoc playbook in the command line, add the --timeout flag to the ansible-playbook command, for example: # ansible-playbook --timeout=60 <your_playbook.yml> Additional resources For more information about the DEFAULT_TIMEOUT configuration setting, see DEFAULT_TIMEOUT in the Ansible Community Documentation. 6.3. Issue - Jobs in automation controller are stuck in a pending state After launching jobs in automation controller, the jobs stay in a pending state and do not start. There are a few reasons jobs can become stuck in a pending state. For more information about troubleshooting this issue, see Playbook stays in pending in Configuring automation execution Cancel all pending jobs Run the following commands to list all of the pending jobs: # awx-manage shell_plus >>> UnifiedJob.objects.filter(status='pending') Run the following command to cancel all of the pending jobs: >>> UnifiedJob.objects.filter(status='pending').update(status='canceled') Cancel a single job by using a job id To cancel a specific job, run the following commands, replacing <job_id> with the job id to cancel: # awx-manage shell_plus >>> UnifiedJob.objects.filter(id=_<job_id>_).update(status='canceled') 6.4. 
Issue - Jobs in private automation hub are failing with "denied: requested access to the resource is denied, unauthorized: Insufficient permissions" error message Jobs are failing with the error message "denied: requested access to the resource is denied, unauthorized: Insufficient permissions" when using an execution environment in private automation hub. This issue happens when your private automation hub is protected with a password or token and the registry credential is not assigned to the execution environment. Procedure Go to automation controller. From the navigation panel, select Administration Execution Environments . Click the execution environment assigned to the job template that is failing. Click Edit . Assign the appropriate Registry credential from your private automation hub to the execution environment. Additional resources For information about creating new credentials in automation controller, see Creating new credentials in Using automation execution .
|
[
"collections: - <collection_name>",
"{ \"ANSIBLE_TIMEOUT\": 60 }",
"[defaults] timeout = 60",
"ansible-playbook --timeout=60 <your_playbook.yml>",
"awx-manage shell_plus",
">>> UnifiedJob.objects.filter(status='pending')",
">>> UnifiedJob.objects.filter(status='pending').update(status='canceled')",
"awx-manage shell_plus",
">>> UnifiedJob.objects.filter(id=_<job_id>_).update(status='canceled')"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/troubleshooting_ansible_automation_platform/troubleshoot-jobs
|
7.62. glibc
|
7.62. glibc 7.62.1. RHBA-2015:1286 - glibc bug fix and enhancement update Updated glibc packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The glibc packages provide the standard C libraries (libc), POSIX thread libraries (libpthread), standard math libraries (libm), and the name server cache daemon (nscd) used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly. Bug Fixes BZ# 859965 This update of the name service cache daemon (nscd) adds a system of inotify-based monitoring and stat-based backup monitoring for nscd configuration files, so that nscd now correctly detects changes to its configuration and reloads the data. This prevents nscd from returning stale data. BZ# 1085312 A defect in the library could cause the list of returned netgroups to be truncated if one of the netgroups in the tree was empty. This error could result in application crashes or undefined behavior. The library has been fixed to handle empty netgroups correctly and to return the complete list of requested netgroups. BZ# 1088301 The gethostby* functions generated syslog messages for every unrecognized record type, even if the resolver options explicitly selected extra data. The library has been fixed to avoid generating logging messages when the user explicitly or implicitly requested the data. The number of syslog messages in DNSSEC-enabled systems related to calls to gethostby* is now reduced. BZ# 1091915 A defect in glibc could cause uninitialized bytes to be sent via a socket between the nscd client and server. When the application was analyzed using Valgrind, it reported a problem which could be confusing and misleading. The library has been fixed to initialize all bytes sent via the socket operation. Valgrind no longer reports problems with the nscd client. BZ# 1116050 A defect in the reinitialization of thread local structures could result in a too-small thread local storage structure which could lead to unexpected termination of an application. The thread library has been fixed to reinitialize the thread local storage structure correctly to prevent applications from crashing when they reuse thread stacks. BZ# 1124204 The times function provided by glibc did not allow users to use a NULL value for the buffer, and applications passing a NULL could terminate unexpectedly. The library has been fixed to accept a NULL value for the buffer and return the expected results from the kernel system call. BZ# 1138769 The getaddrinfo(3) function has been improved to return a valid response when an address lookup using the getaddrinfo(3) function for AF_UNSPEC is performed on a defective DNS server. BZ# 1159167 When using NetApp filers as NFS servers, the rpc.statd service could terminate unexpectedly. The glibc API segmentation violation in the server Remote Procedure Call (RPC) code that was causing this crash has been corrected, and the problem no longer occurs. BZ# 1217186 When a system with a large .rhosts file used the rsh shell to connect to a rlogind server, the authentication could time out. This update adjusts the ruserok(3) function, so that it first performs user matching in order to avoid demanding DNS lookups. As a result, rlogind authentication with large .rhosts files is faster and no longer times out. 
Enhancements BZ# 1154563 The dlopen(3) function of the library, which is used to load dynamic libraries, can now be called recursively (a dlopen(3) function can be called while another dlopen(3) function is already in process). This update prevents crashes or aborts in applications that need to use the dlopen(3) function in this way. BZ# 1195453 The glibc dynamic loader now supports Intel AVX-512 extensions. This update allows the dynamic loader to save and restore AVX-512 registers as required, thus preventing AVX-512-enabled applications from failing because of audit modules that also use AVX-512. Users of glibc are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-glibc
|
10.3.3. Establishing a Mobile Broadband Connection
|
10.3.3. Establishing a Mobile Broadband Connection You can use NetworkManager 's mobile broadband connection abilities to connect to the following 2G and 3G services: 2G - GPRS ( General Packet Radio Service ) or EDGE ( Enhanced Data Rates for GSM Evolution ) 3G - UMTS ( Universal Mobile Telecommunications System ) or HSPA ( High Speed Packet Access ) Your computer must have a mobile broadband device (modem), which the system has discovered and recognized, in order to create the connection. Such a device may be built into your computer (as is the case on many notebooks and netbooks), or may be provided separately as internal or external hardware. Examples include PC card, USB Modem or Dongle, mobile or cellular telephone capable of acting as a modem. Procedure 10.3. Adding a New Mobile Broadband Connection You can configure a mobile broadband connection by opening the Network Connections window, clicking Add , and selecting Mobile Broadband from the list. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Click the Add button to open the selection list. Select Mobile Broadband and then click Create . The Set up a Mobile Broadband Connection assistant appears. Under Create a connection for this mobile broadband device , choose the 2G- or 3G-capable device you want to use with the connection. If the dropdown menu is inactive, this indicates that the system was unable to detect a device capable of mobile broadband. In this case, click Cancel , ensure that you do have a mobile broadband-capable device attached and recognized by the computer and then retry this procedure. Click the Forward button. Select the country where your service provider is located from the list and click the Forward button. Select your provider from the list or enter it manually. Click the Forward button. Select your payment plan from the dropdown menu and confirm the Access Point Name ( APN ) is correct. Click the Forward button. Review and confirm the settings and then click the Apply button. Edit the mobile broadband-specific settings by referring to the Configuring the Mobile Broadband Tab description below . Procedure 10.4. Editing an Existing Mobile Broadband Connection Follow these steps to edit an existing mobile broadband connection. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Select the connection you want to edit and click the Edit button. Select the Mobile Broadband tab. Configure the connection name, auto-connect behavior, and availability settings. Three settings in the Editing dialog are common to all connection types: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the Mobile Broadband section of the Network Connections window. Connect automatically - Check this box if you want NetworkManager to auto-connect to this connection when it is available. See Section 10.2.3, "Connecting to a Network Automatically" for more information. Available to all users - Check this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 10.2.4, "User and System Connections" for details. Edit the mobile broadband-specific settings by referring to the Configuring the Mobile Broadband Tab description below . 
Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your mobile broadband connection, click the Apply button and NetworkManager will immediately save your customized configuration. Given a correct configuration, you can connect to your new or customized connection by selecting it from the NetworkManager Notification Area applet. See Section 10.2.1, "Connecting to a Network" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network Connections window and clicking Edit to return to the Editing dialog. Then, to configure: Point-to-point settings for the connection, click the PPP Settings tab and proceed to Section 10.3.9.3, "Configuring PPP (Point-to-Point) Settings" ; IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 10.3.9.4, "Configuring IPv4 Settings" ; or, IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 10.3.9.5, "Configuring IPv6 Settings" . Configuring the Mobile Broadband Tab If you have already added a new mobile broadband connection using the assistant (see Procedure 10.3, "Adding a New Mobile Broadband Connection" for instructions), you can edit the Mobile Broadband tab to disable roaming if home network is not available, assign a network ID, or instruct NetworkManager to prefer a certain technology (such as 3G or 2G) when using the connection. Number The number that is dialed to establish a PPP connection with the GSM-based mobile broadband network. This field may be automatically populated during the initial installation of the broadband device. You can usually leave this field blank and enter the APN instead. Username Enter the user name used to authenticate with the network. Some providers do not provide a user name, or accept any user name when connecting to the network. Password Enter the password used to authenticate with the network. Some providers do not provide a password, or accept any password. APN Enter the Access Point Name ( APN ) used to establish a connection with the GSM-based network. Entering the correct APN for a connection is important because it often determines: how the user is billed for their network usage; and/or whether the user has access to the Internet, an intranet, or a subnetwork. Network ID Entering a Network ID causes NetworkManager to force the device to register only to a specific network. This can be used to ensure the connection does not roam when it is not possible to control roaming directly. Type Any - The default value of Any leaves the modem to select the fastest network. 3G (UMTS/HSPA) - Force the connection to use only 3G network technologies. 2G (GPRS/EDGE) - Force the connection to use only 2G network technologies. Prefer 3G (UMTS/HSPA) - First attempt to connect using a 3G technology such as HSPA or UMTS, and fall back to GPRS or EDGE only upon failure. Prefer 2G (GPRS/EDGE) - First attempt to connect using a 2G technology such as GPRS or EDGE, and fall back to HSPA or UMTS only upon failure. Allow roaming if home network is not available Uncheck this box if you want NetworkManager to terminate the connection rather than transition from the home network to a roaming one, thereby avoiding possible roaming charges. If the box is checked, NetworkManager will attempt to maintain a good connection by transitioning from the home network to a roaming one, and vice versa. 
PIN If your device's SIM ( Subscriber Identity Module ) is locked with a PIN ( Personal Identification Number ), enter the PIN so that NetworkManager can unlock the device. NetworkManager must unlock the SIM if a PIN is required in order to use the device for any purpose.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-establishing_a_mobile_broadband_connection
|
6.2. Plan and Configure Security Updates
|
6.2. Plan and Configure Security Updates All software contains bugs. Often, these bugs can result in a vulnerability that can expose your system to malicious users. Unpatched systems are a common cause of computer intrusions. You should have a plan to install security patches in a timely manner to close those vulnerabilities so they cannot be exploited. For home users, security updates should be installed as soon as possible. Configuring automatic installation of security updates is one way to avoid having to remember, but does carry a slight risk that something can cause a conflict with your configuration or with other software on the system. For business or advanced home users, security updates should be tested and scheduled for installation. Additional controls will need to be used to protect the system during the time between the patch release and its installation on the system. These controls would depend on the exact vulnerability, but could include additional firewall rules, the use of external firewalls, or changes in software settings.
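As one possible illustration on Red Hat Enterprise Linux 6, the following commands show how security errata can be reviewed, applied, and optionally automated. They assume the yum-plugin-security and yum-cron packages are available in your configured repositories; adjust them to match your own patching policy.
# List and apply only packages with security errata.
yum install yum-plugin-security
yum --security check-update
yum --security update

# Optional: enable unattended nightly updates. Weigh this against the
# configuration-conflict risk described above before using it on production systems.
yum install yum-cron
chkconfig yum-cron on
service yum-cron start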
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-software_maintenance-plan_and_configure_security_updates
|
Installing an on-premise cluster with the Agent-based Installer
|
Installing an on-premise cluster with the Agent-based Installer OpenShift Container Platform 4.13 Installing an on-premise OpenShift Container Platform cluster with the Agent-based Installer Red Hat OpenShift Documentation Team
|
[
"apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1",
"cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF",
"apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5",
"apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: \"150\" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254",
"apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 1 5 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{\"auths\": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14",
"networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5",
"- name: master-0 role: master rootDeviceHints: deviceName: \"/dev/sda\"",
"oc adm release mirror",
"To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release",
"spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\"",
"mkdir ~/<directory_name>",
"cat << EOF > ./my-cluster/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: fd2e:6f44:5dd8:c956::/120 networkType: OVNKubernetes 3 serviceNetwork: - fd02::/112 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 additionalTrustBundle: | 7 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 8 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev EOF",
"cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: fd2e:6f44:5dd8:c956::50 1 EOF",
"openshift-install --dir <install_directory> agent create image",
"./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \\ 1 --log-level=info 2",
"................................................................ ................................................................ INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete",
"openshift-install --dir <install_directory> agent wait-for install-complete 1",
"................................................................ ................................................................ INFO Cluster is installed INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com",
"./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug",
"ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded",
"ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz",
"./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug",
"export KUBECONFIG=<install_directory>/auth/kubeconfig",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz <must_gather_directory>",
"sudo dnf install /usr/bin/nmstatectl -y",
"mkdir ~/<directory_name>",
"cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 4 sshKey: '<ssh_pub_key>' 5 EOF",
"networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5",
"cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF",
"openshift-install agent create cluster-manifests --dir <installation_directory>",
"cd <installation_directory>/cluster-manifests",
"cd ../mirror",
"openshift-install --dir <install_directory> agent create image",
"./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \\ 1 --log-level=info 2",
"................................................................ ................................................................ INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete",
"openshift-install --dir <install_directory> agent wait-for install-complete 1",
"................................................................ ................................................................ INFO Cluster is installed INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com",
"apiVIP: 192.168.11.3 ingressVIP: 192.168.11.4 clusterDeploymentRef: name: mycluster imageSetRef: name: openshift-4.13 networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes",
"apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: test-agent-cluster-install namespace: cluster0 spec: clusterDeploymentRef: name: ostest imageSetRef: name: openshift-4.13 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <ssh_public_key>",
"apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ostest namespace: cluster0 spec: baseDomain: test.metalkube.org clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: test-agent-cluster-install version: v1beta1 clusterName: ostest controlPlaneConfig: servingCertificates: {} platform: agentBareMetal: agentSelector: matchLabels: bla: aaa pullSecretRef: name: pull-secret",
"apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.13 spec: releaseImage: registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2022-06-06-025509",
"apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: cluster0 spec: clusterRef: name: ostest namespace: cluster0 cpuArchitecture: aarch64 pullSecretRef: name: pull-secret sshAuthorizedKey: <ssh_public_key> nmStateConfigLabelSelector: matchLabels: cluster0-nmstate-label-name: cluster0-nmstate-label-value",
"apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.122.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 next-hop-interface: eth0 table-id: 254 interfaces: - name: \"eth0\" macAddress: 52:54:01:aa:aa:a1",
"apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: cluster0 stringData: .dockerconfigjson: <pull_secret>",
"./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug",
"ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded",
"ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz",
"./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug",
"export KUBECONFIG=<install_directory>/auth/kubeconfig",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz <must_gather_directory>",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - \"amd64\" channels: - name: stable-4.13 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8",
"oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port>",
"imageContentSources: - source: \"quay.io/openshift-release-dev/ocp-release\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images\" - source: \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release\" - source: \"registry.redhat.io/ubi9\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/ubi9\" - source: \"registry.redhat.io/multicluster-engine\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine\" - source: \"registry.redhat.io/rhel8\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/rhel8\" - source: \"registry.redhat.io/redhat\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/redhat\"",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE-------",
"openshift-install agent create cluster-manifests",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: multicluster-engine",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: \"stable-2.3\" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/cluster-monitoring: \"true\" name: openshift-local-storage",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"<assets_directory> ├─ install-config.yaml ├─ agent-config.yaml └─ /openshift ├─ mce_namespace.yaml ├─ mce_operatorgroup.yaml ├─ mce_subscription.yaml ├─ lso_namespace.yaml ├─ lso_operatorgroup.yaml └─ lso_subscription.yaml",
"openshift-install agent create image --dir <assets_directory>",
"openshift-install agent wait-for install-complete --dir <assets_directory>",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem",
"oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m",
"The `devicePath` is an example and may vary depending on the actual hardware configuration used.",
"apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {}",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi",
"apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: \"4.13\" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64",
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: \"true\" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true",
"oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m",
"oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/installing_an_on-premise_cluster_with_the_agent-based_installer/index
|
Appendix A. Reference Material
|
Appendix A. Reference Material A.1. Example wildfly-config.xml The wildfly-config.xml file is one way for clients to use Elytron Client, which allows clients to use security information when making connections to JBoss EAP. Example: custom-config.xml <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <authentication-rules> <rule use-configuration="monitor"> <match-host name="127.0.0.1" /> </rule> <rule use-configuration="administrator"> <match-host name="localhost" /> </rule> </authentication-rules> <authentication-configurations> <configuration name="monitor"> <sasl-mechanism-selector selector="DIGEST-MD5" /> <providers> <use-service-loader /> </providers> <set-user-name name="monitor" /> <credentials> <clear-password password="password1!" /> </credentials> <set-mechanism-realm name="ManagementRealm" /> </configuration> <configuration name="administrator"> <sasl-mechanism-selector selector="DIGEST-MD5" /> <providers> <use-service-loader /> </providers> <set-user-name name="administrator" /> <credentials> <clear-password password="password1!" /> </credentials> <set-mechanism-realm name="ManagementRealm" /> </configuration> </authentication-configurations> <net-authenticator/> <!-- This decides which SSL context configuration to use --> <ssl-context-rules> <rule use-ssl-context="mycorp-client"> <match-host name="mycorp.com"/> </rule> </ssl-context-rules> <ssl-contexts> <default-ssl-context name="mycorp-context"/> <ssl-context name="mycorp-context"> <key-store-ssl-certificate key-store-name="store1" alias="mycorp-client-certificate"/> <!-- This is an OpenSSL-style cipher suite selection string; this example is the expanded form of DEFAULT to illustrate the format --> <cipher-suite selector="ALL:!EXPORT:!LOW:!aNULL:!eNULL:!SSLv2"/> <protocol names="TLSv1.2"/> </ssl-context> </ssl-contexts> </authentication-client> </configuration> Additional resources For more details on using Elytron Client, see Configure client authentication with Elytron Client. For more information about how to configure clients using the wildfly-config.xml file, see Client Configuration Using the wildfly-config.xml File. A.2. Single Sign-on attributes A Single Sign-on (SSO) authentication mechanism configuration. The following table provides attribute descriptions for the setting=single-sign-on resource of the application-security-domain in the undertow subsystem. A.2.1. Single Sign-on Table A.1. single-sign-on Attributes Attribute Description client-ssl-context The reference to the SSL context used to secure the back-channel logout connection. cookie-name The name of the cookie. The default value is JSESSIONIDSSO. credential-reference The credential reference to decrypt the private key entry. credential-reference has the following attributes: alias : The alias that denotes the stored secret or credential in the store. clear-text : The secret specified using clear text. Consider using the credential store approach to supply credentials or secrets to services instead. store : The name of the credential store holding the aliased credential. type : The type of credential that this reference denotes. domain The cookie domain to be used. http-only Sets the cookie's httpOnly attribute. The default value is false. key-alias The alias of the private key entry used for signing and verifying the back-channel logout connection. key-store The reference to the keystore containing a private key entry. path The cookie path. The default value is / . secure Sets the cookie's secure attribute. The default value is false.
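For illustration, the following management CLI sketch shows how these attributes can be combined on an application-security-domain. The names exampleApplicationDomain, ssoKeyStore, and ssoKey, and the clear-text keystore password, are hypothetical placeholders rather than values defined in this reference, so substitute your own resources:

# Assumes an existing application-security-domain named exampleApplicationDomain and an Elytron key store named ssoKeyStore that contains a private key entry with the alias ssoKey.
/subsystem=undertow/application-security-domain=exampleApplicationDomain/setting=single-sign-on:add(key-store=ssoKeyStore, key-alias=ssoKey, credential-reference={clear-text="keystorePassword"}, cookie-name=JSESSIONIDSSO, http-only=true, secure=true, path="/")

reload

In a production configuration, a credential-reference that points to a credential store alias is generally preferable to a clear-text password.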
Additional resources For more information about using a client-ssl-context, see Using a client-ssl-context. For more information about a credential-store, see Credential store in Elytron. For more information about how to create a key-store, see Create an Elytron Keystore. A.3. Password mappers A password mapper constructs a password from multiple fields in a database using one of the following algorithm types: Clear text Simple digest Salted simple digest bcrypt SCRAM Modular crypt A password mapper has the following attributes: Note The index of the first column is 1 for all the mappers. Table A.2. password mapper attributes Mapper name Attributes Encryption method clear-password-mapper password-index The index of the column containing the clear text password. No encryption. simple-digest password-index The index of the column containing the password hash. algorithm The hashing algorithm used. The following values are supported: simple-digest-md2 simple-digest-md5 simple-digest-sha-1 simple-digest-sha-256 simple-digest-sha-384 simple-digest-sha-512 hash-encoding Specify the representation for the hash. Permitted values: base64 (default) hex A simple hashing mechanism is used. salted-simple-digest password-index The index of the column containing the password hash. algorithm The hashing algorithm used. The following values are supported: password-salt-digest-md5 password-salt-digest-sha-1 password-salt-digest-sha-256 password-salt-digest-sha-384 password-salt-digest-sha-512 salt-password-digest-md5 salt-password-digest-sha-1 salt-password-digest-sha-256 salt-password-digest-sha-384 salt-password-digest-sha-512 salt-index Index of the column containing the salt used for hashing. hash-encoding Specify the representation for the hash. Permitted values: base64 (default) hex salt-encoding Specify the representation for the salt. Permitted values: base64 (default) hex A simple hashing mechanism is used with a salt. bcrypt-password-mapper password-index The index of the column containing the password hash. salt-index Index of the column containing the salt used for hashing. iteration-count-index Index of the column containing the number of iterations used. hash-encoding Specify the representation for the hash. Permitted values: base64 (default) hex salt-encoding Specify the representation for the salt. Permitted values: base64 (default) hex The Blowfish-based bcrypt algorithm is used for hashing. scram-mapper password-index The index of the column containing the password hash. algorithm The hashing algorithm used. The following values are supported: scram-sha-1 scram-sha-256 scram-sha-384 scram-sha-512 salt-index Index of the column containing the salt used for hashing. iteration-count-index Index of the column containing the number of iterations used. hash-encoding Specify the representation for the hash. Permitted values: base64 (default) hex salt-encoding Specify the representation for the salt. Permitted values: base64 (default) hex The Salted Challenge Response Authentication Mechanism (SCRAM) is used for hashing. modular-crypt-mapper password-index The index of the column containing the encrypted password. The modular-crypt encoding allows multiple pieces of information to be encoded in a single string, such as the password type, the hash or digest, the salt, and the iteration count.
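To make the column indexes concrete, the following management CLI sketch defines a hypothetical Elytron JDBC realm whose principal query returns a bcrypt hash, a salt, and an iteration count in columns 1, 2, and 3. The realm name exampleDbRealm, the datasource exampleDS, and the users table layout are assumptions for this illustration, not values taken from this reference:

# Hypothetical users table: column 1 holds the Base64-encoded bcrypt hash, column 2 the Base64-encoded salt, and column 3 the iteration count.
/subsystem=elytron/jdbc-realm=exampleDbRealm:add(principal-query=[{sql="SELECT password, salt, iteration_count FROM users WHERE username = ?", data-source=exampleDS, bcrypt-mapper={password-index=1, salt-index=2, iteration-count-index=3}}])

Because hash-encoding and salt-encoding are not set, the base64 default described in the table applies to both columns.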
|
[
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <authentication-rules> <rule use-configuration=\"monitor\"> <match-host name=\"127.0.0.1\" /> </rule> <rule use-configuration=\"administrator\"> <match-host name=\"localhost\" /> </rule> </authentication-rules> <authentication-configurations> <configuration name=\"monitor\"> <sasl-mechanism-selector selector=\"DIGEST-MD5\" /> <providers> <use-service-loader /> </providers> <set-user-name name=\"monitor\" /> <credentials> <clear-password password=\"password1!\" /> </credentials> <set-mechanism-realm name=\"ManagementRealm\" /> </configuration> <configuration name=\"administrator\"> <sasl-mechanism-selector selector=\"DIGEST-MD5\" /> <providers> <use-service-loader /> </providers> <set-user-name name=\"administrator\" /> <credentials> <clear-password password=\"password1!\" /> </credentials> <set-mechanism-realm name=\"ManagementRealm\" /> </configuration> </authentication-configurations> <net-authenticator/> <!-- This decides which SSL context configuration to use --> <ssl-context-rules> <rule use-ssl-context=\"mycorp-client\"> <match-host name=\"mycorp.com\"/> </rule> </ssl-context-rules> <ssl-contexts> <default-ssl-context name=\"mycorp-context\"/> <ssl-context name=\"mycorp-context\"> <key-store-ssl-certificate key-store-name=\"store1\" alias=\"mycorp-client-certificate\"/> <!-- This is an OpenSSL-style cipher suite selection string; this example is the expanded form of DEFAULT to illustrate the format --> <cipher-suite selector=\"ALL:!EXPORT:!LOW:!aNULL:!eNULL:!SSLv2\"/> <protocol names=\"TLSv1.2\"/> </ssl-context> </ssl-contexts> </authentication-client> </configuration>"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_identity_management/reference_material
|
Chapter 4. OADP Application backup and restore
|
Chapter 4. OADP Application backup and restore 4.1. Introduction to OpenShift API for Data Protection The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs). However, OADP does not serve as a disaster recovery solution for etcd or OpenShift Operators. OADP support covers customer workload namespaces and cluster-scoped resources. Full cluster backup and restore are not supported. 4.1.1. OpenShift API for Data Protection APIs OpenShift API for Data Protection (OADP) provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources. OADP provides the following APIs: Backup Restore Schedule BackupStorageLocation VolumeSnapshotLocation 4.1.1.1. Support for OpenShift API for Data Protection Table 4.1. Supported versions of OADP Version OCP version General availability Full support ends Maintenance ends Extended Update Support (EUS) Extended Update Support Term 2 (EUS Term 2) 1.4 4.14 4.15 4.16 4.17 10 Jul 2024 Release of 1.5 Release of 1.6 27 Jun 2026 EUS must be on OCP 4.16 27 Jun 2027 EUS Term 2 must be on OCP 4.16 1.3 4.12 4.13 4.14 4.15 29 Nov 2023 10 Jul 2024 Release of 1.5 31 Oct 2025 EUS must be on OCP 4.14 31 Oct 2026 EUS Term 2 must be on OCP 4.14 4.1.1.1.1. Unsupported versions of the OADP Operator Table 4.2. Versions of the OADP Operator that are no longer supported Version General availability Full support ended Maintenance ended 1.2 14 Jun 2023 29 Nov 2023 10 Jul 2024 1.1 01 Sep 2022 14 Jun 2023 29 Nov 2023 1.0 09 Feb 2022 01 Sep 2022 14 Jun 2023 For more details about EUS, see Extended Update Support. For more details about EUS Term 2, see Extended Update Support Term 2. Additional resources Backing up etcd 4.2. OADP release notes 4.2.1. OADP 1.4 release notes The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues. Note For additional information about OADP, see OpenShift API for Data Protection (OADP) FAQs. 4.2.1.1. OADP 1.4.4 release notes OpenShift API for Data Protection (OADP) 1.4.4 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to OADP 1.4.3. 4.2.1.1.1. Known issues Issue with restoring stateful applications When you restore a stateful application that uses the azurefile-csi storage class, the restore operation remains in the Finalizing phase. (OADP-5508) 4.2.1.2. OADP 1.4.3 release notes The OpenShift API for Data Protection (OADP) 1.4.3 release notes lists the following new feature. 4.2.1.2.1. New features Notable changes in the kubevirt velero plugin in version 0.7.1 With this release, the kubevirt velero plugin has been updated to version 0.7.1. Notable improvements include the following bug fix and new features: Virtual machine instances (VMIs) are no longer ignored during backup when the owner VM is excluded. Object graphs now include all extra objects during backup and restore operations.
Optionally generated labels are now added to new firmware Universally Unique Identifiers (UUIDs) during restore operations. Switching VM run strategies during restore operations is now possible. Clearing a MAC address by label is now supported. The restore-specific checks during the backup operation are now skipped. The VirtualMachineClusterInstancetype and VirtualMachineClusterPreference custom resource definitions (CRDs) are now supported. 4.2.1.3. OADP 1.4.2 release notes The OpenShift API for Data Protection (OADP) 1.4.2 release notes lists new features, resolved issues and bugs, and known issues. 4.2.1.3.1. New features Backing up different volumes in the same namespace by using the VolumePolicy feature is now possible With this release, Velero provides resource policies to back up different volumes in the same namespace by using the VolumePolicy feature. The VolumePolicy feature supports the skip, snapshot, and fs-backup actions for backing up different volumes. OADP-1071 File system backup and data mover can now use short-term credentials File system backup and data mover can now use short-term credentials such as AWS Security Token Service (STS) and GCP WIF. With this support, backup is successfully completed without any PartiallyFailed status. OADP-5095 4.2.1.3.2. Resolved issues DPA now reports errors if VSL contains an incorrect provider value Previously, if the provider of a Volume Snapshot Location (VSL) spec was incorrect, the Data Protection Application (DPA) reconciled successfully. With this update, the DPA reports errors and requests a valid provider value. OADP-5044 Data Mover restore is successful irrespective of using different OADP namespaces for backup and restore Previously, when a backup operation was executed by using OADP installed in one namespace but was restored by using OADP installed in a different namespace, the Data Mover restore failed. With this update, Data Mover restore is now successful. OADP-5460 SSE-C backup works with the calculated MD5 of the secret key Previously, backup failed with the following error: Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key. With this update, missing Server-Side Encryption with Customer-Provided Keys (SSE-C) base64 and MD5 hash are now fixed. As a result, SSE-C backup works with the calculated MD5 of the secret key. In addition, incorrect error handling for the customerKey size is also fixed. OADP-5388 For a complete list of all issues resolved in this release, see the list of OADP 1.4.2 resolved issues in Jira. 4.2.1.3.3. Known issues The nodeSelector spec is not supported for the Data Mover restore action When a Data Protection Application (DPA) is created with the nodeSelector field set in the nodeAgent parameter, Data Mover restore partially fails instead of completing the restore operation. OADP-5260 The S3 storage does not use the proxy environment when TLS skip verify is specified In the image registry backup, the S3 storage does not use the proxy environment when the insecureSkipTLSVerify parameter is set to true. OADP-3143 Kopia does not delete artifacts after backup expiration Even after you delete a backup, Kopia does not delete the volume artifacts from the USD{bucket_name}/kopia/USDopenshift-adp path on the S3 location after the backup expires. For more information, see "About Kopia repository maintenance". OADP-5131 Additional resources About Kopia repository maintenance 4.2.1.4.
OADP 1.4.1 release notes The OpenShift API for Data Protection (OADP) 1.4.1 release notes lists new features, resolved issues and bugs, and known issues. 4.2.1.4.1. New features New DPA fields to update client qps and burst You can now change Velero Server Kubernetes API queries per second and burst values by using the new Data Protection Application (DPA) fields. The new DPA fields are spec.configuration.velero.client-qps and spec.configuration.velero.client-burst , which both default to 100. OADP-4076 Enabling non-default algorithms with Kopia With this update, you can now configure the hash, encryption, and splitter algorithms in Kopia to select non-default options to optimize performance for different backup workloads. To configure these algorithms, set the env variable of a velero pod in the podConfig section of the DataProtectionApplication (DPA) configuration. If this variable is not set, or an unsupported algorithm is chosen, Kopia will default to its standard algorithms. OADP-4640 4.2.1.4.2. Resolved issues Restoring a backup without pods is now successful Previously, restoring a backup without pods and having StorageClass VolumeBindingMode set as WaitForFirstConsumer , resulted in the PartiallyFailed status with an error: fail to patch dynamic PV, err: context deadline exceeded . With this update, patching dynamic PV is skipped and restoring a backup is successful without any PartiallyFailed status. OADP-4231 PodVolumeBackup CR now displays correct message Previously, the PodVolumeBackup custom resource (CR) generated an incorrect message, which was: get a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed" . With this update, the message produced is now: found a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed". OADP-4224 Overriding imagePullPolicy is now possible with DPA Previously, OADP set the imagePullPolicy parameter to Always for all images. With this update, OADP checks if each image contains sha256 or sha512 digest, then it sets imagePullPolicy to IfNotPresent ; otherwise imagePullPolicy is set to Always . You can now override this policy by using the new spec.containerImagePullPolicy DPA field. OADP-4172 OADP Velero can now retry updating the restore status if initial update fails Previously, OADP Velero failed to update the restored CR status. This left the status at InProgress indefinitely. Components which relied on the backup and restore CR status to determine the completion would fail. With this update, the restore CR status for a restore correctly proceeds to the Completed or Failed status. OADP-3227 Restoring BuildConfig Build from a different cluster is successful without any errors Previously, when performing a restore of the BuildConfig Build resource from a different cluster, the application generated an error on TLS verification to the internal image registry. The resulting error was failed to verify certificate: x509: certificate signed by unknown authority error. With this update, the restore of the BuildConfig build resources to a different cluster can proceed successfully without generating the failed to verify certificate error. OADP-4692 Restoring an empty PVC is successful Previously, downloading data failed while restoring an empty persistent volume claim (PVC). 
It failed with the following error: data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found With this update, the downloading of data proceeds to correct conclusion when restoring an empty PVC and the error message is not generated. OADP-3106 There is no Velero memory leak in CSI and DataMover plugins Previously, a Velero memory leak was caused by using the CSI and DataMover plugins. When the backup ended, the Velero plugin instance was not deleted and the memory leak consumed memory until an Out of Memory (OOM) condition was generated in the Velero pod. With this update, there is no resulting Velero memory leak when using the CSI and DataMover plugins. OADP-4448 Post-hook operation does not start before the related PVs are released Previously, due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the Data Mover persistent volume claim (PVC) releases the persistent volumes (PVs) of the related pods. This problem would cause the backup to fail with a PartiallyFailed status. With this update, the post-hook operation is not started until the related PVs are released by the Data Mover PVC, eliminating the PartiallyFailed backup status. OADP-3140 Deploying a DPA works as expected in namespaces with more than 37 characters When you install the OADP Operator in a namespace with more than 37 characters to create a new DPA, labeling the "cloud-credentials" Secret fails and the DPA reports the following error: With this update, creating a DPA does not fail in namespaces with more than 37 characters in the name. OADP-3960 Restore is successfully completed by overriding the timeout error Previously, in a large scale environment, the restore operation would result in a Partiallyfailed status with the error: fail to patch dynamic PV, err: context deadline exceeded . With this update, the resourceTimeout Velero server argument is used to override this timeout error resulting in a successful restore. OADP-4344 For a complete list of all issues resolved in this release, see the list of OADP 1.4.1 resolved issues in Jira. 4.2.1.4.3. Known issues Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning the error CrashLoopBackoff state after restoring OADP. The StatefulSet controller then recreates these pods and it runs normally. OADP-4407 Deployment referencing ImageStream is not restored properly leading to corrupted pod and volume contents During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB, and the postHook is terminated prematurely. During the restore operation, the OpenShift Container Platform controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which velero runs the FSB along with the post-hook. For more information about image stream trigger, see Triggering updates on image stream changes . 
The workaround for this behavior is a two-step restore process: Perform a restore excluding the Deployment resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --exclude-resources=deployment.apps Once the first restore is successful, perform a second restore by including these resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --include-resources=deployment.apps OADP-3954 4.2.1.5. OADP 1.4.0 release notes The OpenShift API for Data Protection (OADP) 1.4.0 release notes lists resolved issues and known issues. 4.2.1.5.1. Resolved issues Restore works correctly in OpenShift Container Platform 4.16 Previously, while restoring the deleted application namespace, the restore operation partially failed with the resource name may not be empty error in OpenShift Container Platform 4.16. With this update, restore works as expected in OpenShift Container Platform 4.16. OADP-4075 Data Mover backups work properly in the OpenShift Container Platform 4.16 cluster Previously, Velero was using the earlier version of SDK where the Spec.SourceVolumeMode field did not exist. As a consequence, Data Mover backups failed in the OpenShift Container Platform 4.16 cluster on the external snapshotter with version 4.2. With this update, external snapshotter is upgraded to version 7.0 and later. As a result, backups do not fail in the OpenShift Container Platform 4.16 cluster. OADP-3922 For a complete list of all issues resolved in this release, see the list of OADP 1.4.0 resolved issues in Jira. 4.2.1.5.2. Known issues Backup fails when checksumAlgorithm is not set for MCG While performing a backup of any application with Noobaa as the backup location, if the checksumAlgorithm configuration parameter is not set, backup fails. To fix this problem, if you do not provide a value for checksumAlgorithm in the Backup Storage Location (BSL) configuration, an empty value is added. The empty value is only added for BSLs that are created using Data Protection Application (DPA) custom resource (CR), and this value is not added if BSLs are created using any other method. OADP-4274 For a complete list of all known issues in this release, see the list of OADP 1.4.0 known issues in Jira. 4.2.1.5.3. Upgrade notes Note Always upgrade to the minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3. 4.2.1.5.3.1. Changes from OADP 1.3 to 1.4 The Velero server has been updated from version 1.12 to 1.14. Note that there are no changes in the Data Protection Application (DPA). This changes the following: The velero-plugin-for-csi code is now available in the Velero code, which means an init container is no longer required for the plugin. Velero changed client Burst and QPS defaults from 30 and 20 to 100 and 100, respectively. The velero-plugin-for-aws plugin updated default value of the spec.config.checksumAlgorithm field in BackupStorageLocation objects (BSLs) from "" (no checksum calculation) to the CRC32 algorithm. The checksum algorithm types are known to work only with AWS. Several S3 providers require the md5sum to be disabled by setting the checksum algorithm to "" . Confirm md5sum algorithm support and configuration with your storage provider. In OADP 1.4, the default value for BSLs created within DPA for this configuration is "" . 
This default value means that the md5sum is not checked, which is consistent with OADP 1.3. For BSLs created within DPA, update it by using the spec.backupLocations[].velero.config.checksumAlgorithm field in the DPA. If your BSLs are created outside DPA, you can update this configuration by using spec.config.checksumAlgorithm in the BSLs. 4.2.1.5.3.2. Backing up the DPA configuration You must back up your current DataProtectionApplication (DPA) configuration. Procedure Save your current DPA configuration by running the following command: Example command USD oc get dpa -n openshift-adp -o yaml > dpa.orig.backup 4.2.1.5.3.3. Upgrading the OADP Operator Use the following procedure when upgrading the OpenShift API for Data Protection (OADP) Operator. Procedure Change your subscription channel for the OADP Operator from stable-1.3 to stable-1.4 . Wait for the Operator and containers to update and restart. Additional resources Updating installed Operators 4.2.1.5.4. Converting DPA to the new version To upgrade from OADP 1.3 to 1.4, no Data Protection Application (DPA) changes are required. 4.2.1.5.5. Verifying the upgrade Use the following procedure to verify the upgrade. Procedure Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.3. OADP performance 4.3.1. OADP recommended network settings For a supported experience with OpenShift API for Data Protection (OADP), you should have a stable and resilient network across OpenShift nodes and S3 storage, in supported cloud environments that meet OpenShift network requirement recommendations. To ensure successful backup and restore operations for deployments with remote S3 buckets located off-cluster with suboptimal data paths, it is recommended that your network settings meet the following minimum requirements in such conditions: Bandwidth (network upload speed to object storage): Greater than 2 Mbps for small backups and 10-100 Mbps depending on the data volume for larger backups. Packet loss: 1% Packet corruption: 1% Latency: 100ms Ensure that your OpenShift Container Platform network performs optimally and meets OpenShift Container Platform network requirements. Important Although Red Hat provides support for standard backup and restore failures, it does not provide support for failures caused by network settings that do not meet the recommended thresholds. 4.4. OADP features and plugins OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications. The default plugins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources. 4.4.1.
OADP features OpenShift API for Data Protection (OADP) supports the following features: Backup You can use OADP to back up all applications on the OpenShift Platform, or you can filter the resources by type, namespace, or label. OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic. Note You must exclude Operators from the backup of an application for backup and restore to succeed. Restore You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the objects by namespace, PV, or label. Note You must exclude Operators from the backup of an application for backup and restore to succeed. Schedule You can schedule backups at specified intervals. Hooks You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container. 4.4.2. OADP plugins The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create custom plugins based on the Velero plugins. OADP also provides plugins for OpenShift Container Platform resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots. Table 4.3. OADP plugins OADP plugin Function Storage location aws Backs up and restores Kubernetes objects. AWS S3 Backs up and restores volumes with snapshots. AWS EBS azure Backs up and restores Kubernetes objects. Microsoft Azure Blob storage Backs up and restores volumes with snapshots. Microsoft Azure Managed Disks gcp Backs up and restores Kubernetes objects. Google Cloud Storage Backs up and restores volumes with snapshots. Google Compute Engine Disks openshift Backs up and restores OpenShift Container Platform resources. [1] Object store kubevirt Backs up and restores OpenShift Virtualization resources. [2] Object store csi Backs up and restores volumes with CSI snapshots. [3] Cloud storage that supports CSI snapshots vsm VolumeSnapshotMover relocates snapshots from the cluster into an object store to be used during a restore process to recover stateful applications, in situations such as cluster deletion. [4] Object store Mandatory. Virtual machine disks are backed up with CSI snapshots or Restic. The csi plugin uses the Kubernetes CSI snapshot API. OADP 1.1 or later uses snapshot.storage.k8s.io/v1 OADP 1.0 uses snapshot.storage.k8s.io/v1beta1 OADP 1.2 only. 4.4.3. About OADP Velero plugins You can configure two types of plugins when you install Velero: Default cloud provider plugins Custom plugins Both types of plugin are optional, but most users configure at least one cloud provider plugin. 4.4.3.1. Default Velero cloud provider plugins You can install any of the following default Velero cloud provider plugins when you configure the oadp_v1alpha1_dpa.yaml file during deployment: aws (Amazon Web Services) gcp (Google Cloud Platform) azure (Microsoft Azure) openshift (OpenShift Velero plugin) csi (Container Storage Interface) kubevirt (KubeVirt) You specify the desired default plugins in the oadp_v1alpha1_dpa.yaml file during deployment. 
Example file The following .yaml file installs the openshift , aws , azure , and gcp plugins: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp 4.4.3.2. Custom Velero plugins You can install a custom Velero plugin by specifying the plugin image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment. You specify the desired custom plugins in the oadp_v1alpha1_dpa.yaml file during deployment. Example file The following .yaml file installs the default openshift , azure , and gcp plugins and a custom plugin that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin : apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin 4.4.3.3. Velero plugins returning "received EOF, stopping recv loop" message Note Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred. 4.4.4. Supported architectures for OADP OpenShift API for Data Protection (OADP) supports the following architectures: AMD64 ARM64 PPC64le s390x Note OADP 1.2.0 and later versions support the ARM64 architecture. 4.4.5. OADP support for IBM Power and IBM Z OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to IBM Power(R) and to IBM Z(R). OADP 1.3.6 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.3.6 in terms of backup locations for these systems. OADP 1.4.4 was tested successfully against OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.4.4 in terms of backup locations for these systems. 4.4.5.1. OADP support for target backup locations using IBM Power IBM Power(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.6 against all S3 backup location targets, which are not AWS, as well. IBM Power(R) running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.4 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.4 against all S3 backup location targets, which are not AWS, as well. 4.4.5.2. OADP testing and support for target backup locations using IBM Z IBM Z(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and 1.3.6 was tested successfully against an AWS S3 backup location target. 
Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.13 4.14, and 4.15, and 1.3.6 against all S3 backup location targets, which are not AWS, as well. IBM Z(R) running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and 1.4.4 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and 1.4.4 against all S3 backup location targets, which are not AWS, as well. 4.4.5.2.1. Known issue of OADP using IBM Power(R) and IBM Z(R) platforms Currently, there are backup method restrictions for Single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms. Only NFS storage is currently compatible with Single-node OpenShift clusters on these platforms. In addition, only the File System Backup (FSB) methods such as Kopia and Restic are supported for backup and restore operations. There is currently no workaround for this issue. 4.4.6. OADP plugins known issues The following section describes known issues in OpenShift API for Data Protection (OADP) plugins: 4.4.6.1. Velero plugin panics during imagestream backups due to a missing secret When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret . When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error: 024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94... 4.4.6.1.1. Workaround to avoid the panic error To avoid the Velero plugin panic error, perform the following steps: Label the custom BSL with the relevant label: USD oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl After the BSL is labeled, wait until the DPA reconciles. Note You can force the reconciliation by making any minor change to the DPA itself. When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it: USD oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data' 4.4.6.2. OpenShift ADP Controller segmentation fault If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault. You can have either velero or cloudstorage defined, because they are mutually exclusive fields. If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails. If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails. For more information about this issue, see OADP-1054 . 4.4.6.2.1. OpenShift ADP Controller segmentation fault workaround You must define either velero or cloudstorage when you configure a DPA. 
If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault. 4.5. OADP use cases 4.5.1. Backup using OpenShift API for Data Protection and Red Hat OpenShift Data Foundation (ODF) Following is a use case for using OADP and ODF to back up an application. 4.5.1.1. Backing up an application using OADP and ODF In this use case, you back up an application by using OADP and store the backup in object storage provided by Red Hat OpenShift Data Foundation (ODF). You create an object bucket claim (OBC) to configure the backup storage location. You use ODF to configure an Amazon S3-compatible object storage bucket. ODF provides the MultiCloud Object Gateway (NooBaa MCG) and the Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location. You use the NooBaa MCG service with OADP by using the aws provider plugin. You configure the Data Protection Application (DPA) with the backup storage location (BSL). You create a backup custom resource (CR) and specify the application namespace to back up. You create and verify the backup. Prerequisites You installed the OADP Operator. You installed the ODF Operator. You have an application with a database running in a separate namespace. Procedure Create an OBC manifest file to request a NooBaa MCG bucket as shown in the following example: Example OBC apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2 1 The name of the object bucket claim. 2 The name of the bucket. Create the OBC by running the following command: USD oc create -f <obc_file_name> 1 1 Specify the file name of the object bucket claim manifest. When you create an OBC, ODF creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command: USD oc extract --to=- cm/test-obc 1 1 test-obc is the name of the OBC.
Example output # BUCKET_NAME backup-c20...41fd # BUCKET_PORT 443 # BUCKET_REGION # BUCKET_SUBREGION # BUCKET_HOST s3.openshift-storage.svc To get the bucket credentials from the generated secret , run the following command: USD oc extract --to=- secret/test-obc Example output # AWS_ACCESS_KEY_ID ebYR....xLNMc # AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym Get the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace by running the following command: USD oc get route s3 -n openshift-storage Create a cloud-credentials file with the object bucket credentials as shown in the following command: [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create the cloud-credentials secret with the cloud-credentials file content as shown in the following command: USD oc create secret generic \ cloud-credentials \ -n openshift-adp \ --from-file cloud=cloud-credentials Configure the Data Protection Application (DPA) as shown in the following example: Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: "true" insecureSkipTLSVerify: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp 1 Set to true to use the OADP Data Mover to enable movement of Container Storage Interface (CSI) snapshots to a remote object storage. 2 This is the S3 URL of ODF storage. 3 Specify the bucket name. Create the DPA by running the following command: USD oc apply -f <dpa_filename> Verify that the DPA is created successfully by running the following command. In the example output, you can see the status object has type field set to Reconciled . This means, the DPA is successfully created. USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure a backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. Create the backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output. 
USD oc describe backup test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.5.2. OpenShift API for Data Protection (OADP) restore use case Following is a use case for using OADP to restore a backup to a different namespace. 4.5.2.1. Restoring an application to a different namespace using OADP Restore a backup of an application by using OADP to a new target namespace, test-restore-application . To restore a backup, you create a restore custom resource (CR) as shown in the following example. In the restore CR, the source namespace refers to the application namespace that you included in the backup. You then verify the restore by changing your project to the new restored namespace and verifying the resources. Prerequisites You installed the OADP Operator. You have the backup of an application to be restored. Procedure Create a restore CR as shown in the following example: Example restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3 1 The name of the restore CR. 2 Specify the name of the backup. 3 namespaceMapping maps the source application namespace to the target application namespace. Specify the application namespace that you backed up. test-restore-application is the target namespace where you want to restore the backup. Apply the restore CR by running the following command: USD oc apply -f <restore_cr_filename> Verification Verify that the restore is in the Completed phase by running the following command: USD oc describe restores.velero.io <restore_name> -n openshift-adp Change to the restored namespace test-restore-application by running the following command: USD oc project test-restore-application Verify the restored resources such as persistent volume claim (pvc), service (svc), deployment, secret, and config map by running the following command: USD oc get pvc,svc,deployment,secret,configmap Example output NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s 4.5.3. Including a self-signed CA certificate during backup You can include a self-signed Certificate Authority (CA) certificate in the Data Protection Application (DPA) and then back up an application. You store the backup in a NooBaa bucket provided by Red Hat OpenShift Data Foundation (ODF). 4.5.3.1. 
Backing up an application and its self-signed CA certificate The s3.openshift-storage.svc service, provided by ODF, uses a Transport Layer Security protocol (TLS) certificate that is signed with the self-signed service CA. To prevent a certificate signed by unknown authority error, you must include a self-signed CA certificate in the backup storage location (BSL) section of DataProtectionApplication custom resource (CR). For this situation, you must complete the following tasks: Request a NooBaa bucket by creating an object bucket claim (OBC). Extract the bucket details. Include a self-signed CA certificate in the DataProtectionApplication CR. Back up an application. Prerequisites You installed the OADP Operator. You installed the ODF Operator. You have an application with a database running in a separate namespace. Procedure Create an OBC manifest to request a NooBaa bucket as shown in the following example: Example ObjectBucketClaim CR apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2 1 Specifies the name of the object bucket claim. 2 Specifies the name of the bucket. Create the OBC by running the following command: USD oc create -f <obc_file_name> When you create an OBC, ODF creates a secret and a ConfigMap with the same name as the object bucket claim. The secret object contains the bucket credentials, and the ConfigMap object contains information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command: USD oc extract --to=- cm/test-obc 1 1 The name of the OBC is test-obc . Example output # BUCKET_NAME backup-c20...41fd # BUCKET_PORT 443 # BUCKET_REGION # BUCKET_SUBREGION # BUCKET_HOST s3.openshift-storage.svc To get the bucket credentials from the secret object, run the following command: USD oc extract --to=- secret/test-obc Example output # AWS_ACCESS_KEY_ID ebYR....xLNMc # AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym Create a cloud-credentials file with the object bucket credentials by using the following example configuration: [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create the cloud-credentials secret with the cloud-credentials file content by running the following command: USD oc create secret generic \ cloud-credentials \ -n openshift-adp \ --from-file cloud=cloud-credentials Extract the service CA certificate from the openshift-service-ca.crt config map by running the following command. Ensure that you encode the certificate in Base64 format and note the value to use in the step. USD oc get cm/openshift-service-ca.crt \ -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo Example output LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 
....gpwOHMwaG9CRmk5a3....FLS0tLS0K Configure the DataProtectionApplication CR manifest file with the bucket name and CA certificate as shown in the following example: Example DataProtectionApplication CR apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: "true" insecureSkipTLSVerify: "false" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3 1 The insecureSkipTLSVerify flag can be set to either true or false . If set to "true", SSL/TLS security is disabled. If set to false , SSL/TLS security is enabled. 2 Specify the name of the bucket extracted in an earlier step. 3 Copy and paste the Base64 encoded certificate from the step. Create the DataProtectionApplication CR by running the following command: USD oc apply -f <dpa_filename> Verify that the DataProtectionApplication CR is created successfully by running the following command: USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure the Backup CR by using the following example: Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. Create the Backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the Backup object is in the Completed phase by running the following command: USD oc describe backup test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.5.4. Using the legacy-aws Velero plugin If you are using an AWS S3-compatible backup storage location, you might get a SignatureDoesNotMatch error while backing up your application. This error occurs because some backup storage locations still use the older versions of the S3 APIs, which are incompatible with the newer AWS SDK for Go V2. To resolve this issue, you can use the legacy-aws Velero plugin in the DataProtectionApplication custom resource (CR). The legacy-aws Velero plugin uses the older AWS SDK for Go V1, which is compatible with the legacy S3 APIs, ensuring successful backups. 4.5.4.1. 
Using the legacy-aws Velero plugin in the DataProtectionApplication CR In the following use case, you configure the DataProtectionApplication CR with the legacy-aws Velero plugin and then back up an application. Note Depending on the backup storage location you choose, you can use either the legacy-aws or the aws plugin in your DataProtectionApplication CR. If you use both of the plugins in the DataProtectionApplication CR, the following error occurs: aws and legacy-aws can not be both specified in DPA spec.configuration.velero.defaultPlugins . Prerequisites You have installed the OADP Operator. You have configured an AWS S3-compatible object storage as a backup location. You have an application with a database running in a separate namespace. Procedure Configure the DataProtectionApplication CR to use the legacy-aws Velero plugin as shown in the following example: Example DataProtectionApplication CR apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - legacy-aws 1 - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: "true" insecureSkipTLSVerify: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp 1 Use the legacy-aws plugin. 2 Specify the bucket name. Create the DataProtectionApplication CR by running the following command: USD oc apply -f <dpa_filename> Verify that the DataProtectionApplication CR is created successfully by running the following command. In the example output, you can see the status object has the type field set to Reconciled and the status field set to "True" . That status indicates that the DataProtectionApplication CR is successfully created. USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure a Backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. Create the Backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output. 
USD oc describe backups.velero.io test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.6. Installing and configuring OADP 4.6.1. About installing OADP As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types: Amazon Web Services Microsoft Azure Google Cloud Platform Multicloud Object Gateway IBM Cloud(R) Object Storage S3 AWS S3 compatible object storage, such as Multicloud Object Gateway or MinIO You can configure multiple backup storage locations within the same namespace for each individual OADP deployment. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation . The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage. You can back up persistent volumes (PVs) by using snapshots or a File System Backup (FSB). To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud provider, such as OpenShift Data Foundation Note If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1. x . OADP 1.0. x does not support CSI backup on OCP 4.11 and later. OADP 1.0. x includes Velero 1.7. x and expects the API group snapshot.storage.k8s.io/v1beta1 , which is not present on OCP 4.11 and later. 
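If you plan to rely on CSI snapshots, you can optionally confirm in advance that a CSI VolumeSnapshotClass exists on the cluster and that it carries the label Velero uses to select a snapshot class. The following commands are a minimal, optional check rather than part of any installation procedure, and <volume_snapshot_class_name> is a placeholder for a class that exists in your environment:
USD oc get volumesnapshotclasses.snapshot.storage.k8s.io
USD oc label volumesnapshotclass/<volume_snapshot_class_name> velero.io/csi-volumesnapshot-class="true" --overwrite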
If your cloud provider does not support snapshots or if your storage is NFS, you can back up applications with File System Backup (FSB), which uses Kopia or Restic, on object storage. You create a default Secret and then you install the Data Protection Application. 4.6.1.1. AWS S3 compatible backup storage providers OADP is compatible with many object storage providers for use with different backup and snapshot operations. Several object storage providers are fully supported, several are unsupported but known to work, and some have known limitations. 4.6.1.1.1. Supported backup storage providers The following AWS S3 compatible object storage providers are fully supported by OADP through the AWS plugin for use as backup storage locations: MinIO Multicloud Object Gateway (MCG) Amazon Web Services (AWS) S3 IBM Cloud(R) Object Storage S3 Ceph RADOS Gateway (Ceph Object Gateway) Red Hat Container Storage Red Hat OpenShift Data Foundation Note The following compatible object storage providers are supported and have their own Velero object store plugins: Google Cloud Platform (GCP) Microsoft Azure 4.6.1.1.2. Unsupported backup storage providers The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin for use as backup storage locations; however, they are unsupported and have not been tested by Red Hat: Oracle Cloud DigitalOcean NooBaa, unless installed using Multicloud Object Gateway (MCG) Tencent Cloud Quobyte Cloudian HyperStore Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . 4.6.1.1.3. Backup storage providers with known limitations The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin with a limited feature set: Swift - It works as a backup storage location, but it is not compatible with Restic for filesystem-based volume backup and restore. 4.6.1.2. Configuring Multicloud Object Gateway (MCG) for disaster recovery on OpenShift Data Foundation If you use cluster storage for your MCG bucket backupStorageLocation on OpenShift Data Foundation, configure MCG as an external object store. Warning Failure to configure MCG as an external object store might lead to backups not being available. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Procedure Configure MCG as an external object store as described in Adding storage resources for hybrid or Multicloud . Additional resources Overview of backup and snapshot locations in the Velero documentation 4.6.1.3. About OADP update channels When you install an OADP Operator, you choose an update channel . This channel determines which upgrades to the OADP Operator and to Velero you receive. You can switch channels at any time. The following update channels are available: The stable channel is now deprecated. The stable channel contains the patches (z-stream updates) of OADP ClusterServiceVersion for OADP.v1.1.z and older versions from OADP.v1.0.z . The stable-1.0 channel is deprecated and is not supported.
The stable-1.1 channel is deprecated and is not supported. The stable-1.2 channel is deprecated and is not supported. The stable-1.3 channel contains OADP.v1.3.z , the most recent OADP 1.3 ClusterServiceVersion . The stable-1.4 channel contains OADP.v1.4.z , the most recent OADP 1.4 ClusterServiceVersion . For more information, see OpenShift Operator Life Cycles . Which update channel is right for you? The stable channel is now deprecated. If you are already using the stable channel, you will continue to get updates from OADP.v1.1.z . Choose the stable-1.y update channel to install OADP 1.y and to continue receiving patches for it. If you choose this channel, you will receive all z-stream patches for version 1.y.z. When must you switch update channels? If you have OADP 1.y installed, and you want to receive patches only for that y-stream, you must switch from the stable update channel to the stable-1.y update channel. You will then receive all z-stream patches for version 1.y.z. If you have OADP 1.0 installed, want to upgrade to OADP 1.1, and then receive patches only for OADP 1.1, you must switch from the stable-1.0 update channel to the stable-1.1 update channel. You will then receive all z-stream patches for version 1.1.z. If you have OADP 1.y installed, with y greater than 0, and want to switch to OADP 1.0, you must uninstall your OADP Operator and then reinstall it using the stable-1.0 update channel. You will then receive all z-stream patches for version 1.0.z. Note You cannot switch from OADP 1.y to OADP 1.0 by switching update channels. You must uninstall the Operator and then reinstall it. 4.6.1.4. Installation of OADP on multiple namespaces You can install OpenShift API for Data Protection into multiple namespaces on the same cluster so that multiple project owners can manage their own OADP instance. This use case has been validated with File System Backup (FSB) and Container Storage Interface (CSI). You install each instance of OADP as specified by the per-platform procedures contained in this document with the following additional requirements: All deployments of OADP on the same cluster must be the same version, for example, 1.4.0. Installing different versions of OADP on the same cluster is not supported. Each individual deployment of OADP must have a unique set of credentials and at least one BackupStorageLocation configuration. You can also use multiple BackupStorageLocation configurations within the same namespace. By default, each OADP deployment has cluster-level access across namespaces. OpenShift Container Platform administrators need to carefully review potential impacts, such as not backing up and restoring to and from the same namespace concurrently. Additional resources Cluster service version 4.6.1.5. Velero CPU and memory requirements based on collected data The following recommendations are based on observations of performance made in the scale and performance lab. The backup and restore resources can be impacted by the type of plugin, the amount of resources required by that backup or restore, and the respective data contained in the persistent volumes (PVs) related to those resources. 4.6.1.5.1. 
CPU and memory requirement for configurations Configuration types [1] Average usage [2] Large usage resourceTimeouts CSI Velero: CPU- Request 200m, Limits 1000m Memory - Request 256Mi, Limits 1024Mi Velero: CPU- Request 200m, Limits 2000m Memory- Request 256Mi, Limits 2048Mi N/A Restic [3] Restic: CPU- Request 1000m, Limits 2000m Memory - Request 16Gi, Limits 32Gi [4] Restic: CPU - Request 2000m, Limits 8000m Memory - Request 16Gi, Limits 40Gi 900m [5] Data Mover N/A N/A 10m - average usage 60m - large usage Average usage - use these settings for most usage situations. Large usage - use these settings for large usage situations, such as a large PV (500GB Usage), multiple namespaces (100+), or many pods within a single namespace (2000 pods+), and for optimal performance for backup and restore involving large datasets. Restic resource usage corresponds to the amount of data, and type of data. For example, many small files or large amounts of data can cause Restic to use large amounts of resources. The Velero documentation references 500m as a supplied default, for most of our testing we found a 200m request suitable with 1000m limit. As cited in the Velero documentation, exact CPU and memory usage is dependent on the scale of files and directories, in addition to environmental limitations. Increasing the CPU has a significant impact on improving backup and restore times. Data Mover - Data Mover default resourceTimeout is 10m. Our tests show that for restoring a large PV (500GB usage), it is required to increase the resourceTimeout to 60m. Note The resource requirements listed throughout the guide are for average usage only. For large usage, adjust the settings as described in the table above. 4.6.1.5.2. NodeAgent CPU for large usage Testing shows that increasing NodeAgent CPU can significantly improve backup and restore times when using OpenShift API for Data Protection (OADP). Important It is not recommended to use Kopia without limits in production environments on nodes running production workloads due to Kopia's aggressive consumption of resources. However, running Kopia with limits that are too low results in CPU limiting and slow backups and restore situations. Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications. You can set these limits in Ceph MDS pods by following the procedure in Changing the CPU and memory resources on the rook-ceph pods . You need to add the following lines to the storage cluster Custom Resource (CR) to set the limits: resources: mds: limits: cpu: "3" memory: 128Gi requests: cpu: "3" memory: 8Gi 4.6.2. Installing the OADP Operator You can install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.17 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.14 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. 4.6.2.1. 
OADP-Velero-OpenShift Container Platform version relationship
OADP version Velero version OpenShift Container Platform version
1.3.0 1.12 4.12-4.15
1.3.1 1.12 4.12-4.15
1.3.2 1.12 4.12-4.15
1.3.3 1.12 4.12-4.15
1.3.4 1.12 4.12-4.15
1.3.5 1.12 4.12-4.15
1.4.0 1.14 4.14-4.18
1.4.1 1.14 4.14-4.18
1.4.2 1.14 4.14-4.18
1.4.3 1.14 4.14-4.18
4.6.3. Configuring the OpenShift API for Data Protection with AWS S3 compatible storage You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) S3 compatible storage by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure AWS for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details. 4.6.3.1. About Amazon Simple Storage Service, Identity and Access Management, and GovCloud Amazon Simple Storage Service (Amazon S3) is a storage solution of Amazon for the internet. As an authorized user, you can use this service to store and retrieve any amount of data whenever you want, from anywhere on the web. You securely control access to Amazon S3 and other Amazon services by using the AWS Identity and Access Management (IAM) web service. You can use IAM to manage permissions that control which AWS resources users can access. You use IAM to both authenticate, or verify that a user is who they claim to be, and to authorize, or grant permissions to use resources. AWS GovCloud (US) is an Amazon storage solution developed to meet the stringent and specific data security requirements of the United States Federal Government. AWS GovCloud (US) works the same as Amazon S3 except for the following: You cannot copy the contents of an Amazon S3 bucket in the AWS GovCloud (US) regions directly to or from another AWS region. If you use Amazon S3 policies, use the AWS GovCloud (US) Amazon Resource Name (ARN) identifier to unambiguously specify a resource across all of AWS, such as in IAM policies, Amazon S3 bucket names, and API calls. In AWS GovCloud (US) regions, ARNs have an identifier that is different from the one in other standard AWS regions, arn:aws-us-gov . If you need to specify the US-West or US-East region, use one of the following ARNs: For US-West, use us-gov-west-1 . For US-East, use us-gov-east-1 . For all other standard regions, ARNs begin with: arn:aws . In AWS GovCloud (US) regions, use the endpoints listed in the AWS GovCloud (US-East) and AWS GovCloud (US-West) rows of the "Amazon S3 endpoints" table on Amazon Simple Storage Service endpoints and quotas . If you are processing export-controlled data, use one of the SSL/TLS endpoints. If you have FIPS requirements, use a FIPS 140-2 endpoint such as https://s3-fips.us-gov-west-1.amazonaws.com or https://s3-fips.us-gov-east-1.amazonaws.com . To find the other AWS-imposed restrictions, see How Amazon Simple Storage Service Differs for AWS GovCloud (US) . 4.6.3.2. Configuring Amazon Web Services You configure Amazon Web Services (AWS) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the AWS CLI installed.
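Optionally, before you start the procedure, you can confirm that the AWS CLI is installed and authenticated as an identity that is permitted to create S3 buckets and IAM users. This quick sanity check is not part of the documented procedure:
USD aws --version
USD aws sts get-caller-identity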
Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application. 4.6.3.3. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. 
Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.3.3.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.3.3.2. Creating profiles for different credentials If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file. Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR). Procedure Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example: [backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create a Secret object with the credentials-velero file: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1 Add the profiles to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: "backupStorage" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: "volumeSnapshot" 4.6.3.3.3. Configuring the backup storage location using AWS You can configure the AWS backup storage location (BSL) as shown in the following example procedure. Prerequisites You have created an object storage bucket using AWS. You have installed the OADP Operator. Procedure Configure the BSL custom resource (CR) with values as applicable to your use case. 
Backup storage location apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: "true" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: "50..c-4da1-419f-a16e-ei...49f" 12 customerKeyEncryptionFile: "/credentials/customer-key" 13 signatureVersion: "1" 14 profile: "default" 15 insecureSkipTLSVerify: "true" 16 enableSharedConfig: "true" 17 tagging: "" 18 checksumAlgorithm: "CRC32" 19 1 1 The name of the object store plugin. In this example, the plugin is aws . This field is required. 2 The name of the bucket in which to store backups. This field is required. 3 The prefix within the bucket in which to store backups. This field is optional. 4 The credentials for the backup storage location. You can set custom credentials. If custom credentials are not set, the default credentials' secret is used. 5 The key within the secret credentials' data. 6 The name of the secret containing the credentials. 7 The AWS region where the bucket is located. Optional if s3ForcePathStyle is false. 8 A boolean flag to decide whether to use path-style addressing instead of virtual hosted bucket addressing. Set to true if using a storage service such as MinIO or NooBaa. This is an optional field. The default value is false . 9 You can specify the AWS S3 URL here for explicitness. This field is primarily for storage services such as MinIO or NooBaa. This is an optional field. 10 This field is primarily used for storage services such as MinIO or NooBaa. This is an optional field. 11 The name of the server-side encryption algorithm to use for uploading objects, for example, AES256 . This is an optional field. 12 Specify an AWS KMS key ID. You can format, as shown in the example, as an alias, such as alias/<KMS-key-alias-name> , or the full ARN to enable encryption of the backups stored in S3. Note that kmsKeyId cannot be used in with customerKeyEncryptionFile . This is an optional field. 13 Specify the file that has the SSE-C customer key to enable customer key encryption of the backups stored in S3. The file must contain a 32-byte string. The customerKeyEncryptionFile field points to a mounted secret within the velero container. Add the following key-value pair to the velero cloud-credentials secret: customer-key: <your_b64_encoded_32byte_string> . Note that the customerKeyEncryptionFile field cannot be used with the kmsKeyId field. The default value is an empty string ( "" ), which means SSE-C is disabled. This is an optional field. 14 The version of the signature algorithm used to create signed URLs. You use signed URLs to download the backups, or fetch the logs. Valid values are 1 and 4 . The default version is 4 . This is an optional field. 15 The name of the AWS profile in the credentials file. The default value is default . This is an optional field. 16 Set the insecureSkipTLSVerify field to true if you do not want to verify the TLS certificate when connecting to the object store, for example, for self-signed certificates with MinIO. Setting to true is susceptible to man-in-the-middle attacks and is not recommended for production workloads. The default value is false . This is an optional field. 17 Set the enableSharedConfig field to true if you want to load the credentials file as a shared config file. 
The default value is false . This is an optional field. 18 Specify the tags to annotate the AWS S3 objects. Specify the tags in key-value pairs. The default value is an empty string ( "" ). This is an optional field. 19 Specify the checksum algorithm to use for uploading objects to S3. The supported values are: CRC32 , CRC32C , SHA1 , and SHA256 . If you set the field as an empty string ( "" ), the checksum check will be skipped. The default value is CRC32 . This is an optional field. 4.6.3.3.4. Creating an OADP SSE-C encryption key for additional data security Amazon Web Services (AWS) S3 applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. OpenShift API for Data Protection (OADP) encrypts data by using SSL/TLS, HTTPS, and the velero-repo-credentials secret when transferring the data from a cluster to storage. To protect backup data in case of lost or stolen AWS credentials, apply an additional layer of encryption. The velero-plugin-for-aws plugin provides several additional encryption methods. You should review its configuration options and consider implementing additional encryption. You can store your own encryption keys by using server-side encryption with customer-provided keys (SSE-C). This feature provides additional security if your AWS credentials become exposed. Warning Be sure to store cryptographic keys in a secure and safe manner. Encrypted data and backups cannot be recovered if you do not have the encryption key. Prerequisites To make OADP mount a secret that contains your SSE-C key to the Velero pod at /credentials , use the following default secret name for AWS: cloud-credentials , and leave at least one of the following labels empty: dpa.spec.backupLocations[].velero.credential dpa.spec.snapshotLocations[].velero.credential This is a workaround for a known issue: https://issues.redhat.com/browse/OADP-3971 . Note The following procedure contains an example of a spec:backupLocations block that does not specify credentials. This example would trigger an OADP secret mounting. If you need the backup location to have credentials with a different name than cloud-credentials , you must add a snapshot location, such as the one in the following example, that does not contain a credential name. Because the example does not contain a credential name, the snapshot location will use cloud-credentials as its secret for taking snapshots. Example snapshot location in a DPA without credentials specified snapshotLocations: - velero: config: profile: default region: <region> provider: aws # ... 
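After you create the secret in the following procedure, you can optionally confirm that it is mounted into the Velero pod at /credentials , which is the directory that the customerKeyEncryptionFile path points to. The following command is an optional spot check and assumes the default OADP mount path:
USD oc -n openshift-adp exec deploy/velero -c velero -- ls -l /credentials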
Procedure Create an SSE-C encryption key: Generate a random number and save it as a file named sse.key by running the following command: USD dd if=/dev/urandom bs=1 count=32 > sse.key Encode the sse.key by using Base64 and save the result as a file named sse_encoded.key by running the following command: USD cat sse.key | base64 > sse_encoded.key Link the file named sse_encoded.key to a new file named customer-key by running the following command: USD ln -s sse_encoded.key customer-key Create an OpenShift Container Platform secret: If you are initially installing and configuring OADP, create the AWS credential and encryption key secret at the same time by running the following command: USD oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key If you are updating an existing installation, edit the values of the cloud-credential secret block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret # ... Edit the value of the customerKeyEncryptionFile attribute in the backupLocations block of the DataProtectionApplication CR manifest, as in the following example: spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default # ... Warning You must restart the Velero pod to remount the secret credentials properly on an existing installation. The installation is complete, and you can back up and restore OpenShift Container Platform resources. The data saved in AWS S3 storage is encrypted with the new key, and you cannot download it from the AWS S3 console or API without the additional encryption key. Verification To verify that you cannot download the encrypted files without the inclusion of an additional key, create a test file, upload it, and then try to download it. Create a test file by running the following command: USD echo "encrypt me please" > test.txt Upload the test file by running the following command: USD aws s3api put-object \ --bucket <bucket> \ --key test.txt \ --body test.txt \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 Try to download the file. In either the Amazon web console or the terminal, run the following command: USD s3cmd get s3://<bucket>/test.txt test.txt The download fails because the file is encrypted with an additional key. Download the file with the additional encryption key by running the following command: USD aws s3api get-object \ --bucket <bucket> \ --key test.txt \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 \ downloaded.txt Read the file contents by running the following command: USD cat downloaded.txt Example output encrypt me please Additional resources You can also use the additional encryption key to download files that were backed up with Velero by running a different command. See Downloading a file with an SSE-C encryption key for files backed up by Velero . 4.6.3.3.4.1. Downloading a file with an SSE-C encryption key for files backed up by Velero When you are verifying an SSE-C encryption key, you can also download the file with the additional encryption key for files that were backed up with Velero.
Procedure Download the file with the additional encryption key for files backed up by Velero by running the following command: USD aws s3api get-object \ --bucket <bucket> \ --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 \ --debug \ velero_download.tar.gz 4.6.3.4. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.3.4.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.3.4.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.3.4.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.3.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials , which contains separate profiles for the backup and snapshot location credentials. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: "default" s3ForcePathStyle: "true" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: "default" credential: key: cloud name: cloud-credentials 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 9 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 10 Specify whether to force path style URLs for S3 objects (Boolean). Not Required for AWS S3. Required only for S3 compatible storage. 11 Specify the URL of the object store that you are using to store backups. Not required for AWS S3. Required only for S3 compatible storage. 12 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 13 Specify a snapshot location, unless you use CSI snapshots or a File System Backup (FSB) to back up PVs. 14 The snapshot location must be in the same region as the PVs. 15 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the snapshot location. If your backup and snapshot locations use different credentials, create separate profiles in the credentials-velero file. Click Create . 
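As an alternative to the web console, you can save the same DataProtectionApplication manifest to a file and create it from the command line, where <dpa_filename> is a placeholder for your manifest file:
USD oc apply -f <dpa_filename>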
Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.3.5.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.3.6. Configuring the backup storage location with a MD5 checksum algorithm You can configure the Backup Storage Location (BSL) in the Data Protection Application (DPA) to use a MD5 checksum algorithm for both Amazon Simple Storage Service (Amazon S3) and S3-compatible storage providers. The checksum algorithm calculates the checksum for uploading and downloading objects to Amazon S3. You can use one of the following options to set the checksumAlgorithm field in the spec.backupLocations.velero.config.checksumAlgorithm section of the DPA. CRC32 CRC32C SHA1 SHA256 Note You can also set the checksumAlgorithm field to an empty value to skip the MD5 checksum check. If you do not set a value for the checksumAlgorithm field, then the default value is set to CRC32 . Prerequisites You have installed the OADP Operator. You have configured Amazon S3, or S3-compatible object storage as a backup location. Procedure Configure the BSL in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: "" 1 insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi 1 Specify the checksumAlgorithm . In this example, the checksumAlgorithm field is set to an empty value. You can select an option from the following list: CRC32 , CRC32C , SHA1 , SHA256 . 
Important If you are using Noobaa as the object storage provider, and you do not set the spec.backupLocations.velero.config.checksumAlgorithm field in the DPA, an empty value of checksumAlgorithm is added to the BSL configuration. The empty value is only added for BSLs that are created using the DPA. This value is not added if you create the BSL by using any other method. 4.6.3.7. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.3.8. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.3.9. Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. 
Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.3.9.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.3.9.2. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . 4.6.4. 
Configuring the OpenShift API for Data Protection with IBM Cloud You install the OpenShift API for Data Protection (OADP) Operator on an IBM Cloud cluster to back up and restore applications on the cluster. You configure IBM Cloud Object Storage (COS) to store the backups. 4.6.4.1. Configuring the COS instance You create an IBM Cloud Object Storage (COS) instance to store the OADP backup data. After you create the COS instance, configure the HMAC service credentials. Prerequisites You have an IBM Cloud Platform account. You installed the IBM Cloud CLI . You are logged in to IBM Cloud. Procedure Install the IBM Cloud Object Storage (COS) plugin by running the following command: USD ibmcloud plugin install cos -f Set a bucket name by running the following command: USD BUCKET=<bucket_name> Set a bucket region by running the following command: USD REGION=<bucket_region> 1 1 Specify the bucket region, for example, eu-gb . Create a resource group by running the following command: USD ibmcloud resource group-create <resource_group_name> Set the target resource group by running the following command: USD ibmcloud target -g <resource_group_name> Verify that the target resource group is correctly set by running the following command: USD ibmcloud target Example output API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default In the example output, the resource group is set to Default . Set a resource group name by running the following command: USD RESOURCE_GROUP=<resource_group> 1 1 Specify the resource group name, for example, "default" . Create an IBM Cloud service-instance resource by running the following command: USD ibmcloud resource service-instance-create \ <service_instance_name> \ 1 <service_name> \ 2 <service_plan> \ 3 <region_name> 4 1 Specify a name for the service-instance resource. 2 Specify the service name. Alternatively, you can specify a service ID. 3 Specify the service plan for your IBM Cloud account. 4 Specify the region name. Example command USD ibmcloud resource service-instance-create test-service-instance cloud-object-storage \ 1 standard \ global \ -d premium-global-deployment 2 1 The service name is cloud-object-storage . 2 The -d flag specifies the deployment name. Extract the service instance ID by running the following command: USD SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id') Create a COS bucket by running the following command: USD ibmcloud cos bucket-create \ --bucket USDBUCKET \ --ibm-service-instance-id USDSERVICE_INSTANCE_ID \ --region USDREGION Variables such as USDBUCKET , USDSERVICE_INSTANCE_ID , and USDREGION are replaced by the values you set previously. Create HMAC credentials by running the following command: USD ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\"HMAC\":true} Extract the access key ID and the secret access key from the HMAC credentials and save them in the credentials-velero file. You can use the credentials-velero file to create a secret for the backup storage location. Run the following command: USD cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__ 4.6.4.2.
Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.4.3. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.6.4.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5 1 The provider is aws when you use IBM Cloud as a backup storage location. 2 Specify the IBM Cloud Object Storage (COS) bucket name. 3 Specify the COS region name, for example, eu-gb . 4 Specify the S3 URL of the COS bucket. For example, http://s3.eu-gb.cloud-object-storage.appdomain.cloud . Here, eu-gb is the region name. Replace the region name according to your bucket region. 5 Defines the name of the secret you created by using the access key and the secret access key from the HMAC credentials. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.4.5. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. 4.6.4.6. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. 
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.4.7. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.4.8. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. 
Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.4.9. Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.4.10. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. 
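If you prefer to toggle this flag from automation rather than by editing the CR manually, one approach is a Kubernetes Job that patches the DPA. The following is a minimal sketch only: the Job name, the dpa-patcher service account, and the CLI image are assumptions, and the service account needs RBAC permission to patch DataProtectionApplication resources in the openshift-adp namespace. Example Job (sketch)
apiVersion: batch/v1
kind: Job
metadata:
  name: disable-node-agent        # hypothetical Job name
  namespace: openshift-adp
spec:
  template:
    spec:
      serviceAccountName: dpa-patcher   # hypothetical service account with permission to patch DataProtectionApplication resources
      restartPolicy: Never
      containers:
        - name: patch-dpa
          image: registry.redhat.io/openshift4/ose-cli:latest   # assumption: any image that provides the oc client
          command:
            - /bin/sh
            - -c
            - |
              # Merge-patch the DPA to turn the node agent off
              oc patch dataprotectionapplications.oadp.openshift.io <dpa_sample> -n openshift-adp \
                --type=merge -p '{"spec":{"configuration":{"nodeAgent":{"enable":false}}}}'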
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". 4.6.5. Configuring the OpenShift API for Data Protection with Microsoft Azure You install the OpenShift API for Data Protection (OADP) with Microsoft Azure by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure Azure for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details. 4.6.5.1. Configuring Microsoft Azure You configure Microsoft Azure for OpenShift API for Data Protection (OADP). Prerequisites You must have the Azure CLI installed. Tools that use Azure services should always have restricted permissions to make sure that Azure resources are safe. Therefore, instead of having applications sign in as a fully privileged user, Azure offers service principals. An Azure service principal is a name that can be used with applications, hosted services, or automated tools. This identity is used for access to resources. Create a service principal Sign in using a service principal and password Sign in using a service principal and certificate Manage service principal roles Create an Azure resource using a service principal Reset service principal credentials For more details, see Create an Azure service principal with Azure CLI . 4.6.5.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. 
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.5.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-azure . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.5.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-azure . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" provider: azure 1 Backup location Secret with custom name. 4.6.5.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.5.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. 
Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.5.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.5.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
Procedure To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.5.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Specify the Azure resource group. 9 Specify the Azure storage account ID. 10 Specify the Azure subscription ID. 11 If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. 12 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 13 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 14 You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs. 15 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . 
Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.5.5. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.5.6. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.5.6.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. 
Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.5.6.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.5.6.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.6. Configuring the OpenShift API for Data Protection with Google Cloud Platform You install the OpenShift API for Data Protection (OADP) with Google Cloud Platform (GCP) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure GCP for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details. 4.6.6.1. Configuring Google Cloud Platform You configure Google Cloud Platform (GCP) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the gcloud and gsutil CLI tools installed. 
See the Google cloud documentation for details. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to create a Secret object for GCP before you install the Data Protection Application. 4.6.6.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. 
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.6.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-gcp . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.6.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-gcp . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 1 Backup location Secret with custom name. 4.6.6.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.6.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. 
Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.6.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.6.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
Procedure To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.6.4. Google workload identity federation cloud authentication Applications running outside Google Cloud use service account keys, such as usernames and passwords, to gain access to Google Cloud resources. These service account keys might become a security risk if they are not properly managed. With Google's workload identity federation, you can use Identity and Access Management (IAM) to offer IAM roles, including the ability to impersonate service accounts, to external identities. This eliminates the maintenance and security risks associated with service account keys. Workload identity federation handles encrypting and decrypting certificates, extracting user attributes, and validation. Identity federation externalizes authentication, passing it over to Security Token Services (STS), and reduces the demands on individual developers. Authorization and controlling access to resources remain the responsibility of the application. Note Google workload identity federation is available for OADP 1.3.x and later. When backing up volumes, OADP on GCP with Google workload identity federation authentication only supports CSI snapshots. OADP on GCP with Google workload identity federation authentication does not support Volume Snapshot Locations (VSL) backups. For more details, see Google workload identity federation known issues . If you do not use Google workload identity federation cloud authentication, continue to Installing the Data Protection Application . Prerequisites You have installed a cluster in manual mode with GCP Workload Identity configured . You have access to the Cloud Credential Operator utility ( ccoctl ) and to the associated workload identity pool.
Procedure Create an oadp-credrequest directory by running the following command: USD mkdir -p oadp-credrequest Create a CredentialsRequest.yaml file as following: echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml Use the ccoctl utility to process the CredentialsRequest objects in the oadp-credrequest directory by running the following command: USD ccoctl gcp create-service-accounts \ --name=<name> \ --project=<gcp_project_id> \ --credentials-requests-dir=oadp-credrequest \ --workload-identity-pool=<pool_id> \ --workload-identity-provider=<provider_id> The manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml file is now available to use in the following steps. Create a namespace by running the following command: USD oc create namespace <OPERATOR_INSTALL_NS> Apply the credentials to the namespace by running the following command: USD oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml 4.6.6.4.1. Google workload identity federation known issues Volume Snapshot Location (VSL) backups finish with a PartiallyFailed phase when GCP workload identity federation is configured. Google workload identity federation authentication does not support VSL backups. 4.6.6.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-gcp . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Secret key that contains credentials. For Google workload identity federation cloud authentication use service_account.json . 9 Secret name that contains credentials. If you do not specify this value, the default name, cloud-credentials-gcp , is used. 10 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 11 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 12 Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs. 13 The snapshot location must be in the same region as the PVs. 14 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-gcp , is used. If you specify a custom name, the custom name is used for the backup location. 15 Google workload identity federation supports internal image backup. Set this field to false if you do not want to use image backup. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . 
Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.6.6. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.6.7. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.6.7.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. 
Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.6.7.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.6.7.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.7. Configuring the OpenShift API for Data Protection with Multicloud Object Gateway You install the OpenShift API for Data Protection (OADP) with Multicloud Object Gateway (MCG) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure Multicloud Object Gateway as a backup location. MCG is a component of OpenShift Data Foundation. You configure MCG as a backup location in the DataProtectionApplication custom resource (CR). Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager in disconnected environments . 4.6.7.1. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials, which you need to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object when you install the Data Protection Application. 4.6.7.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.7.2.1. 
Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.7.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: profile: "default" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Specify the region, following the naming convention of the documentation of your object storage server. 2 Backup location Secret with custom name. 4.6.7.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.7.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. 
Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.7.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.7.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.7.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: "default" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws . For Azure and GCP object stores, the azure or gcp plugin is required. 3 The openshift plugin is mandatory. 4 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 5 The administrative agent that routes the administrative requests to servers. 6 Set this value to true if you want to enable nodeAgent and perform File System Backup. 7 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 8 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 9 Specify the region, following the naming convention of the documentation of your object storage server. 10 Specify the URL of the S3 endpoint. 11 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 12 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 13 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.7.5. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. 
After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.7.6. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.7.6.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. 
For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.7.6.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.7.6.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Performance tuning guide for Multicloud Object Gateway . Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.8. Configuring the OpenShift API for Data Protection with OpenShift Data Foundation You install the OpenShift API for Data Protection (OADP) with OpenShift Data Foundation by installing the OADP Operator and configuring a backup location and a snapshot location. Then, you install the Data Protection Application. Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You can configure Multicloud Object Gateway or any AWS S3-compatible object storage as a backup location. Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. 
For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager in disconnected environments . 4.6.8.1. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. Additional resources Creating an Object Bucket Claim using the OpenShift Web Console . 4.6.8.1.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials , unless your backup storage provider has a default plugin, such as aws , azure , or gcp . In that case, the default name is specified in the provider-specific OADP installation procedure. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.8.1.2. 
Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.6.8.2. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.8.2.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.8.2.1.1. Adjusting Ceph CPU and memory requirements based on collected data The following recommendations are based on observations of performance made in the scale and performance lab. The changes are specifically related to Red Hat OpenShift Data Foundation (ODF). If working with ODF, consult the appropriate tuning guides for official recommendations. 4.6.8.2.1.1.1. CPU and memory requirement for configurations Backup and restore operations require large amounts of CephFS PersistentVolumes (PVs). 
To avoid Ceph MDS pods restarting with an out-of-memory (OOM) error, the following configuration is suggested: CPU: request changed to 3, maximum limit 3. Memory: request changed to 8 Gi, maximum limit 128 Gi. 4.6.8.2.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.8.2.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.8.3. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator.
You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws . For Azure and GCP object stores, the azure or gcp plugin is required. 3 Optional: The kubevirt plugin is used with OpenShift Virtualization. 4 Specify the csi default plugin if you use CSI snapshots to back up PVs. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 10 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 
14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.8.4. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.8.5. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. 
Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.8.5.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.8.5.2. Creating an Object Bucket Claim for disaster recovery on OpenShift Data Foundation If you use cluster storage for your Multicloud Object Gateway (MCG) bucket backupStorageLocation on OpenShift Data Foundation, create an Object Bucket Claim (OBC) using the OpenShift Web Console. Warning Failure to configure an Object Bucket Claim (OBC) might lead to backups not being available. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Procedure Create an Object Bucket Claim (OBC) using the OpenShift web console as described in Creating an Object Bucket Claim using the OpenShift Web Console . 4.6.8.5.3. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.8.5.4. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. 
Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.9. Configuring the OpenShift API for Data Protection with OpenShift Virtualization You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. Then, you can install the Data Protection Application. Back up and restore virtual machines by using the OpenShift API for Data Protection . Note OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options: Container Storage Interface (CSI) backups Container Storage Interface (CSI) backups with DataMover The following storage options are excluded: File system backup and restore Volume snapshot backups and restores For more information, see Backing up applications with File System Backup: Kopia or Restic . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details. 4.6.9.1. Installing and configuring OADP with OpenShift Virtualization As a cluster administrator, you install OADP by installing the OADP Operator. The latest version of the OADP Operator installs Velero 1.14 . Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Install the OADP Operator according to the instructions for your storage provider. Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins. Back up virtual machines by creating a Backup custom resource (CR). Warning Red Hat support is limited to only the following options: CSI backups CSI backups with DataMover. You restore the Backup CR by creating a Restore CR. Additional resources OADP plugins Backup custom resource (CR) Restore CR Using Operator Lifecycle Manager in disconnected environments 4.6.9.2. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. 
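For illustration only, a default Secret generated from an empty credentials-velero file (for example, by running oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero against an empty file) would look roughly like the following minimal sketch. The empty cloud value is an assumption for this sketch, not output captured from a cluster:
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: openshift-adp
type: Opaque
data:
  cloud: ""  # the Base64 encoding of an empty credentials-velero file is an empty string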
Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The kubevirt plugin is mandatory for OpenShift Virtualization. 3 Specify the plugin for the backup provider, for example, gcp , if it exists. 4 The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 10 Specify the nodes on which Kopia are available. By default, Kopia runs on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Warning If you run a backup of a Microsoft Windows virtual machine (VM) immediately after the VM reboots, the backup might fail with a PartiallyFailed error. 
This is because, immediately after a VM boots, the Microsoft Windows Volume Shadow Copy Service (VSS) and Guest Agent (GA) service are not ready. The VSS and GA service being unready causes the backup to fail. In such a case, retry the backup a few minutes after the VM boots. 4.6.9.3. Backing up a single VM If you have a namespace with multiple virtual machines (VMs), and want to back up only one of them, you can use the label selector to filter the VM that needs to be included in the backup. You can filter the VM by using the app: vmname label. Prerequisites You have installed the OADP Operator. You have multiple VMs running in a namespace. You have added the kubevirt plugin in the DataProtectionApplication (DPA) custom resource (CR). You have configured the BackupStorageLocation CR in the DataProtectionApplication CR and BackupStorageLocation is available. Procedure Configure the Backup CR as shown in the following example: Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3 1 Specify the name of the namespace where you have created the VMs. 2 Specify the VM name that needs to be backed up. 3 Specify the name of the BackupStorageLocation CR. To create a Backup CR, run the following command: USD oc apply -f <backup_cr_file_name> 1 1 Specify the name of the Backup CR file. 4.6.9.4. Restoring a single VM After you have backed up a single virtual machine (VM) by using the label selector in the Backup custom resource (CR), you can create a Restore CR and point it to the backup. This restore operation restores a single VM. Prerequisites You have installed the OADP Operator. You have backed up a single VM by using the label selector. Procedure Configure the Restore CR as shown in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true 1 Specifies the name of the backup of a single VM. To restore the single VM, run the following command: USD oc apply -f <restore_cr_file_name> 1 1 Specify the name of the Restore CR file. 4.6.9.5. Restoring a single VM from a backup of multiple VMs If you have a backup containing multiple virtual machines (VMs), and you want to restore only one VM, you can use the LabelSelectors section in the Restore CR to select the VM to restore. To ensure that the persistent volume claim (PVC) attached to the VM is correctly restored, and the restored VM is not stuck in a Provisioning status, use both the app: <vm_name> and the kubevirt.io/created-by labels. To match the kubevirt.io/created-by label, use the UID of DataVolume of the VM. Prerequisites You have installed the OADP Operator. You have labeled the VMs that need to be backed up. You have a backup of multiple VMs. 
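Because the kubevirt.io/created-by label value is the UID of the DataVolume of the VM, you typically look up that UID before you write the Restore CR. The following sketch assumes that the DataVolume has the same name as the VM and is in the same namespace:

# Print the UID of the DataVolume that backs the VM
$ oc get datavolume <vm_name> -n <vm_namespace> -o jsonpath='{.metadata.uid}{"\n"}'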
Procedure Before you take a backup of many VMs, ensure that the VMs are labeled by running the following command: USD oc label vm <vm_name> app=<vm_name> -n openshift-adp Configure the label selectors in the Restore CR as shown in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2 1 Specify the UID of DataVolume of the VM that you want to restore. For example, b6... 53a-ddd7-4d9d-9407-a0c... e5 . 2 Specify the name of the VM that you want to restore. For example, test-vm . To restore a VM, run the following command: USD oc apply -f <restore_cr_file_name> 1 1 Specify the name of the Restore CR file. 4.6.9.6. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.9.7. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. 
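After you apply the change described in the following procedure, you can check which pull policy was actually set on the Velero pod. This is a sketch that assumes the default openshift-adp namespace and the standard velero deployment name:

# Show the image pull policy of the Velero container
$ oc get deployment velero -n openshift-adp \
  -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}{"\n"}'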
Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.9.7.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.9.8. About incremental back up support OADP supports incremental backups of block and Filesystem persistent volumes for both containerized, and OpenShift Virtualization workloads. The following table summarizes the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover: Table 4.4. OADP backup support matrix for containerized workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem S [1] , I [2] S [1] , I [2] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Table 4.5. OADP backup support matrix for OpenShift Virtualization workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem N [3] N [3] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Backup supported Incremental backup supported Not supported Note The CSI Data Mover backups use Kopia regardless of uploaderType . Important Red Hat only supports the combination of OADP versions 1.3.0 and later, and OpenShift Virtualization versions 4.14 and later. OADP versions before 1.3.0 are not supported for back up and restore of OpenShift Virtualization. 4.6.10. Configuring the OpenShift API for Data Protection (OADP) with more than one Backup Storage Location You can configure one or more backup storage locations (BSLs) in the Data Protection Application (DPA). You can also select the location to store the backup in when you create the backup. With this configuration, you can store your backups in the following ways: To different regions To a different storage provider OADP supports multiple credentials for configuring more than one BSL, so that you can specify the credentials to use with any BSL. 4.6.10.1. 
Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.10.2. OADP use case for two BSLs In this use case, you configure the DPA with two storage locations by using two cloud credentials. You back up an application with a database by using the default BSL. OADP stores the backup resources in the default BSL. You then backup the application again by using the second BSL. Prerequisites You must install the OADP Operator. You must configure two backup storage locations: AWS S3 and Multicloud Object Gateway (MCG). You must have an application with a database deployed on a Red Hat OpenShift cluster. Procedure Create the first Secret for the AWS S3 storage provider with the default name by running the following command: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1 1 Specify the name of the cloud credentials file for AWS S3. Create the second Secret for MCG with a custom name by running the following command: USD oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1 1 Specify the name of the cloud credentials file for MCG. Note the name of the mcg-secret custom secret. Configure the DPA with the two BSLs as shown in the following example. 
Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: "true" profile: noobaa region: <region_name> 3 s3ForcePathStyle: "true" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws 1 Specify the AWS region for the bucket. 2 Specify the AWS S3 bucket name. 3 Specify the region, following the naming convention of the documentation of MCG. 4 Specify the URL of the S3 endpoint for MCG. 5 Specify the name of the custom secret for MCG storage. 6 Specify the MCG bucket name. Create the DPA by running the following command: USD oc create -f <dpa_file_name> 1 1 Specify the file name of the DPA you configured. Verify that the DPA has reconciled by running the following command: USD oc get dpa -o yaml Verify that the BSLs are available by running the following command: USD oc get bsl Example output NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s Create a backup CR with the default BSL. Note In the following example, the storageLocation field is not specified in the backup CR. Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. Create a backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. Verify that the backup completed with the default BSL by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Create a backup CR by using MCG as the BSL. In the following example, note that the second storageLocation value is specified at the time of backup CR creation. Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. 2 Specify the second storage location. Create a second backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. Verify that the backup completed with the storage location as MCG by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Additional resources Creating profiles for different credentials 4.6.11. Configuring the OpenShift API for Data Protection (OADP) with more than one Volume Snapshot Location You can configure one or more Volume Snapshot Locations (VSLs) to store the snapshots in different cloud provider regions. 4.6.11.1. Configuring the DPA with more than one VSL You configure the DPA with more than one VSL and specify the credentials provided by the cloud provider. Make sure that you configure the snapshot location in the same region as the persistent volumes. See the following example. 
Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #... 1 Specify the region. The snapshot location must be in the same region as the persistent volumes. 2 Specify the custom credential name. 4.7. Uninstalling OADP 4.7.1. Uninstalling the OpenShift API for Data Protection You uninstall the OpenShift API for Data Protection (OADP) by deleting the OADP Operator. See Deleting Operators from a cluster for details. 4.8. OADP backing up 4.8.1. Backing up applications Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if using non-local backups, for example, S3 buckets. Because all taken backup remains until expired, also check the time to live (TTL) setting of the schedule. You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR . The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage. If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots . For more information about CSI volume snapshots, see CSI volume snapshots . Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation . The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage. If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Kopia or Restic. See Backing up applications with File System Backup: Kopia or Restic . PodVolumeRestore fails with a ... /.snapshot: read-only file system error The ... /.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory. Do not give Velero write access to the .snapshot directory, and disable client access to this directory. 
Additional resources Enable or disable client access to Snapshot copy directory by editing a share Prerequisites for backup and restore with FlashBlade Important The OpenShift API for Data Protection (OADP) does not support backing up volume snapshots that were created by other software. 4.8.1.1. Previewing resources before running backup and restore OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations. Prerequisites You have installed the OADP Operator. Procedure To preview the resources included in the backup before running the actual backup, run the following command: USD velero backup create <backup-name> --snapshot-volumes false 1 1 Specify the value of --snapshot-volumes parameter as false . To know more details about the backup resources, run the following command: USD velero describe backup <backup_name> --details 1 1 Specify the name of the backup. To preview the resources included in the restore before running the actual restore, run the following command: USD velero restore create --from-backup <backup-name> 1 1 Specify the name of the backup created to review the backup resources. Important The velero restore create command creates restore resources in the cluster. You must delete the resources created as part of the restore, after you review the resources. To know more details about the restore resources, run the following command: USD velero describe restore <restore_name> --details 1 1 Specify the name of the restore. You can create backup hooks to run commands before or after the backup operation. See Creating backup hooks . You can schedule backups by creating a Schedule CR instead of a Backup CR. See Scheduling backups using Schedule CR . 4.8.1.2. Known issues OpenShift Container Platform 4.17 enforces a pod security admission (PSA) policy that can hinder the readiness of pods during a Restic restore process. This issue has been resolved in the OADP 1.1.6 and OADP 1.2.2 releases, therefore it is recommended that users upgrade to these releases. For more information, see Restic restore partially failing on OCP 4.15 due to changed PSA policy . Additional resources Installing Operators on clusters for administrators Installing Operators in namespaces for non-administrators 4.8.2. Creating a Backup CR You back up Kubernetes resources, internal images, and persistent volumes (PVs) by creating a Backup custom resource (CR). Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Backup location prerequisites: You must have S3 object storage configured for Velero. You must have a backup location configured in the DataProtectionApplication CR. Snapshot location prerequisites: Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots. For CSI snapshots, you must create a VolumeSnapshotClass CR to register the CSI driver. You must have a volume location configured in the DataProtectionApplication CR. 
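Before you create the Backup CR, you can confirm that the DataProtectionApplication CR has reconciled, for example by reusing the verification command shown earlier for a DPA named dpa-sample:

# The output should contain a Reconciled condition with the message "Reconcile complete"
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'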
Procedure Retrieve the backupStorageLocations CRs by entering the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m Create a Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s labelSelector: 5 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 6 - matchLabels: app: <label_1> app: <label_2> app: <label_3> 1 Specify an array of namespaces to back up. 2 Optional: Specify an array of resources to include in the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. If unspecified, all resources are included. 3 Optional: Specify an array of resources to exclude from the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. 4 Specify the name of the backupStorageLocations CR. 5 Map of {key,value} pairs of backup resources that have all the specified labels. 6 Map of {key,value} pairs of backup resources that have one or more of the specified labels. Verify that the status of the Backup CR is Completed : USD oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}' 4.8.3. Backing up persistent volumes with CSI snapshots You back up persistent volumes with Container Storage Interface (CSI) snapshots by editing the VolumeSnapshotClass custom resource (CR) of the cloud storage before you create the Backup CR, see CSI volume snapshots . For more information, see Creating a Backup CR . Prerequisites The cloud provider must support CSI snapshots. You must enable CSI in the DataProtectionApplication CR. Procedure Add the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR: Example configuration file apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: "true" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3 1 Must be set to true . 2 If you are restoring this volume in another cluster with the same driver, make sure that you set the snapshot.storage.kubernetes.io/is-default-class parameter to false instead of setting it to true . Otherwise, the restore will partially fail. 3 OADP supports the Retain and Delete deletion policy types for CSI and Data Mover backup and restore. steps You can now create a Backup CR. 4.8.4. Backing up applications with File System Backup: Kopia or Restic You can use OADP to back up and restore Kubernetes volumes attached to pods from the file system of the volumes. This process is called File System Backup (FSB) or Pod Volume Backup (PVB). It is accomplished by using modules from the open source backup tools Restic or Kopia. If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using FSB. Note Restic is installed by the OADP Operator by default. If you prefer, you can install Kopia instead. FSB integration with OADP provides a solution for backing up and restoring almost any type of Kubernetes volumes. 
This integration is an additional capability of OADP and is not a replacement for existing functionality. You back up Kubernetes resources, internal images, and persistent volumes with Kopia or Restic by editing the Backup custom resource (CR). You do not need to specify a snapshot location in the DataProtectionApplication CR. Note In OADP version 1.3 and later, you can use either Kopia or Restic for backing up applications. For the Built-in DataMover, you must use Kopia. In OADP version 1.2 and earlier, you can only use Restic for backing up applications. Important FSB does not support backing up hostPath volumes. For more information, see FSB limitations . PodVolumeRestore fails with a ... /.snapshot: read-only file system error The ... /.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory. Do not give Velero write access to the .snapshot directory, and disable client access to this directory. Additional resources Enable or disable client access to Snapshot copy directory by editing a share Prerequisites for backup and restore with FlashBlade Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. You must not disable the default nodeAgent installation by setting spec.configuration.nodeAgent.enable to false in the DataProtectionApplication CR. You must select Kopia or Restic as the uploader by setting spec.configuration.nodeAgent.uploaderType to kopia or restic in the DataProtectionApplication CR. The DataProtectionApplication CR must be in a Ready state. Procedure Create the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1 ... 1 In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true setting within the spec block. In OADP version 1.1, add defaultVolumesToRestic: true . 4.8.5. Creating backup hooks When performing a backup, it is possible to specify one or more commands to execute in a container within a pod, based on the pod being backed up. The commands can be configured to run before any custom action processing ( Pre hooks), or after all custom actions have been completed and any additional items specified by the custom action have been backed up ( Post hooks). You create backup hooks to run commands in a container in a pod by editing the Backup custom resource (CR). Procedure Add a hook to the spec.hooks block of the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11 ... 1 Optional: You can specify namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Optional: You can specify namespaces to which the hook does not apply. 3 Currently, pods are the only supported resource that hooks can apply to. 4 Optional: You can specify resources to which the hook does not apply. 5 Optional: This hook only applies to objects matching the label.
If this value is not specified, the hook applies to all objects. 6 Array of hooks to run before the backup. 7 Optional: If the container is not specified, the command runs in the first container in the pod. 8 The command that the hook runs; in this example, uname -a . 9 Allowed values for error handling are Fail and Continue . The default is Fail . 10 Optional: How long to wait for the commands to run. The default is 30s . 11 This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks. 4.8.6. Scheduling backups using Schedule CR The schedule operation allows you to create a backup of your data at a particular time, specified by a Cron expression. You schedule backups by creating a Schedule custom resource (CR) instead of a Backup CR. Warning Leave enough time in your backup schedule for a backup to finish before another backup is created. For example, if a backup of a namespace typically takes 10 minutes, do not schedule backups more frequently than every 15 minutes. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Procedure Retrieve the backupStorageLocations CRs: USD oc get backupStorageLocations -n openshift-adp Example output NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m Create a Schedule CR, as in the following example: USD cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s EOF 1 Cron expression to schedule the backup, for example, 0 7 * * * to perform a backup every day at 7:00. Note To schedule a backup at specific intervals, enter the <duration_in_minutes> in the following format: schedule: "*/10 * * * *" Enter the minutes value between quotation marks ( " " ). 2 Array of namespaces to back up. 3 Name of the backupStorageLocations CR. 4 Optional: In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true key-value pair to your configuration when performing backups of volumes with Restic. In OADP version 1.1, add the defaultVolumesToRestic: true key-value pair when you back up volumes with Restic. Verify that the status of the Schedule CR is Completed after the scheduled backup runs: USD oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}' 4.8.7. Deleting backups You can delete a backup by creating the DeleteBackupRequest custom resource (CR) or by running the velero backup delete command as explained in the following procedures. The volume backup artifacts are deleted at different times depending on the backup method: Restic: The artifacts are deleted in the full maintenance cycle, after the backup is deleted. Container Storage Interface (CSI): The artifacts are deleted immediately when the backup is deleted. Kopia: The artifacts are deleted after three full maintenance cycles of the Kopia repository, after the backup is deleted. 4.8.7.1. Deleting a backup by creating a DeleteBackupRequest CR You can delete a backup by creating a DeleteBackupRequest custom resource (CR). Prerequisites You have run a backup of your application.
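To find the exact name of the backup that you want to delete, you can list the existing Backup CRs first:

# List all backups in the OADP namespace; use the NAME column value in the DeleteBackupRequest CR
$ oc get backups.velero.io -n openshift-adp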
Procedure Create a DeleteBackupRequest CR manifest file: apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1 1 Specify the name of the backup. Apply the DeleteBackupRequest CR to delete the backup: USD oc apply -f <deletebackuprequest_cr_filename> 4.8.7.2. Deleting a backup by using the Velero CLI You can delete a backup by using the Velero CLI. Prerequisites You have run a backup of your application. You downloaded the Velero CLI and can access the Velero binary in your cluster. Procedure To delete the backup, run the following Velero command: USD velero backup delete <backup_name> -n openshift-adp 1 1 Specify the name of the backup. 4.8.7.3. About Kopia repository maintenance There are two types of Kopia repository maintenance: Quick maintenance Runs every hour to keep the number of index blobs (n) low. A high number of indexes negatively affects the performance of Kopia operations. Does not delete any metadata from the repository without ensuring that another copy of the same metadata exists. Full maintenance Runs every 24 hours to perform garbage collection of repository contents that are no longer needed. snapshot-gc , a full maintenance task, finds all files and directory listings that are no longer accessible from snapshot manifests and marks them as deleted. A full maintenance is a resource-costly operation, as it requires scanning all directories in all snapshots that are active in the cluster. 4.8.7.3.1. Kopia maintenance in OADP The repo-maintain-job jobs are executed in the namespace where OADP is installed, as shown in the following example: pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m You can check the logs of the repo-maintain-job for more details about the cleanup and the removal of artifacts in the backup object storage. You can find a note, as shown in the following example, in the repo-maintain-job when the full cycle maintenance is due: not due for full maintenance cycle until 2024-00-00 18:29:4 Important Three successful executions of a full maintenance cycle are required for the objects to be deleted from the backup object storage. This means you can expect up to 72 hours for all the artifacts in the backup object storage to be deleted. 4.8.7.4. Deleting a backup repository After you delete the backup, and after the Kopia repository maintenance cycles to delete the related artifacts are complete, the backup is no longer referenced by any metadata or manifest objects. You can then delete the backuprepository custom resource (CR) to complete the backup deletion process. Prerequisites You have deleted the backup of your application. You have waited up to 72 hours after the backup is deleted. This time frame allows Kopia to run the repository maintenance cycles. Procedure To get the name of the backup repository CR for a backup, run the following command: USD oc get backuprepositories.velero.io -n openshift-adp To delete the backup repository CR, run the following command: USD oc delete backuprepository <backup_repository_name> -n openshift-adp 1 1 Specify the name of the backup repository from the earlier step. 4.8.8. About Kopia Kopia is a fast and secure open-source backup and restore tool that allows you to create encrypted snapshots of your data and save the snapshots to remote or cloud storage of your choice. 
Kopia supports network and local storage locations, and many cloud or remote storage locations, including: Amazon S3 and any cloud storage that is compatible with S3 Azure Blob Storage Google Cloud Storage platform Kopia uses content-addressable storage for snapshots: Snapshots are always incremental; data that is already included in snapshots is not re-uploaded to the repository. A file is only uploaded to the repository again if it is modified. Stored data is deduplicated; if multiple copies of the same file exist, only one of them is stored. If files are moved or renamed, Kopia can recognize that they have the same content and does not upload them again. 4.8.8.1. OADP integration with Kopia OADP 1.3 supports Kopia as the backup mechanism for pod volume backup in addition to Restic. You must choose one or the other at installation by setting the uploaderType field in the DataProtectionApplication custom resource (CR). The possible values are restic or kopia . If you do not specify an uploaderType , OADP 1.3 defaults to using Kopia as the backup mechanism. The data is written to and read from a unified repository. The following example shows a DataProtectionApplication CR configured for using Kopia: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia # ... 4.9. OADP restoring 4.9.1. Restoring applications You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR . You can create restore hooks to run commands in a container in a pod by editing the Restore CR. See Creating restore hooks . 4.9.1.1. Previewing resources before running backup and restore OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations. Prerequisites You have installed the OADP Operator. Procedure To preview the resources included in the backup before running the actual backup, run the following command: USD velero backup create <backup-name> --snapshot-volumes false 1 1 Specify the value of --snapshot-volumes parameter as false . To know more details about the backup resources, run the following command: USD velero describe backup <backup_name> --details 1 1 Specify the name of the backup. To preview the resources included in the restore before running the actual restore, run the following command: USD velero restore create --from-backup <backup-name> 1 1 Specify the name of the backup created to review the backup resources. Important The velero restore create command creates restore resources in the cluster. You must delete the resources created as part of the restore, after you review the resources. To know more details about the restore resources, run the following command: USD velero describe restore <restore_name> --details 1 1 Specify the name of the restore. 4.9.1.2. Creating a Restore CR You restore a Backup custom resource (CR) by creating a Restore CR. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. You must have a Velero Backup CR. The persistent volume (PV) capacity must match the requested size at backup time. 
Adjust the requested size if needed. Procedure Create a Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3 1 Name of the Backup CR. 2 Optional: Specify an array of resources to include in the restore process. Resources might be shortcuts (for example, po for pods ) or fully-qualified. If unspecified, all resources are included. 3 Optional: The restorePVs parameter can be set to false to turn off restore of PersistentVolumes from VolumeSnapshot of Container Storage Interface (CSI) snapshots or from native snapshots when VolumeSnapshotLocation is configured. Verify that the status of the Restore CR is Completed by entering the following command: USD oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}' Verify that the backup resources have been restored by entering the following command: USD oc get all -n <namespace> 1 1 Namespace that you backed up. If you restore DeploymentConfig with volumes or if you use post-restore hooks, run the dc-post-restore.sh cleanup script by entering the following command: USD bash dc-restic-post-restore.sh -> dc-post-restore.sh Note During the restore process, the OADP Velero plug-ins scale down the DeploymentConfig objects and restore the pods as standalone pods. This is done to prevent the cluster from deleting the restored DeploymentConfig pods immediately on restore and to allow the restore and post-restore hooks to complete their actions on the restored pods. The cleanup script shown below removes these disconnected pods and scales any DeploymentConfig objects back up to the appropriate number of replicas. Example 4.1. dc-restic-post-restore.sh dc-post-restore.sh cleanup script #!/bin/bash set -e # if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD="sha256sum" else CHECKSUM_CMD="shasum -a 256" fi label_name () { if [ "USD{#1}" -le "63" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo "USD{1:0:57}USD{sha:0:6}" } if [[ USD# -ne 1 ]]; then echo "usage: USD{BASH_SOURCE} restore-name" exit 1 fi echo "restore: USD1" label=USD(label_name USD1) echo "label: USDlabel" echo Deleting disconnected restore pods oc delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.metadata.annotations.oadp\.openshift\.io/original-replicas}{","}{.metadata.annotations.oadp\.openshift\.io/original-paused}{"\n"}') do IFS=',' read -ra dc_arr <<< "USDdc" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done Note When you restore a stateful application that uses the azurefile-csi storage class, the restore operation remains in the Finalizing phase. 4.9.1.3. Creating restore hooks You create restore hooks to run commands in a container in a pod by editing the Restore custom resource (CR). 
You can create two types of restore hooks: An init hook adds an init container to a pod to perform setup tasks before the application container starts. If you restore a Restic backup, the restic-wait init container is added before the restore hook init container. An exec hook runs commands or scripts in a container of a restored pod. Procedure Add a hook to the spec.hooks block of the Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - "psql < /backup/backup.sql" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9 1 Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Currently, pods are the only supported resource that hooks can apply to. 3 Optional: This hook only applies to objects matching the label selector. 4 Optional: Timeout specifies the maximum length of time Velero waits for initContainers to complete. 5 Optional: If the container is not specified, the command runs in the first container in the pod. 6 This is the entrypoint for the init container being added. 7 Optional: How long to wait for a container to become ready. This should be long enough for the container to start and for any preceding hooks in the same container to complete. If not set, the restore process waits indefinitely. 8 Optional: How long to wait for the commands to run. The default is 30s . 9 Allowed values for error handling are Fail and Continue : Continue : Only command failures are logged. Fail : No more restore hooks run in any container in any pod. The status of the Restore CR will be PartiallyFailed . Important During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB, and the postHook is terminated prematurely. This happens because, during the restore operation, OpenShift controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which velero runs the FSB and the post restore hook. For more information about image stream trigger, see "Triggering updates on image stream changes". The workaround for this behavior is a two-step restore process: First, perform a restore excluding the Deployment resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --exclude-resources=deployment.apps After the first restore is successful, perform a second restore by including these resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --include-resources=deployment.apps Additional resources Triggering updates on image stream changes 4.10. OADP and ROSA 4.10.1. Backing up applications on ROSA clusters using OADP You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to back up and restore application data. 
ROSA is a fully-managed, turnkey application platform that allows you to deliver value to your customers by building and deploying applications. ROSA provides seamless integration with a wide range of Amazon Web Services (AWS) compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivery of differentiating experiences to your customers. You can subscribe to the service directly from your AWS account. After you create your clusters, you can operate your clusters with the OpenShift Container Platform web console or through Red Hat OpenShift Cluster Manager . You can also use ROSA with OpenShift APIs and command-line interface (CLI) tools. For additional information about ROSA installation, see Installing Red Hat OpenShift Service on AWS (ROSA) interactive walkthrough . Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API. This process is performed in the following two stages: Prepare AWS credentials Install the OADP Operator and give it an IAM role 4.10.1.1. Preparing AWS credentials for OADP An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation. Procedure Create the following environment variables by running the following commands: Important Change the cluster name to match your ROSA cluster, and ensure you are logged into the cluster as an administrator. Ensure that all fields are outputted correctly before continuing. USD export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME="USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials" export SCRATCH="/tmp/USD{CLUSTER_NAME}/oadp" mkdir -p USD{SCRATCH} echo "Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}" 1 Replace my-cluster with your ROSA cluster name. On the AWS account, create an IAM policy to allow access to AWS S3: Check to see if the policy exists by running the following command: USD POLICY_ARN=USD(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text) 1 1 Replace RosaOadp with your policy name. Enter the following command to create the policy JSON file and then create the policy in ROSA: Note If the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the if statement intentionally skips the policy creation. 
USD if [[ -z "USD{POLICY_ARN}" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts", "ec2:DescribeSnapshots", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name "RosaOadpVer1" \ --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn \ --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp \ --output text) fi 1 SCRATCH is a name for a temporary directory created for the environment variables. View the policy ARN by running the following command: USD echo USD{POLICY_ARN} Create an IAM role trust policy for the cluster: Create the trust policy file by running the following command: USD cat <<EOF > USD{SCRATCH}/trust-policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_ENDPOINT}:sub": [ "system:serviceaccount:openshift-adp:openshift-adp-controller-manager", "system:serviceaccount:openshift-adp:velero"] } } }] } EOF Create the role by running the following command: USD ROLE_ARN=USD(aws iam create-role --role-name \ "USD{ROLE_NAME}" \ --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json \ --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} \ Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} \ Key=rosa_role_prefix,Value=ManagedOpenShift \ Key=operator_namespace,Value=openshift-adp \ Key=operator_name,Value=openshift-oadp \ --query Role.Arn --output text) View the role ARN by running the following command: USD echo USD{ROLE_ARN} Attach the IAM policy to the IAM role by running the following command: USD aws iam attach-role-policy --role-name "USD{ROLE_NAME}" \ --policy-arn USD{POLICY_ARN} 4.10.1.2. Installing the OADP Operator and providing the IAM role AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. OpenShift Container Platform (ROSA) with STS is the recommended credential mode for ROSA clusters. This document describes how to install OpenShift API for Data Protection (OADP) on ROSA with AWS STS. Important Restic is unsupported. Kopia file system backup (FSB) is supported when backing up file systems that do not have Container Storage Interface (CSI) snapshotting support. Example file systems include the following: Amazon Elastic File System (EFS) Network File System (NFS) emptyDir volumes Local volumes For backing up volumes, OADP on ROSA with AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots. In an Amazon ROSA cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported. 
The Data Mover feature is not currently supported in ROSA clusters. You can use native AWS S3 tools for moving data. Prerequisites An OpenShift Container Platform ROSA cluster with the required access and tokens. For instructions, see the procedure Preparing AWS credentials for OADP . If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN , for each cluster. Procedure Create an OpenShift Container Platform secret from your AWS token file by entering the following commands: Create the credentials file: USD cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF 1 Replace <aws_region> with the AWS region to use for the STS endpoint. Create a namespace for OADP: USD oc create namespace openshift-adp Create the OpenShift Container Platform secret: USD oc -n openshift-adp create secret generic cloud-credentials \ --from-file=USD{SCRATCH}/credentials Note In OpenShift Container Platform versions 4.15 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO). In this workflow, you do not need to create the above secret, you only need to supply the role ARN during the installation of OLM-managed operators using the OpenShift Container Platform web console, for more information see Installing from OperatorHub using the web console . The preceding secret is created automatically by CCO. Install the OADP Operator: In the OpenShift Container Platform web console, browse to Operators OperatorHub . Search for the OADP Operator . In the role_ARN field, paste the role_arn that you created previously and click Install . Create AWS cloud storage using your AWS credentials by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF Check your application's storage default storage class by entering the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h Get the storage class by running the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h Note The following storage classes will work: gp3-csi gp2-csi gp3 gp2 If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration. 
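Note Before you create the DataProtectionApplication resource, you can optionally confirm that a VolumeSnapshotClass exists for your CSI driver, because CSI snapshot backups depend on one. This check is a suggestion and is not part of the original procedure:
$ oc get volumesnapshotclass
The class names in the output vary by cluster; if no class is listed for your CSI driver, CSI snapshot backups do not work until one is created.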
Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored: If you are using only CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF 1 ROSA supports internal image backup. Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The type of uploader. The possible values are restic or kopia . The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field. If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: "true" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF 1 ROSA supports internal image backup. Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The credentialsFile field is the mounted location of the bucket credential on the pod. 4 The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket. 5 Use the profile name set in the AWS credentials file. 6 Specify region as your AWS region. This must be the same as the cluster region. You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications . Important The enable parameter of restic is set to false in this configuration, because OADP does not support Restic in ROSA environments. If you use OADP 1.2, replace this configuration: nodeAgent: enable: false uploaderType: restic with the following configuration: restic: enable: false If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration. 4.10.1.3. Updating the IAM role ARN in the OADP Operator subscription While installing the OADP Operator on a ROSA Security Token Service (STS) cluster, if you provide an incorrect IAM role Amazon Resource Name (ARN), the openshift-adp-controller pod gives an error. The credential requests that are generated contain the wrong IAM role ARN. 
To update the credential requests object with the correct IAM role ARN, you can edit the OADP Operator subscription and patch the IAM role ARN with the correct value. By editing the OADP Operator subscription, you do not have to uninstall and reinstall OADP to update the IAM role ARN. Prerequisites You have a Red Hat OpenShift Service on AWS STS cluster with the required access and tokens. You have installed OADP on the ROSA STS cluster. Procedure To verify that the OADP subscription has the wrong IAM role ARN environment variable set, run the following command: USD oc get sub -o yaml redhat-oadp-operator Example subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: "2025-01-15T07:18:31Z" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: "" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: "77363" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2 1 Verify the value of ROLEARN you want to update. Update the ROLEARN field of the subscription with the correct role ARN by running the following command: USD oc patch subscription redhat-oadp-operator -p '{"spec": {"config": {"env": [{"name": "ROLEARN", "value": "<role_arn>"}]}}}' --type='merge' where: <role_arn> Specifies the IAM role ARN to be updated. For example, arn:aws:iam::160... ..6956:role/oadprosa... ..8wlf . Verify that the secret object is updated with correct role ARN value by running the following command: USD oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d Example output [default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token Configure the DataProtectionApplication custom resource (CR) manifest file as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift 1 Specify the CloudStorage CR. Create the DataProtectionApplication CR by running the following command: USD oc create -f <dpa_manifest_file> Verify that the DataProtectionApplication CR is reconciled and the status is set to "True" by running the following command: USD oc get dpa -n openshift-adp -o yaml Example DataProtectionApplication apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... status: conditions: - lastTransitionTime: "2023-07-31T04:48:12Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled Verify that the BackupStorageLocation CR is in an available state by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example BackupStorageLocation NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true Additional resources Installing from OperatorHub using the web console . Backing up applications 4.10.1.4. Example: Backing up workload on OADP ROSA STS, with an optional cleanup 4.10.1.4.1. 
Performing a backup with OADP and ROSA STS The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) STS. Either Data Protection Application (DPA) configuration will work. Create a workload to back up by running the following commands: USD oc create namespace hello-world USD oc new-app -n hello-world --image=docker.io/openshift/hello-openshift Expose the route by running the following command: USD oc expose service/hello-openshift -n hello-world Check that the application is working by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Back up the workload by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF Wait until the backup is completed and then run the following command: USD watch "oc -n openshift-adp get backup hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:20:44Z", "expiration": "2022-10-07T22:20:22Z", "formatVersion": "1.1.0", "phase": "Completed", "progress": { "itemsBackedUp": 58, "totalItems": 58 }, "startTimestamp": "2022-09-07T22:20:22Z", "version": 1 } Delete the demo workload by running the following command: USD oc delete ns hello-world Restore the workload from the backup by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF Wait for the Restore to finish by running the following command: USD watch "oc -n openshift-adp get restore hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:25:47Z", "phase": "Completed", "progress": { "itemsRestored": 38, "totalItems": 38 }, "startTimestamp": "2022-09-07T22:25:28Z", "warnings": 9 } Check that the workload is restored by running the following command: USD oc -n hello-world get pods Example output NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s Check the JSONPath by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Note For troubleshooting tips, see the OADP team's troubleshooting documentation . 4.10.1.4.2. Cleaning up a cluster after a backup with OADP and ROSA STS If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions. 
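Note Before you start the cleanup, you can optionally list the backup and restore resources that the following steps remove, so that you know what will be deleted. These commands are informational only:
$ oc -n openshift-adp get backups.velero.io
$ oc -n openshift-adp get restores.velero.io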
Procedure Delete the workload by running the following command: USD oc delete ns hello-world Delete the Data Protection Application (DPA) by running the following command: USD oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa Delete the cloud storage by running the following command: USD oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp Warning If this command hangs, you might need to delete the finalizer by running the following command: USD oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge If the Operator is no longer required, remove it by running the following command: USD oc -n openshift-adp delete subscription oadp-operator Remove the namespace from the Operator: USD oc delete ns openshift-adp If the backup and restore resources are no longer required, remove them from the cluster by running the following command: USD oc delete backups.velero.io hello-world To delete backup, restore and remote objects in AWS S3 run the following command: USD velero backup delete hello-world If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command: USD for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done Delete the AWS S3 bucket by running the following commands: USD aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive USD aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp Detach the policy from the role by running the following command: USD aws iam detach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn "USD{POLICY_ARN}" Delete the role by running the following command: USD aws iam delete-role --role-name "USD{ROLE_NAME}" 4.11. OADP and AWS STS 4.11.1. Backing up applications on AWS STS using OADP You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure AWS for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details. You can install OADP on an AWS Security Token Service (STS) (AWS STS) cluster manually. Amazon AWS provides AWS STS as a web service that enables you to request temporary, limited-privilege credentials for users. You use STS to provide trusted users with temporary access to resources via API calls, your AWS console, or the AWS command line interface (CLI). Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API. This process is performed in the following two stages: Prepare AWS credentials. Install the OADP Operator and give it an IAM role. 4.11.1.1. Preparing AWS STS credentials for OADP An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation. Prepare the AWS credentials by using the following procedure. 
Procedure Define the cluster_name environment variable by running the following command: USD export CLUSTER_NAME= <AWS_cluster_name> 1 1 The variable can be set to any value. Retrieve all of the details of the cluster such as the AWS_ACCOUNT_ID, OIDC_ENDPOINT by running the following command: USD export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME="USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials" Create a temporary directory to store all of the files by running the following command: USD export SCRATCH="/tmp/USD{CLUSTER_NAME}/oadp" mkdir -p USD{SCRATCH} Display all of the gathered details by running the following command: USD echo "Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}" On the AWS account, create an IAM policy to allow access to AWS S3: Check to see if the policy exists by running the following commands: USD export POLICY_NAME="OadpVer1" 1 1 The variable can be set to any value. USD POLICY_ARN=USD(aws iam list-policies --query "Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}" --output text) Enter the following command to create the policy JSON file and then create the policy: Note If the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the if statement intentionally skips the policy creation. USD if [[ -z "USD{POLICY_ARN}" ]]; then cat << EOF > USD{SCRATCH}/policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts", "ec2:DescribeSnapshots", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME \ --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn \ --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp \ --output text) 1 fi 1 SCRATCH is a name for a temporary directory created for storing the files. 
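Note The IAM role trust policy created later in this procedure references the cluster OIDC provider. As an optional sanity check that is not part of the original procedure, you can confirm that the provider is registered with IAM before continuing:
$ aws iam list-open-id-connect-providers | grep "${OIDC_ENDPOINT}"
If the command returns no output, verify the OIDC_ENDPOINT value before creating the trust policy.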
View the policy ARN by running the following command: USD echo USD{POLICY_ARN} Create an IAM role trust policy for the cluster: Create the trust policy file by running the following command: USD cat <<EOF > USD{SCRATCH}/trust-policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_ENDPOINT}:sub": [ "system:serviceaccount:openshift-adp:openshift-adp-controller-manager", "system:serviceaccount:openshift-adp:velero"] } } }] } EOF Create an IAM role trust policy for the cluster by running the following command: USD ROLE_ARN=USD(aws iam create-role --role-name \ "USD{ROLE_NAME}" \ --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json \ --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text) View the role ARN by running the following command: USD echo USD{ROLE_ARN} Attach the IAM policy to the IAM role by running the following command: USD aws iam attach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn USD{POLICY_ARN} 4.11.1.1.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. 4.11.1.2. Installing the OADP Operator and providing the IAM role AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. This document describes how to install OpenShift API for Data Protection (OADP) on an AWS STS cluster manually. Important Restic and Kopia are not supported in the OADP AWS STS environment. Verify that the Restic and Kopia node agent is disabled. For backing up volumes, OADP on AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots. In an AWS cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported. The Data Mover feature is not currently supported in AWS STS clusters. You can use native AWS S3 tools for moving data. Prerequisites An OpenShift Container Platform AWS STS cluster with the required access and tokens. For instructions, see the procedure Preparing AWS credentials for OADP . 
If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN , for each cluster. Procedure Create an OpenShift Container Platform secret from your AWS token file by entering the following commands: Create the credentials file: USD cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF Create a namespace for OADP: USD oc create namespace openshift-adp Create the OpenShift Container Platform secret: USD oc -n openshift-adp create secret generic cloud-credentials \ --from-file=USD{SCRATCH}/credentials Note In OpenShift Container Platform versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO). In this workflow, you do not need to create the above secret, you only need to supply the role ARN during the installation of OLM-managed operators using the OpenShift Container Platform web console, for more information see Installing from OperatorHub using the web console . The preceding secret is created automatically by CCO. Install the OADP Operator: In the OpenShift Container Platform web console, browse to Operators OperatorHub . Search for the OADP Operator . In the role_ARN field, paste the role_arn that you created previously and click Install . Create AWS cloud storage using your AWS credentials by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF Check your application's storage default storage class by entering the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h Get the storage class by running the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h Note The following storage classes will work: gp3-csi gp2-csi gp3 gp2 If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration. 
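Note The DataProtectionApplication examples that follow assume that a working CSI storage class, such as gp3-csi, is available. If you want a different storage class to be used by default for new persistent volume claims, you can optionally mark it as the default. This is a general OpenShift command and not part of the original procedure:
$ oc patch storageclass gp3-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
If another class is already annotated as the default, remove that annotation from it first.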
Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored: If you are using only CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF 1 Set this field to false if you do not want to use image backup. If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: "true" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF 1 Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The credentialsFile field is the mounted location of the bucket credential on the pod. 4 The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket. 5 Use the profile name set in the AWS credentials file. 6 Specify region as your AWS region. This must be the same as the cluster region. You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications . Important If you use OADP 1.2, replace this configuration: nodeAgent: enable: false uploaderType: restic with the following configuration: restic: enable: false If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration. Additional resources Installing from OperatorHub using the web console Backing up applications 4.11.1.3. Backing up workload on OADP AWS STS, with an optional cleanup 4.11.1.3.1. Performing a backup with OADP and AWS STS The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) (AWS STS). Either Data Protection Application (DPA) configuration will work. 
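Note Before you run the example backup, you can optionally confirm that the Data Protection Application has reconciled and that the backup storage location is available, for example:
$ oc get dpa -n openshift-adp -o yaml
$ oc get backupstoragelocations.velero.io -n openshift-adp
The BackupStorageLocation should report the Available phase before you create a Backup resource.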
Create a workload to back up by running the following commands: USD oc create namespace hello-world USD oc new-app -n hello-world --image=docker.io/openshift/hello-openshift Expose the route by running the following command: USD oc expose service/hello-openshift -n hello-world Check that the application is working by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Back up the workload by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF Wait until the backup has completed and then run the following command: USD watch "oc -n openshift-adp get backup hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:20:44Z", "expiration": "2022-10-07T22:20:22Z", "formatVersion": "1.1.0", "phase": "Completed", "progress": { "itemsBackedUp": 58, "totalItems": 58 }, "startTimestamp": "2022-09-07T22:20:22Z", "version": 1 } Delete the demo workload by running the following command: USD oc delete ns hello-world Restore the workload from the backup by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF Wait for the Restore to finish by running the following command: USD watch "oc -n openshift-adp get restore hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:25:47Z", "phase": "Completed", "progress": { "itemsRestored": 38, "totalItems": 38 }, "startTimestamp": "2022-09-07T22:25:28Z", "warnings": 9 } Check that the workload is restored by running the following command: USD oc -n hello-world get pods Example output NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s Check the JSONPath by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Note For troubleshooting tips, see the OADP team's troubleshooting documentation . 4.11.1.3.2. Cleaning up a cluster after a backup with OADP and AWS STS If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions. 
Procedure Delete the workload by running the following command: USD oc delete ns hello-world Delete the Data Protection Application (DPA) by running the following command: USD oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa Delete the cloud storage by running the following command: USD oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp Important If this command hangs, you might need to delete the finalizer by running the following command: USD oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge If the Operator is no longer required, remove it by running the following command: USD oc -n openshift-adp delete subscription oadp-operator Remove the namespace from the Operator by running the following command: USD oc delete ns openshift-adp If the backup and restore resources are no longer required, remove them from the cluster by running the following command: USD oc delete backups.velero.io hello-world To delete backup, restore and remote objects in AWS S3, run the following command: USD velero backup delete hello-world If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command: USD for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done Delete the AWS S3 bucket by running the following commands: USD aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive USD aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp Detach the policy from the role by running the following command: USD aws iam detach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn "USD{POLICY_ARN}" Delete the role by running the following command: USD aws iam delete-role --role-name "USD{ROLE_NAME}" 4.12. OADP and 3scale 4.12.1. Backing up and restoring 3scale by using OADP With Red Hat 3scale API Management (APIM), you can manage your APIs for internal or external users. Share, secure, distribute, control, and monetize your APIs on an infrastructure platform built with performance, customer control, and future growth in mind. You can deploy 3scale components on-premise, in the cloud, as a managed service, or in any combination based on your requirement. Note In this example, the non-service affecting approach is used to back up and restore 3scale on-cluster storage by using the OpenShift API for Data Protection (OADP) Operator. Additionally, ensure that you are restoring 3scale on the same cluster where it was backed up from. If you want to restore 3scale on a different cluster, ensure that both clusters are using the same custom domain. Prerequisites You installed and configured Red Hat 3scale. For more information, see Red Hat 3scale API Management . 4.12.1.1. Creating the Data Protection Application You can create a Data Protection Application (DPA) custom resource (CR) for 3scale. For more information on DPA, see "Installing the Data Protection Application". 
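Note The DataProtectionApplication in the following procedure references a Secret named cloud-credentials in the openshift-adp namespace with a key named cloud. If that Secret does not already exist, you can create it from an AWS credentials file, as in the following sketch; <credentials_file> is a placeholder for your own file:
$ oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<credentials_file>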
Procedure Create a YAML file with the following configuration: Example dpa.yaml file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa_sample namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift - aws - csi resourceTimeout: 10m nodeAgent: enable: true uploaderType: kopia backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 1 prefix: <prefix> 2 config: region: <region> 3 profile: "default" s3ForcePathStyle: "true" s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 1 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 2 Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes. 3 Specify a region for backup storage location. 4 Specify the URL of the object store that you are using to store backups. Create the DPA CR by running the following command: USD oc create -f dpa.yaml steps Back up the 3scale Operator. Additional resources Installing the Data Protection Application 4.12.1.2. Backing up the 3scale Operator You can back up the Operator resources, and Secret and APIManager custom resources (CR). For more information, see "Creating a Backup CR". Prerequisites You created the Data Protection Application (DPA). Procedure Back up the Operator resources, such as operatorgroup , namespaces , and subscriptions , by creating a YAML file with the following configuration: Example backup.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: operator-install-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale 1 includedResources: - operatorgroups - subscriptions - namespaces itemOperationTimeout: 1h0m0s snapshotMoveData: false ttl: 720h0m0s 1 Namespace where the 3scale Operator is installed. Note You can also back up and restore ReplicationControllers , Deployment , and Pod objects to ensure that all manually set environments are backed up and restored. This does not affect the flow of restoration. Create a backup CR by running the following command: USD oc create -f backup.yaml Back up the Secret CR by creating a YAML file with the following configuration: Example backup-secret.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-secrets namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - secrets itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s Create the Secret CR by running the following command: USD oc create -f backup-secret.yaml Back up the APIManager CR by creating a YAML file with the following configuration: Example backup-apimanager.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-apim namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - apimanagers itemOperationTimeout: 1h0m0s snapshotMoveData: false snapshotVolumes: false storageLocation: ts-dpa-1 ttl: 720h0m0s volumeSnapshotLocations: - ts-dpa-1 Create the APIManager CR by running the following command: USD oc create -f backup-apimanager.yaml steps Back up the mysql database. Additional resources Creating a Backup CR 4.12.1.3. 
Backing up the mysql database You can back up the mysql database by creating and attaching a persistent volume claim (PVC) to include the dumped data in the specified path. Prerequisites You have backed up the 3scale operator. Procedure Create a YAML file with the following configuration for adding an additional PVC: Example ts_pvc.yaml file kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-claim namespace: threescale spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3-csi volumeMode: Filesystem Create the additional PVC by running the following command: USD oc create -f ts_pvc.yml Attach the PVC to the system database pod by editing the system database deployment to use the mysql dump: USD oc edit deployment system-mysql -n threescale volumeMounts: - name: example-claim mountPath: /var/lib/mysqldump/data - name: mysql-storage mountPath: /var/lib/mysql/data - name: mysql-extra-conf mountPath: /etc/my-extra.d - name: mysql-main-conf mountPath: /etc/my-extra ... serviceAccount: amp volumes: - name: example-claim persistentVolumeClaim: claimName: example-claim 1 ... 1 The PVC that contains the dumped data. Create a YAML file with following configuration to back up the mysql database: Example mysql.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: mysql-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true hooks: resources: - name: dumpdb pre: - exec: command: - /bin/sh - -c - mysqldump -u USDMYSQL_USER --password=USDMYSQL_PASSWORD system --no-tablespaces > /var/lib/mysqldump/data/dump.sql 1 container: system-mysql onError: Fail timeout: 5m includedNamespaces: 2 - threescale includedResources: - deployment - pods - replicationControllers - persistentvolumeclaims - persistentvolumes itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component_element: mysql snapshotMoveData: false ttl: 720h0m0s 1 A directory where the data is backed up. 2 Resources to back up. Back up the mysql database by running the following command: USD oc create -f mysql.yaml Verification Verify that the mysql backup is completed by running the following command: USD oc get backups.velero.io mysql-backup Example output NAME STATUS CREATED NAMESPACE POD VOLUME UPLOADER TYPE STORAGE LOCATION AGE mysql-backup-4g7qn Completed 30s threescale system-mysql-2-9pr44 example-claim kopia ts-dpa-1 30s mysql-backup-smh85 Completed 23s threescale system-mysql-2-9pr44 mysql-storage kopia ts-dpa-1 30s steps Back up the back-end Redis database. 4.12.1.4. Backing up the back-end Redis database You can back up the Redis database by adding the required annotations and by listing which resources to back up using the includedResources parameter. Prerequisites You backed up the 3scale Operator. You backed up the mysql database. The Redis queues have been drained before performing the backup. 
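Note There is no single command that proves the Redis queues are drained. As a rough, optional check that is not part of the official procedure, you can inspect the Redis keyspace from the backend-redis container, which already provides the redis-cli binary used by the backup hooks:
$ oc -n threescale exec deployment/backend-redis -- redis-cli INFO keyspace
Interpret the output in the context of your own deployment and queue usage.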
Procedure Edit the annotations on the backend-redis deployment by running the following command: USD oc edit deployment backend-redis -n threescale Add the following annotations: annotations: post.hook.backup.velero.io/command: >- ["/bin/bash", "-c", "redis-cli CONFIG SET auto-aof-rewrite-percentage 100"] pre.hook.backup.velero.io/command: >- ["/bin/bash", "-c", "redis-cli CONFIG SET auto-aof-rewrite-percentage 0"] Create a YAML file with the following configuration to back up the Redis database: Example redis-backup.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: redis-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true includedNamespaces: - threescale includedResources: - deployment - pods - replicationcontrollers - persistentvolumes - persistentvolumeclaims itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component: backend threescale_component_element: redis snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s Back up the Redis database by running the following command: USD oc create -f redis-backup.yaml Verification Verify that the Redis backup is completed by running the following command: USD oc get backups.velero.io redis-backup steps Restore the Secrets and APIManager CRs. 4.12.1.5. Restoring the secrets and APIManager You can restore the Secrets and APIManager by using the following procedure. Prerequisites You backed up the 3scale Operator. You backed up mysql and Redis databases. You are restoring the database on the same cluster where it was backed up. If it is on a different cluster, install and configure OADP with nodeAgent enabled on the destination cluster as it was on the source cluster. Procedure Delete the 3scale Operator custom resource definitions (CRDs) along with the threescale namespace by running the following command: USD oc delete project threescale Example output "threescale" project deleted successfully Create a YAML file with the following configuration to restore the 3scale Operator: Example restore.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: operator-installation-restore namespace: openshift-adp spec: backupName: operator-install-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s Restore the 3scale Operator by running the following command: USD oc create -f restore.yaml Manually create the s3-credentials Secret object by running the following command: USD oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: s3-credentials namespace: threescale stringData: AWS_ACCESS_KEY_ID: <ID_123456> 1 AWS_SECRET_ACCESS_KEY: <ID_98765544> 2 AWS_BUCKET: <mybucket.example.com> 3 AWS_REGION: <us-east-1> 4 type: Opaque EOF 1 Replace <ID_123456> with your AWS access key ID. 2 Replace <ID_98765544> with your AWS secret access key. 3 Replace <mybucket.example.com> with your target bucket name. 4 Replace <us-east-1> with the AWS region of your bucket.
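You can optionally confirm that the s3-credentials Secret object exists before continuing, for example:
$ oc get secret s3-credentials -n threescale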
Scale down the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale Create a YAML file with the following configuration to restore the Secrets: Example restore-secret.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-secrets namespace: openshift-adp spec: backupName: operator-resources-secrets excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s Restore the Secrets by running the following command: USD oc create -f restore-secret.yaml Create a YAML file with the following configuration to restore APIManager: Example restore-apimanager.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-apim namespace: openshift-adp spec: backupName: operator-resources-apim excludedResources: 1 - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s 1 The resources that you do not want to restore. Restore the APIManager by running the following command: USD oc create -f restore-apimanager.yaml Scale up the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale steps Restore the mysql database. 4.12.1.6. Restoring the mysql database Restoring the mysql database re-creates the following resources: The Pod , ReplicationController , and Deployment objects. The additional persistent volumes (PVs) and associated persistent volume claims (PVCs). The mysql dump, which the example-claim PVC contains. Warning Do not delete the default PV and PVC associated with the database. If you do, your backups are deleted. Prerequisites You restored the Secret and APIManager custom resources (CR).
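Note Before you restore the database, you can optionally confirm that the earlier restores completed and that the APIManager resource exists again. These checks are suggestions and are not required by the procedure:
$ oc get restores.velero.io -n openshift-adp
$ oc get apimanagers -n threescale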
Procedure Scale down the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale Example output: deployment.apps/threescale-operator-controller-manager-v2 scaled Create the following script to scale down the 3scale operator: USD vi ./scaledowndeployment.sh Example output: for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do oc scale deployment/USDdeployment --replicas=0 -n threescale done Scale down all the deployment 3scale components by running the following script: USD ./scaledowndeployment.sh Example output: deployment.apps.openshift.io/apicast-production scaled deployment.apps.openshift.io/apicast-staging scaled deployment.apps.openshift.io/backend-cron scaled deployment.apps.openshift.io/backend-listener scaled deployment.apps.openshift.io/backend-redis scaled deployment.apps.openshift.io/backend-worker scaled deployment.apps.openshift.io/system-app scaled deployment.apps.openshift.io/system-memcache scaled deployment.apps.openshift.io/system-mysql scaled deployment.apps.openshift.io/system-redis scaled deployment.apps.openshift.io/system-searchd scaled deployment.apps.openshift.io/system-sidekiq scaled deployment.apps.openshift.io/zync scaled deployment.apps.openshift.io/zync-database scaled deployment.apps.openshift.io/zync-que scaled Delete the system-mysql Deployment object by running the following command: USD oc delete deployment system-mysql -n threescale Example output: Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io "system-mysql" deleted Create the following YAML file to restore the mysql database: Example restore-mysql.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: restore-mysql namespace: openshift-adp spec: backupName: mysql-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io - resticrepositories.velero.io hooks: resources: - name: restoreDB postHooks: - exec: command: - /bin/sh - '-c' - > sleep 30 mysql -h 127.0.0.1 -D system -u root --password=USDMYSQL_ROOT_PASSWORD < /var/lib/mysqldump/data/dump.sql 1 container: system-mysql execTimeout: 80s onError: Fail waitTimeout: 5m itemOperationTimeout: 1h0m0s restorePVs: true 1 A path where the data is restored from. 
Restore the mysql database by running the following command: USD oc create -f restore-mysql.yaml Verification Verify that the PodVolumeRestore restore is completed by running the following command: USD oc get podvolumerestores.velero.io -n openshift-adp Example output: NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m Verify that the additional PVC has been restored by running the following command: USD oc get pvc -n threescale Example output: NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m steps Restore the back-end Redis database. 4.12.1.7. Restoring the back-end Redis database You can restore the back-end Redis database by deleting the deployment and specifying which resources you do not want to restore. Prerequisites You restored the Secret and APIManager custom resources. You restored the mysql database. Procedure Delete the backend-redis deployment by running the following command: USD oc delete deployment backend-redis -n threescale Example output: Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io "backend-redis" deleted Create a YAML file with the following configuration to restore the Redis database: Example restore-backend.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: restore-backend namespace: openshift-adp spec: backupName: redis-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 1h0m0s restorePVs: true Restore the Redis database by running the following command: USD oc create -f restore-backend.yaml Verification Verify that the PodVolumeRestore restore is completed by running the following command: USD oc get podvolumerestores.velero.io -n openshift-adp Example output: NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m steps Scale the 3scale Operator and deployment. 4.12.1.8. Scaling up the 3scale Operator and deployment You can scale up the 3scale Operator and any deployment that was manually scaled down. After a few minutes, 3scale installation should be fully functional, and its state should match the backed-up state. Prerequisites Ensure that there are no scaled up deployments or no extra pods running. There might be some system-mysql or backend-redis pods running detached from deployments after restoration, which can be removed after the restoration is successful. 
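Note The procedure that follows runs a ./scaledeployment.sh helper script that is not shown in this document. A minimal sketch of such a script, mirroring the scale-down script created earlier but setting the replica count back to 1, might look like the following; adjust the deployment list and replica counts to match your installation:
for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
  oc scale deployment/$deployment --replicas=1 -n threescale
done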
Procedure Scale up the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale Ensure that the 3scale Operator was deployed by running the following command: USD oc get deployment -n threescale Scale up the deployments by executing the following script: USD ./scaledeployment.sh Get the 3scale-admin route to log in to the 3scale UI by running the following command: USD oc get routes -n threescale Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD backend backend-3scale.apps.custom-cluster-name.openshift.com backend-listener http edge/Allow None zync-3scale-api-b4l4d api-3scale-apicast-production.apps.custom-cluster-name.openshift.com apicast-production gateway edge/Redirect None zync-3scale-api-b6sns api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com apicast-staging gateway edge/Redirect None zync-3scale-master-7sc4j master.apps.custom-cluster-name.openshift.com system-master http edge/Redirect None zync-3scale-provider-7r2nm 3scale-admin.apps.custom-cluster-name.openshift.com system-provider http edge/Redirect None zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com system-developer http edge/Redirect None In this example, 3scale-admin.apps.custom-cluster-name.openshift.com is the 3scale-admin URL. Use the URL from this output to log in to the 3scale Operator as an administrator. You can verify that the existing data is available before trying to create a backup. 4.13. OADP Data Mover 4.13.1. About the OADP Data Mover OpenShift API for Data Protection (OADP) includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and write to the unified repository. OADP supports CSI snapshots on the following: Red Hat OpenShift Data Foundation Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API 4.13.1.1. Data Mover support The OADP built-in Data Mover, which was introduced in OADP 1.3 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads. Supported The Data Mover backups taken with OADP 1.3 can be restored using OADP 1.3, 1.4, and later. This is supported. Not supported Backups taken with OADP 1.1 or OADP 1.2 using the Data Mover feature cannot be restored using OADP 1.3 and later. Therefore, it is not supported. OADP 1.1 and OADP 1.2 are no longer supported. The DataMover feature in OADP 1.1 or OADP 1.2 was a Technology Preview and was never supported. DataMover backups taken with OADP 1.1 or OADP 1.2 cannot be restored on later versions of OADP. 4.13.1.2. Enabling the built-in Data Mover To enable the built-in Data Mover, you must include the CSI plugin and enable the node agent in the DataProtectionApplication custom resource (CR). The node agent is a Kubernetes daemonset that hosts data movement modules. These include the Data Mover controller, uploader, and the repository. 
Example DataProtectionApplication manifest apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI # ... 1 The flag to enable the node agent. 2 The type of uploader. The possible values are restic or kopia . The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field. 3 The CSI plugin included in the list of default plugins. 4 In OADP 1.3.1 and later, set to true if you use Data Mover only for volumes that opt out of fs-backup . Set to false if you use Data Mover by default for volumes. 4.13.1.3. Built-in Data Mover controller and custom resource definitions (CRDs) The built-in Data Mover feature introduces three new API objects defined as CRDs for managing backup and restore: DataDownload : Represents a data download of a volume snapshot. The CSI plugin creates one DataDownload object per volume to be restored. The DataDownload CR includes information about the target volume, the specified Data Mover, the progress of the current data download, the specified backup repository, and the result of the current data download after the process is complete. DataUpload : Represents a data upload of a volume snapshot. The CSI plugin creates one DataUpload object per CSI snapshot. The DataUpload CR includes information about the specified snapshot, the specified Data Mover, the specified backup repository, the progress of the current data upload, and the result of the current data upload after the process is complete. BackupRepository : Represents and manages the lifecycle of the backup repositories. OADP creates a backup repository per namespace when the first CSI snapshot backup or restore for a namespace is requested. 4.13.1.4. About incremental back up support OADP supports incremental backups of block and Filesystem persistent volumes for both containerized, and OpenShift Virtualization workloads. The following table summarizes the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover: Table 4.6. OADP backup support matrix for containerized workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem S [1] , I [2] S [1] , I [2] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Table 4.7. OADP backup support matrix for OpenShift Virtualization workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem N [3] N [3] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Backup supported Incremental backup supported Not supported Note The CSI Data Mover backups use Kopia regardless of uploaderType . 4.13.2. Backing up and restoring CSI snapshots data movement You can back up and restore persistent volumes by using the OADP 1.3 Data Mover. 4.13.2.1. Backing up persistent volumes with CSI snapshots You can use the OADP Data Mover to back up Container Storage Interface (CSI) volume snapshots to a remote object store. Prerequisites You have access to the cluster with the cluster-admin role. You have installed the OADP Operator. You have included the CSI plugin and enabled the node agent in the DataProtectionApplication custom resource (CR). You have an application with persistent volumes running in a separate namespace. 
You have added the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR. Procedure Create a YAML file for the Backup object, as in the following example: Example Backup CR kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s volumeSnapshotLocations: - dpa-sample-1 # ... 1 Set to true if you use Data Mover only for volumes that opt out of fs-backup . Set to false if you use Data Mover by default for volumes. 2 Set to true to enable movement of CSI snapshots to remote object storage. Note If you format the volume by using XFS filesystem and the volume is at 100% capacity, the backup fails with a no space left on device error. For example: Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ \ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ \ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: \ no space left on device In this scenario, consider resizing the volume or using a different filesystem type, for example, ext4 , so that the backup completes successfully. Apply the manifest: USD oc create -f backup.yaml A DataUpload CR is created after the snapshot creation is complete. Verification Verify that the snapshot data is successfully transferred to the remote object store by monitoring the status.phase field of the DataUpload CR. Possible values are In Progress , Completed , Failed , or Canceled . The object store is configured in the backupLocations stanza of the DataProtectionApplication CR. Run the following command to get a list of all DataUpload objects: USD oc get datauploads -A Example output NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal Check the value of the status.phase field of the specific DataUpload object by running the following command: USD oc get datauploads <dataupload_name> -o yaml Example output apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: "" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: "2023-11-02T16:57:02Z" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: "2023-11-02T16:56:22Z" 1 Indicates that snapshot data is successfully transferred to the remote object store. 4.13.2.2. Restoring CSI volume snapshots You can restore a volume snapshot by creating a Restore CR. Note You cannot restore Volsync backups from OADP 1.2 with the OAPD 1.3 built-in Data Mover. It is recommended to do a file system backup of all of your workloads with Restic prior to upgrading to OADP 1.3. Prerequisites You have access to the cluster with the cluster-admin role. 
You have an OADP Backup CR from which to restore the data. Procedure Create a YAML file for the Restore CR, as in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup> # ... Apply the manifest: USD oc create -f restore.yaml A DataDownload CR is created when the restore starts. Verification You can monitor the status of the restore process by checking the status.phase field of the DataDownload CR. Possible values are In Progress , Completed , Failed , or Canceled . To get a list of all DataDownload objects, run the following command: USD oc get datadownloads -A Example output NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal Enter the following command to check the value of the status.phase field of the specific DataDownload object: USD oc get datadownloads <datadownload_name> -o yaml Example output apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: "" pvc: mysql status: completionTimestamp: "2023-11-02T17:01:24Z" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: "2023-11-02T17:00:52Z" 1 Indicates that the CSI snapshot data is successfully restored. 4.13.2.3. Deletion policy for OADP 1.3 The deletion policy determines rules for removing data from a system, specifying when and how deletion occurs based on factors such as retention periods, data sensitivity, and compliance requirements. It manages data removal effectively while meeting regulations and preserving valuable information. 4.13.2.3.1. Deletion policy guidelines for OADP 1.3 Review the following deletion policy guidelines for the OADP 1.3: In OADP 1.3.x, when using any type of backup and restore methods, you can set the deletionPolicy field to Retain or Delete in the VolumeSnapshotClass custom resource (CR). 4.13.3. Overriding Kopia hashing, encryption, and splitter algorithms You can override the default values of Kopia hashing, encryption, and splitter algorithms by using specific environment variables in the Data Protection Application (DPA). 4.13.3.1. Configuring the DPA to override Kopia hashing, encryption, and splitter algorithms You can use an OpenShift API for Data Protection (OADP) option to override the default Kopia algorithms for hashing, encryption, and splitter to improve Kopia performance or to compare performance metrics. You can set the following environment variables in the spec.configuration.velero.podConfig.env section of the DPA: KOPIA_HASHING_ALGORITHM KOPIA_ENCRYPTION_ALGORITHM KOPIA_SPLITTER_ALGORITHM Prerequisites You have installed the OADP Operator. You have created the secret by using the credentials provided by the cloud provider. Note The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) apply only during the initial Kopia repository creation, and cannot be changed later. To use different Kopia algorithms, ensure that the object storage does not contain any Kopia repositories of backups. 
Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration. Procedure Configure the DPA with the environment variables for hashing, encryption, and splitter as shown in the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6 1 Enable the nodeAgent . 2 Specify the uploaderType as kopia . 3 Include the csi plugin. 4 Specify a hashing algorithm. For example, BLAKE3-256 . 5 Specify an encryption algorithm. For example, CHACHA20-POLY1305-HMAC-SHA256 . 6 Specify a splitter algorithm. For example, DYNAMIC-8M-RABINKARP . 4.13.3.2. Use case for overriding Kopia hashing, encryption, and splitter algorithms The use case example demonstrates taking a backup of an application by using Kopia environment variables for hashing, encryption, and splitter. You store the backup in an AWS S3 bucket. You then verify the environment variables by connecting to the Kopia repository. Prerequisites You have installed the OADP Operator. You have an AWS S3 bucket configured as the backup storage location. You have created the secret by using the credentials provided by the cloud provider. You have installed the Kopia client. You have an application with persistent volumes running in a separate namespace. Procedure Configure the Data Protection Application (DPA) as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8 1 Specify a name for the DPA. 2 Specify the region for the backup storage location. 3 Specify the name of the default Secret object. 4 Specify the AWS S3 bucket name. 5 Include the csi plugin. 6 Specify the hashing algorithm as BLAKE3-256 . 7 Specify the encryption algorithm as CHACHA20-POLY1305-HMAC-SHA256 . 8 Specify the splitter algorithm as DYNAMIC-8M-RABINKARP . Create the DPA by running the following command: USD oc create -f <dpa_file_name> 1 1 Specify the file name of the DPA you configured. Verify that the DPA has reconciled by running the following command: USD oc get dpa -o yaml Create a backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. Create a backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. 
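Because defaultSnapshotMoveData is set to true in this DPA, the backup creates a DataUpload object for each CSI snapshot. While the backup runs, you can optionally watch its progress. This is a sketch, not a required step; it assumes the backup CR name test-backup from the example above and the default openshift-adp namespace:

```bash
# Watch the DataUpload objects created for the backup until their status is Completed
oc get datauploads -n openshift-adp -w

# Check only the phase of the backup CR itself
oc get backups.velero.io test-backup -n openshift-adp -o jsonpath='{.status.phase}{"\n"}'
```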
Verify that the backup completed by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Verification Connect to the Kopia repository by running the following command: USD kopia repository connect s3 \ --bucket=<bucket_name> \ 1 --prefix=velero/kopia/<application_namespace> \ 2 --password=static-passw0rd \ 3 --access-key="<aws_s3_access_key>" \ 4 --secret-access-key="<aws_s3_secret_access_key>" \ 5 1 Specify the AWS S3 bucket name. 2 Specify the namespace for the application. 3 This is the Kopia password to connect to the repository. 4 Specify the AWS S3 access key. 5 Specify the AWS S3 storage provider secret access key. Note If you are using a storage provider other than AWS S3, you will need to add --endpoint , the bucket endpoint URL parameter, to the command. Verify that Kopia uses the environment variables that are configured in the DPA for the backup by running the following command: USD kopia repository status Example output Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> # ... Storage type: s3 Storage capacity: unbounded Storage config: { "bucket": <bucket_name>, "prefix": "velero/kopia/<application_namespace>/", "endpoint": "s3.amazonaws.com", "accessKeyID": <access_key>, "secretAccessKey": "****************************************", "sessionToken": "" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3 # ... 4.13.3.3. Benchmarking Kopia hashing, encryption, and splitter algorithms You can run Kopia commands to benchmark the hashing, encryption, and splitter algorithms. Based on the benchmarking results, you can select the most suitable algorithm for your workload. In this procedure, you run the Kopia benchmarking commands from a pod on the cluster. The benchmarking results can vary depending on CPU speed, available RAM, disk speed, current I/O load, and so on. Prerequisites You have installed the OADP Operator. You have an application with persistent volumes running in a separate namespace. You have run a backup of the application with Container Storage Interface (CSI) snapshots. Note The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) apply only during the initial Kopia repository creation, and cannot be changed later. To use different Kopia algorithms, ensure that the object storage does not contain any Kopia repositories of backups. Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration. Procedure Configure the must-gather pod as shown in the following example. Make sure you are using the oadp-mustgather image for OADP version 1.3 and later. Example pod configuration apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: ["sleep"] args: ["infinity"] Note The Kopia client is available in the oadp-mustgather image. Create the pod by running the following command: USD oc apply -f <pod_config_file_name> 1 1 Specify the name of the YAML file for the pod configuration. Verify that the Security Context Constraints (SCC) on the pod is anyuid , so that Kopia can connect to the repository. 
USD oc describe pod/oadp-mustgather-pod | grep scc Example output openshift.io/scc: anyuid Connect to the pod via SSH by running the following command: USD oc -n openshift-adp rsh pod/oadp-mustgather-pod Connect to the Kopia repository by running the following command: sh-5.1# kopia repository connect s3 \ --bucket=<bucket_name> \ 1 --prefix=velero/kopia/<application_namespace> \ 2 --password=static-passw0rd \ 3 --access-key="<access_key>" \ 4 --secret-access-key="<secret_access_key>" \ 5 --endpoint=<bucket_endpoint> \ 6 1 Specify the object storage provider bucket name. 2 Specify the namespace for the application. 3 This is the Kopia password to connect to the repository. 4 Specify the object storage provider access key. 5 Specify the object storage provider secret access key. 6 Specify the bucket endpoint. You do not need to specify the bucket endpoint, if you are using AWS S3 as the storage provider. Note This is an example command. The command can vary based on the object storage provider. To benchmark the hashing algorithm, run the following command: sh-5.1# kopia benchmark hashing Example output Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256 To benchmark the encryption algorithm, run the following command: sh-5.1# kopia benchmark encryption Example output Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. 
CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256 To benchmark the splitter algorithm, run the following command: sh-5.1# kopia benchmark splitter Example output splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 # ... FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # ... 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 4.14. Troubleshooting You can debug Velero custom resources (CRs) by using the OpenShift CLI tool or the Velero CLI tool . The Velero CLI tool provides more detailed logs and information. You can check installation issues , backup and restore CR issues , and Restic issues . You can collect logs and CR information by using the must-gather tool . You can obtain the Velero CLI tool by: Downloading the Velero CLI tool Accessing the Velero binary in the Velero deployment in the cluster 4.14.1. Downloading the Velero CLI tool You can download and install the Velero CLI tool by following the instructions on the Velero documentation page . The page includes instructions for: macOS by using Homebrew GitHub Windows by using Chocolatey Prerequisites You have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. You have installed kubectl locally. Procedure Open a browser and navigate to "Install the CLI" on the Velero website . Follow the appropriate procedure for macOS, GitHub, or Windows. Download the Velero version appropriate for your version of OADP and OpenShift Container Platform. 4.14.1.1. OADP-Velero-OpenShift Container Platform version relationship OADP version Velero version OpenShift Container Platform version 1.3.0 1.12 4.12-4.15 1.3.1 1.12 4.12-4.15 1.3.2 1.12 4.12-4.15 1.3.3 1.12 4.12-4.15 1.3.4 1.12 4.12-4.15 1.3.5 1.12 4.12-4.15 1.4.0 1.14 4.14-4.18 1.4.1 1.14 4.14-4.18 1.4.2 1.14 4.14-4.18 1.4.3 1.14 4.14-4.18 4.14.2. Accessing the Velero binary in the Velero deployment in the cluster You can use a shell command to access the Velero binary in the Velero deployment in the cluster. Prerequisites Your DataProtectionApplication custom resource has a status of Reconcile complete . Procedure Enter the following command to set the needed alias: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' 4.14.3. Debugging Velero resources with the OpenShift CLI tool You can debug a failed backup or restore by checking Velero custom resources (CRs) and the Velero pod log with the OpenShift CLI tool. 
Velero CRs Use the oc describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc describe <velero_cr> <cr_name> Velero pod logs Use the oc logs command to retrieve the Velero pod logs: USD oc logs pod/<velero> Velero pod debug logs You can specify the Velero log level in the DataProtectionApplication resource as shown in the following example. Note This option is available starting from OADP 1.0.3. apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning The following logLevel values are available: trace debug info warning error fatal panic It is recommended to use the info logLevel value for most logs. 4.14.4. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql The following types of restore errors and warnings are shown in the output of a velero describe request: Velero : A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on Cluster : A list of messages related to backing up or restoring cluster-scoped resources Namespaces : A list of messages related to backing up or restoring resources stored in namespaces One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed . Warnings do not lead to a change in the completion status. Important For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster. If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads . If none of these are failed or still running, then volume data might have been fully restored.
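For example, a quick way to make that check is to list the relevant objects and review their status. This is a minimal sketch that assumes OADP is installed in the default openshift-adp namespace; DataDownload objects exist only when the built-in Data Mover is in use:

```bash
# List the pod volume restores created for the restore and review their status
oc -n openshift-adp get podvolumerestores

# List the Data Mover downloads created for the restore (OADP 1.3 and later)
# and check the STATUS column for Failed or InProgress entries
oc -n openshift-adp get datadownloads

# Inspect a specific object in detail if its status looks wrong
oc -n openshift-adp get datadownloads <datadownload_name> -o yaml
```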
Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 4.14.5. Pods crash or restart due to lack of memory or CPU If a Velero or Restic pod crashes due to a lack of memory or CPU, you can set specific resource requests for either of those resources. Additional resources CPU and memory requirements 4.14.5.1. Setting resource requests for a Velero pod You can use the configuration.velero.podConfig.resourceAllocations specification field in the oadp_v1alpha1_dpa.yaml file to set specific resource requests for a Velero pod. Procedure Set the cpu and memory resource requests in the YAML file: Example Velero file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi 1 The resourceAllocations listed are for average usage. 4.14.5.2. Setting resource requests for a Restic pod You can use the configuration.restic.podConfig.resourceAllocations specification field to set specific resource requests for a Restic pod. Procedure Set the cpu and memory resource requests in the YAML file: Example Restic file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi 1 The resourceAllocations listed are for average usage. Important The values for the resource request fields must follow the same format as Kubernetes resource requirements. Also, if you do not specify configuration.velero.podConfig.resourceAllocations or configuration.restic.podConfig.resourceAllocations , the default resources specification for a Velero pod or a Restic pod is as follows: requests: cpu: 500m memory: 128Mi 4.14.6. PodVolumeRestore fails to complete when StorageClass is NFS The restore operation fails when there is more than one volume during a NFS restore by using Restic or Kopia . PodVolumeRestore either fails with the following error or keeps trying to restore before finally failing. Error message Velero: pod volume restore failed: data path restore failed: \ Failed to run kopia restore: Failed to copy snapshot data to the target: \ restore error: copy file: error creating file: \ open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: \ no such file or directory Cause The NFS mount path is not unique for the two volumes to restore. As a result, the velero lock files use the same file on the NFS server during the restore, causing the PodVolumeRestore to fail. Solution You can resolve this issue by setting up a unique pathPattern for each volume, while defining the StorageClass for nfs-subdir-external-provisioner in the deploy/class.yaml file. Use the following nfs-subdir-external-provisioner StorageClass example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: "USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}" 1 onDelete: delete 1 Specifies a template for creating a directory path by using PVC metadata such as labels, annotations, name, or namespace. To specify metadata, use USD{.PVC.<metadata>} . 
For example, to name a folder: <pvc-namespace>-<pvc-name> , use USD{.PVC.namespace}-USD{.PVC.name} as pathPattern . 4.14.7. Issues with Velero and admission webhooks Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plugin or make changes to how you restore the workload. Typically, workloads with admission webhooks require you to create a resource of a specific kind first. This is especially true if your workload has child resources because admission webhooks typically block child resources. For example, creating or restoring a top-level object such as service.serving.knative.dev typically creates child resources automatically. If you do this first, you will not need to use Velero to create and restore these resources. This avoids the problem of child resources being blocked by an admission webhook that Velero might use. 4.14.7.1. Restoring workarounds for Velero backups that use admission webhooks This section describes the additional steps required to restore resources for several types of Velero backups that use admission webhooks. 4.14.7.1.1. Restoring Knative resources You might encounter problems using Velero to back up Knative resources that use admission webhooks. You can avoid such problems by restoring the top level Service resource first whenever you back up and restore Knative resources that use admission webhooks. Procedure Restore the top level service.serving.knative.dev Service resource: USD velero restore <restore_name> \ --from-backup=<backup_name> --include-resources \ service.serving.knative.dev 4.14.7.1.2. Restoring IBM AppConnect resources If you experience issues when you use Velero to restore an IBM(R) AppConnect resource that has an admission webhook, you can run the checks in this procedure. Procedure Check if you have any mutating admission plugins of kind: MutatingWebhookConfiguration in the cluster: USD oc get mutatingwebhookconfigurations Examine the YAML file of each kind: MutatingWebhookConfiguration to ensure that none of its rules block creation of the objects that are experiencing issues. For more information, see the official Kubernetes documentation . Check that any spec.version in type: Configuration.appconnect.ibm.com/v1beta1 used at backup time is supported by the installed Operator. 4.14.7.2. OADP plugins known issues The following section describes known issues in OpenShift API for Data Protection (OADP) plugins: 4.14.7.2.1. Velero plugin panics during imagestream backups due to a missing secret When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation, does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret . When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error: 024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94... 4.14.7.2.1.1.
Workaround to avoid the panic error To avoid the Velero plugin panic error, perform the following steps: Label the custom BSL with the relevant label: USD oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl After the BSL is labeled, wait until the DPA reconciles. Note You can force the reconciliation by making any minor change to the DPA itself. When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it: USD oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data' 4.14.7.2.2. OpenShift ADP Controller segmentation fault If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault. You can have either velero or cloudstorage defined, because they are mutually exclusive fields. If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails. If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails. For more information about this issue, see OADP-1054 . 4.14.7.2.2.1. OpenShift ADP Controller segmentation fault workaround You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault. 4.14.7.3. Velero plugins returning "received EOF, stopping recv loop" message Note Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred. Additional resources Admission plugins Webhook admission plugins Types of webhook admission plugins 4.14.8. Installation issues You might encounter issues caused by using invalid directories or incorrect credentials when you install the Data Protection Application. 4.14.8.1. Backup storage contains invalid directories The Velero pod log displays the error message, Backup storage contains invalid top-level directories . Cause The object storage contains top-level directories that are not Velero directories. Solution If the object storage is not dedicated to Velero, you must specify a prefix for the bucket by setting the spec.backupLocations.velero.objectStorage.prefix parameter in the DataProtectionApplication manifest. 4.14.8.2. Incorrect AWS credentials The oadp-aws-registry pod log displays the error message, InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records. The Velero pod log displays the error message, NoCredentialProviders: no valid providers in chain . Cause The credentials-velero file used to create the Secret object is incorrectly formatted. Solution Ensure that the credentials-velero file is correctly formatted, as in the following example: Example credentials-velero file [default] 1 aws_access_key_id=<AWS_ACCESS_KEY_ID> 2 aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> 1 AWS default profile. 2 Do not enclose the values with quotation marks ( " , ' ). 4.14.9. OADP Operator issues The OpenShift API for Data Protection (OADP) Operator might encounter issues caused by problems it is not able to resolve. 4.14.9.1.
OADP Operator fails silently The S3 buckets of an OADP Operator might be empty, but when you run the command oc get po -n <OADP_Operator_namespace> , you see that the Operator has a status of Running . In such a case, the Operator is said to have failed silently because it incorrectly reports that it is running. Cause The problem is caused when cloud credentials provide insufficient permissions. Solution Retrieve a list of backup storage locations (BSLs) and check the manifest of each BSL for credential issues. Procedure Run one of the following commands to retrieve a list of BSLs: Using the OpenShift CLI: USD oc get backupstoragelocations.velero.io -A Using the Velero CLI: USD velero backup-location get -n <OADP_Operator_namespace> Using the list of BSLs, run the following command to display the manifest of each BSL, and examine each manifest for an error. USD oc get backupstoragelocations.velero.io -n <namespace> -o yaml Example result apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: "2023-11-03T19:49:04Z" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: "24273698" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: "true" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: "2023-11-10T22:06:46Z" message: "BackupStorageLocation \"example-dpa-1\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54" phase: Unavailable kind: List metadata: resourceVersion: "" 4.14.10. OADP timeouts Extending a timeout allows complex or resource-intensive processes to complete successfully without premature termination. This configuration can reduce the likelihood of errors, retries, or failures. Ensure that you balance timeout extensions in a logical manner so that you do not configure excessively long timeouts that might hide underlying issues in the process. Carefully consider and monitor an appropriate timeout value that meets the needs of the process and the overall system performance. The following are various OADP timeouts, with instructions of how and when to implement these parameters: 4.14.10.1. Restic timeout The spec.configuration.nodeAgent.timeout parameter defines the Restic timeout. The default value is 1h . Use the Restic timeout parameter in the nodeAgent section for the following scenarios: For Restic backups with total PV data usage that is greater than 500GB. If backups are timing out with the following error: level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" Procedure Edit the values in the spec.configuration.nodeAgent.timeout block of the DataProtectionApplication custom resource (CR) manifest, as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h # ... 4.14.10.2. 
Velero resource timeout resourceTimeout defines how long to wait for several Velero resources before timeout occurs, such as Velero custom resource definition (CRD) availability, volumeSnapshot deletion, and repository availability. The default is 10m . Use the resourceTimeout for the following scenarios: For backups with total PV data usage that is greater than 1TB. This parameter is used as a timeout value when Velero tries to clean up or delete the Container Storage Interface (CSI) snapshots, before marking the backup as complete. A sub-task of this cleanup tries to patch VSC and this timeout can be used for that task. To create or ensure a backup repository is ready for filesystem based backups for Restic or Kopia. To check if the Velero CRD is available in the cluster before restoring the custom resource (CR) or resource from the backup. Procedure Edit the values in the spec.configuration.velero.resourceTimeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m # ... 4.14.10.3. Data Mover timeout timeout is a user-supplied timeout to complete VolumeSnapshotBackup and VolumeSnapshotRestore . The default value is 10m . Use the Data Mover timeout for the following scenarios: If creation of VolumeSnapshotBackups (VSBs) and VolumeSnapshotRestores (VSRs), times out after 10 minutes. For large scale environments with total PV data usage that is greater than 500GB. Set the timeout for 1h . With the VolumeSnapshotMover (VSM) plugin. Only with OADP 1.1.x. Procedure Edit the values in the spec.features.dataMover.timeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m # ... 4.14.10.4. CSI snapshot timeout CSISnapshotTimeout specifies the time during creation to wait until the CSI VolumeSnapshot status becomes ReadyToUse , before returning error as timeout. The default value is 10m . Use the CSISnapshotTimeout for the following scenarios: With the CSI plugin. For very large storage volumes that may take longer than 10 minutes to snapshot. Adjust this timeout if timeouts are found in the logs. Note Typically, the default value for CSISnapshotTimeout does not require adjustment, because the default setting can accommodate large storage volumes. Procedure Edit the values in the spec.csiSnapshotTimeout block of the Backup CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m # ... 4.14.10.5. Velero default item operation timeout defaultItemOperationTimeout defines how long to wait on asynchronous BackupItemActions and RestoreItemActions to complete before timing out. The default value is 1h . Use the defaultItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. To specify the amount of time a particular backup or restore should wait for the Asynchronous actions to complete. In the context of OADP features, this value is used for the Asynchronous actions involved in the Container Storage Interface (CSI) Data Mover feature. When defaultItemOperationTimeout is defined in the Data Protection Application (DPA) using the defaultItemOperationTimeout , it applies to both backup and restore operations. 
You can use itemOperationTimeout to define only the backup or only the restore of those CRs, as described in the following "Item operation timeout - restore", and "Item operation timeout - backup" sections. Procedure Edit the values in the spec.configuration.velero.defaultItemOperationTimeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h # ... 4.14.10.6. Item operation timeout - restore ItemOperationTimeout specifies the time that is used to wait for RestoreItemAction operations. The default value is 1h . Use the restore ItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. For Data Mover uploads and downloads to or from the BackupStorageLocation . If the restore action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Restore.spec.itemOperationTimeout block of the Restore CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h # ... 4.14.10.7. Item operation timeout - backup ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations. The default value is 1h . Use the backup ItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. For Data Mover uploads and downloads to or from the BackupStorageLocation . If the backup action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Backup.spec.itemOperationTimeout block of the Backup CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h # ... 4.14.11. Backup and Restore CR issues You might encounter these common issues with Backup and Restore custom resources (CRs). 4.14.11.1. Backup CR cannot retrieve volume The Backup CR displays the error message, InvalidVolume.NotFound: The volume 'vol-xxxx' does not exist . Cause The persistent volume (PV) and the snapshot locations are in different regions. Solution Edit the value of the spec.snapshotLocations.velero.config.region key in the DataProtectionApplication manifest so that the snapshot location is in the same region as the PV. Create a new Backup CR. 4.14.11.2. Backup CR status remains in progress The status of a Backup CR remains in the InProgress phase and does not complete. Cause If a backup is interrupted, it cannot be resumed. Solution Retrieve the details of the Backup CR: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ backup describe <backup> Delete the Backup CR: USD oc delete backups.velero.io <backup> -n openshift-adp You do not need to clean up the backup location because a Backup CR in progress has not uploaded files to object storage. Create a new Backup CR. View the Velero backup details USD velero backup describe <backup-name> --details 4.14.11.3. Backup CR status remains in PartiallyFailed The status of a Backup CR without Restic in use remains in the PartiallyFailed phase and does not complete. 
A snapshot of the affiliated PVC is not created. Cause If the backup is created based on the CSI snapshot class, but the label is missing, CSI snapshot plugin fails to create a snapshot. As a result, the Velero pod logs an error similar to the following: time="2023-02-17T16:33:13Z" level=error msg="Error backing up item" backup=openshift-adp/user1-backup-check5 error="error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label" logSource="/remote-source/velero/app/pkg/backup/backup.go:417" name=busybox-79799557b5-vprq Solution Delete the Backup CR: USD oc delete backups.velero.io <backup> -n openshift-adp If required, clean up the stored data on the BackupStorageLocation to free up space. Apply label velero.io/csi-volumesnapshot-class=true to the VolumeSnapshotClass object: USD oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true Create a new Backup CR. 4.14.12. Restic issues You might encounter these issues when you back up applications with Restic. 4.14.12.1. Restic permission error for NFS data volumes with root_squash enabled The Restic pod log displays the error message: controller=pod-volume-backup error="fork/exec/usr/bin/restic: permission denied" . Cause If your NFS data volumes have root_squash enabled, Restic maps to nfsnobody and does not have permission to create backups. Solution You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the DataProtectionApplication manifest: Create a supplemental group for Restic on the NFS data volume. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the spec.configuration.nodeAgent.supplementalGroups parameter and the group ID to the DataProtectionApplication manifest, as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # ... spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1 # ... 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 4.14.12.2. Restic Backup CR cannot be recreated after bucket is emptied If you create a Restic Backup CR for a namespace, empty the object storage bucket, and then recreate the Backup CR for the same namespace, the recreated Backup CR fails. The velero pod log displays the following error message: stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location? . Cause Velero does not recreate or update the Restic repository from the ResticRepository manifest if the Restic directories are deleted from object storage. See Velero issue 4421 for more information. Solution Remove the related Restic repository from the namespace by running the following command: USD oc delete resticrepository openshift-adp <name_of_the_restic_repository> In the following error log, mysql-persistent is the problematic Restic repository. The name of the repository appears in italics for clarity. 
time="2021-12-29T18:29:14Z" level=info msg="1 errors encountered backup up item" backup=velero/backup65 logSource="pkg/backup/backup.go:431" name=mysql-7d99fc949-qbkds time="2021-12-29T18:29:14Z" level=error msg="Error backing up item" backup=velero/backup65 error="pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location?\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \n: exit status 1" error.file="/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184" error.function="github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes" logSource="pkg/backup/backup.go:435" name=mysql-7d99fc949-qbkds 4.14.12.3. Restic restore partially failing on OCP 4.14 due to changed PSA policy OpenShift Container Platform 4.14 enforces a Pod Security Admission (PSA) policy that can hinder the readiness of pods during a Restic restore process. If a SecurityContextConstraints (SCC) resource is not found when a pod is created, and the PSA policy on the pod is not set up to meet the required standards, pod admission is denied. This issue arises due to the resource restore order of Velero. Sample error \"level=error\" in line#2273: time=\"2023-06-12T06:50:04Z\" level=error msg=\"error restoring mysql-869f9f44f6-tp5lv: pods\\\ "mysql-869f9f44f6-tp5lv\\\" is forbidden: violates PodSecurity\\\ "restricted:v1.24\\\": privil eged (container \\\"mysql\\\ " must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), seccompProfile (pod or containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\ "RuntimeDefault\\\" or \\\"Localhost\\\")\" logSource=\"/remote-source/velero/app/pkg/restore/restore.go:1388\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\n velero container contains \"level=error\" in line#2447: time=\"2023-06-12T06:50:05Z\" level=error msg=\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\ "mysql-869f9f44f6-tp5lv\\\" is forbidden: violates PodSecurity \\\"restricted:v1.24\\\": privileged (container \\\ "mysql\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\ "restic-wait\\\",\\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), seccompProfile (pod or containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\ "RuntimeDefault\\\" or \\\"Localhost\\\")\" logSource=\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\n]", Solution In your DPA custom resource (CR), check or set the restore-resource-priorities field on the Velero server to ensure that securitycontextconstraints is listed in order before pods in the list of resources: USD oc get dpa -o yaml Example DPA CR # ... 
configuration: restic: enable: true velero: args: restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' 1 defaultPlugins: - gcp - openshift 1 If you have an existing restore resource priority list, ensure you combine that existing list with the complete list. Ensure that the security standards for the application pods are aligned, as provided in Fixing PodSecurity Admission warnings for deployments , to prevent deployment warnings. If the application is not aligned with security standards, an error can occur regardless of the SCC. Note This solution is temporary, and ongoing discussions are in progress to address it. Additional resources Fixing PodSecurity Admission warnings for deployments 4.14.13. Using the must-gather tool You can collect logs, metrics, and information about OADP custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can run the must-gather tool with the following data collection options: Full must-gather data collection collects Prometheus metrics, pod logs, and Velero CR information for all namespaces where the OADP Operator is installed. Essential must-gather data collection collects pod logs and Velero CR information for a specific duration of time, for example, one hour or 24 hours. Prometheus metrics and duplicate logs are not included. must-gather data collection with timeout. Data collection can take a long time if there are many failed Backup CRs. You can improve performance by setting a timeout value. Prometheus metrics data dump downloads an archive file containing the metrics data collected by Prometheus. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. You must use Red Hat Enterprise Linux (RHEL) 9 with OADP 1.4. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: Full must-gather data collection, including Prometheus metrics: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 The data is saved as must-gather/must-gather.tar.gz . You can upload this file to a support case on the Red Hat Customer Portal . Essential must-gather data collection, without Prometheus metrics, for a specific time duration: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 \ -- /usr/bin/gather_<time>_essential 1 1 Specify the time in hours. Allowed values are 1h , 6h , 24h , 72h , or all , for example, gather_1h_essential or gather_all_essential . must-gather data collection with timeout: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 \ -- /usr/bin/gather_with_timeout <timeout> 1 1 Specify a timeout value in seconds. Prometheus metrics data dump: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_metrics_dump This operation can take a long time. 
The data is saved as must-gather/metrics/prom_data.tar.gz . Additional resources Gathering cluster data 4.14.13.1. Using must-gather with insecure TLS connections If a custom CA certificate is used, the must-gather pod fails to grab the output for velero logs/describe . To use the must-gather tool with insecure TLS connections, you can pass the gather_without_tls flag to the must-gather command. Procedure Pass the gather_without_tls flag, with value set to true , to the must-gather tool by using the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls <true/false> By default, the flag value is set to false . Set the value to true to allow insecure TLS connections. 4.14.13.2. Combining options when using the must-gather tool Currently, it is not possible to combine must-gather scripts, for example specifying a timeout threshold while permitting insecure TLS connections. In some situations, you can get around this limitation by setting up internal variables on the must-gather command line, such as the following example: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds> In this example, set the skip_tls variable before running the gather_with_timeout script. The result is a combination of gather_with_timeout and gather_without_tls . The only other variables that you can specify this way are the following: logs_since , with a default value of 72h request_timeout , with a default value of 0s If DataProtectionApplication custom resource (CR) is configured with s3Url and insecureSkipTLS: true , the CR does not collect the necessary logs because of a missing CA certificate. To collect those logs, run the must-gather command with the following option: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls true 4.14.14. OADP Monitoring The OpenShift Container Platform provides a monitoring stack that allows users and administrators to effectively monitor and manage their clusters, as well as monitor and analyze the workload performance of user applications and services running on the clusters, including receiving alerts if an event occurs. Additional resources About OpenShift Container Platform monitoring 4.14.14.1. OADP monitoring setup The OADP Operator leverages an OpenShift User Workload Monitoring provided by the OpenShift Monitoring Stack for retrieving metrics from the Velero service endpoint. The monitoring stack allows creating user-defined Alerting Rules or querying metrics by using the OpenShift Metrics query front end. With enabled User Workload Monitoring, it is possible to configure and use any Prometheus-compatible third-party UI, such as Grafana, to visualize Velero metrics. Monitoring metrics requires enabling monitoring for the user-defined projects and creating a ServiceMonitor resource to scrape those metrics from the already enabled OADP service endpoint that resides in the openshift-adp namespace. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have created a cluster monitoring config map. 
Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring Add or enable the enableUserWorkload option in the data section's config.yaml field: apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata: # ... 1 Add this option or set to true Wait a short period of time to verify the User Workload Monitoring Setup by checking if the following components are up and running in the openshift-user-workload-monitoring namespace: USD oc get pods -n openshift-user-workload-monitoring Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s Verify the existence of the user-workload-monitoring-config ConfigMap in the openshift-user-workload-monitoring . If it exists, skip the remaining steps in this procedure. USD oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring Example output Error from server (NotFound): configmaps "user-workload-monitoring-config" not found Create a user-workload-monitoring-config ConfigMap object for the User Workload Monitoring, and save it under the 2_configure_user_workload_monitoring.yaml file name: Example output apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | Apply the 2_configure_user_workload_monitoring.yaml file: USD oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created 4.14.14.2. Creating OADP service monitor OADP provides an openshift-adp-velero-metrics-svc service which is created when the DPA is configured. The service monitor used by the user workload monitoring must point to the defined service. Get details about the service by running the following commands: Procedure Ensure the openshift-adp-velero-metrics-svc service exists. It should contain app.kubernetes.io/name=velero label, which will be used as selector for the ServiceMonitor object. USD oc get svc -n openshift-adp -l app.kubernetes.io/name=velero Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h Create a ServiceMonitor YAML file that matches the existing service label, and save the file as 3_create_oadp_service_monitor.yaml . The service monitor is created in the openshift-adp namespace where the openshift-adp-velero-metrics-svc service resides. Example ServiceMonitor object apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: "velero" Apply the 3_create_oadp_service_monitor.yaml file: USD oc apply -f 3_create_oadp_service_monitor.yaml Example output servicemonitor.monitoring.coreos.com/oadp-service-monitor created Verification Confirm that the new service monitor is in an Up state by using the Administrator perspective of the OpenShift Container Platform web console: Navigate to the Observe Targets page. Ensure the Filter is unselected or that the User source is selected and type openshift-adp in the Text search field. 
Verify that the Status for the service monitor is Up . Figure 4.1. OADP metrics targets 4.14.14.3. Creating an alerting rule The OpenShift Container Platform monitoring stack allows you to receive alerts configured by using alerting rules. To create an alerting rule for the OADP project, use one of the metrics that are scraped by the user workload monitoring. Procedure Create a PrometheusRule YAML file with the sample OADPBackupFailing alert and save it as 4_create_oadp_alert_rule.yaml . Sample OADPBackupFailing alert apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"}[2h]) > 0 for: 5m labels: severity: warning In this sample, the alert displays under the following conditions: The number of new failing backups over the last 2 hours is greater than 0 and the state persists for at least 5 minutes. If the time of the first increase is less than 5 minutes, the alert will be in a Pending state, after which it will turn into a Firing state. Apply the 4_create_oadp_alert_rule.yaml file, which creates the PrometheusRule object in the openshift-adp namespace: USD oc apply -f 4_create_oadp_alert_rule.yaml Example output prometheusrule.monitoring.coreos.com/sample-oadp-alert created Verification After the alert is triggered, you can view it in the following ways: In the Developer perspective, select the Observe menu. In the Administrator perspective under the Observe Alerting menu, select User in the Filter box. Otherwise, by default only the Platform Alerts are displayed. Figure 4.2. OADP backup failing alert Additional resources Managing alerts as an Administrator 4.14.14.4. List of available metrics The following table lists the metrics provided by OADP, together with their types.
Metric name Description Type kopia_content_cache_hit_bytes Number of bytes retrieved from the cache Counter kopia_content_cache_hit_count Number of times content was retrieved from the cache Counter kopia_content_cache_malformed Number of times malformed content was read from the cache Counter kopia_content_cache_miss_count Number of times content was not found in the cache and fetched Counter kopia_content_cache_missed_bytes Number of bytes retrieved from the underlying storage Counter kopia_content_cache_miss_error_count Number of times content could not be found in the underlying storage Counter kopia_content_cache_store_error_count Number of times content could not be saved in the cache Counter kopia_content_get_bytes Number of bytes retrieved using GetContent() Counter kopia_content_get_count Number of times GetContent() was called Counter kopia_content_get_error_count Number of times GetContent() was called and the result was an error Counter kopia_content_get_not_found_count Number of times GetContent() was called and the result was not found Counter kopia_content_write_bytes Number of bytes passed to WriteContent() Counter kopia_content_write_count Number of times WriteContent() was called Counter velero_backup_attempt_total Total number of attempted backups Counter velero_backup_deletion_attempt_total Total number of attempted backup deletions Counter velero_backup_deletion_failure_total Total number of failed backup deletions Counter velero_backup_deletion_success_total Total number of successful backup deletions Counter velero_backup_duration_seconds Time taken to complete backup, in seconds Histogram velero_backup_failure_total Total number of failed backups Counter velero_backup_items_errors Total number of errors encountered during backup Gauge velero_backup_items_total Total number of items backed up Gauge velero_backup_last_status Last status of the backup. A value of 1 is success, 0. Gauge velero_backup_last_successful_timestamp Last time a backup ran successfully, Unix timestamp in seconds Gauge velero_backup_partial_failure_total Total number of partially failed backups Counter velero_backup_success_total Total number of successful backups Counter velero_backup_tarball_size_bytes Size, in bytes, of a backup Gauge velero_backup_total Current number of existent backups Gauge velero_backup_validation_failure_total Total number of validation failed backups Counter velero_backup_warning_total Total number of warned backups Counter velero_csi_snapshot_attempt_total Total number of CSI attempted volume snapshots Counter velero_csi_snapshot_failure_total Total number of CSI failed volume snapshots Counter velero_csi_snapshot_success_total Total number of CSI successful volume snapshots Counter velero_restore_attempt_total Total number of attempted restores Counter velero_restore_failed_total Total number of failed restores Counter velero_restore_partial_failure_total Total number of partially failed restores Counter velero_restore_success_total Total number of successful restores Counter velero_restore_total Current number of existent restores Gauge velero_restore_validation_failed_total Total number of failed restores failing validations Counter velero_volume_snapshot_attempt_total Total number of attempted volume snapshots Counter velero_volume_snapshot_failure_total Total number of failed volume snapshots Counter velero_volume_snapshot_success_total Total number of successful volume snapshots Counter 4.14.14.5. 
Viewing metrics using the Observe UI You can view metrics in the OpenShift Container Platform web console from the Administrator or Developer perspective, which must have access to the openshift-adp project. Procedure Navigate to the Observe Metrics page: If you are using the Developer perspective, follow these steps: Select Custom query , or click on the Show PromQL link. Type the query and click Enter . If you are using the Administrator perspective, type the expression in the text field and select Run Queries . Figure 4.3. OADP metrics query 4.15. APIs used with OADP The document provides information about the following APIs that you can use with OADP: Velero API OADP API 4.15.1. Velero API Velero API documentation is maintained by Velero, not by Red Hat. It can be found at Velero API types . 4.15.2. OADP API The following tables provide the structure of the OADP API: Table 4.8. DataProtectionApplicationSpec Property Type Description backupLocations [] BackupLocation Defines the list of configurations to use for BackupStorageLocations . snapshotLocations [] SnapshotLocation Defines the list of configurations to use for VolumeSnapshotLocations . unsupportedOverrides map [ UnsupportedImageKey ] string Can be used to override the deployed dependent images for development. Options are veleroImageFqin , awsPluginImageFqin , openshiftPluginImageFqin , azurePluginImageFqin , gcpPluginImageFqin , csiPluginImageFqin , dataMoverImageFqin , resticRestoreImageFqin , kubevirtPluginImageFqin , and operator-type . podAnnotations map [ string ] string Used to add annotations to pods deployed by Operators. podDnsPolicy DNSPolicy Defines the configuration of the DNS of a pod. podDnsConfig PodDNSConfig Defines the DNS parameters of a pod in addition to those generated from DNSPolicy . backupImages * bool Used to specify whether or not you want to deploy a registry for enabling backup and restore of images. configuration * ApplicationConfig Used to define the data protection application's server configuration. features * Features Defines the configuration for the DPA to enable the Technology Preview features. Complete schema definitions for the OADP API . Table 4.9. BackupLocation Property Type Description velero * velero.BackupStorageLocationSpec Location to store volume snapshots, as described in Backup Storage Location . bucket * CloudStorageLocation [Technology Preview] Automates creation of a bucket at some cloud storage providers for use as a backup storage location. Important The bucket parameter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Complete schema definitions for the type BackupLocation . Table 4.10. SnapshotLocation Property Type Description velero * VolumeSnapshotLocationSpec Location to store volume snapshots, as described in Volume Snapshot Location . Complete schema definitions for the type SnapshotLocation . Table 4.11. ApplicationConfig Property Type Description velero * VeleroConfig Defines the configuration for the Velero server. restic * ResticConfig Defines the configuration for the Restic server. 
Complete schema definitions for the type ApplicationConfig . Table 4.12. VeleroConfig Property Type Description featureFlags [] string Defines the list of features to enable for the Velero instance. defaultPlugins [] string The following types of default Velero plugins can be installed: aws , azure , csi , gcp , kubevirt , and openshift . customPlugins [] CustomPlugin Used for installation of custom Velero plugins. Default and custom plugins are described in OADP plugins restoreResourcesVersionPriority string Represents a config map that is created if defined for use in conjunction with the EnableAPIGroupVersions feature flag. Defining this field automatically adds EnableAPIGroupVersions to the Velero server feature flag. noDefaultBackupLocation bool To install Velero without a default backup storage location, you must set the noDefaultBackupLocation flag in order to confirm installation. podConfig * PodConfig Defines the configuration of the Velero pod. logLevel string Velero server's log level (use debug for the most granular logging, leave unset for Velero default). Valid options are trace , debug , info , warning , error , fatal , and panic . Complete schema definitions for the type VeleroConfig . Table 4.13. CustomPlugin Property Type Description name string Name of custom plugin. image string Image of custom plugin. Complete schema definitions for the type CustomPlugin . Table 4.14. ResticConfig Property Type Description enable * bool If set to true , enables backup and restore using Restic. If set to false , snapshots are needed. supplementalGroups [] int64 Defines the Linux groups to be applied to the Restic pod. timeout string A user-supplied duration string that defines the Restic timeout. Default value is 1hr (1 hour). A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h` or 2h45m . Valid time units are ns , us (or ms ), ms , s , m , and h . podConfig * PodConfig Defines the configuration of the Restic pod. Complete schema definitions for the type ResticConfig . Table 4.15. PodConfig Property Type Description nodeSelector map [ string ] string Defines the nodeSelector to be supplied to a Velero podSpec or a Restic podSpec . For more details, see Configuring node agents and node labels . tolerations [] Toleration Defines the list of tolerations to be applied to a Velero deployment or a Restic daemonset . resourceAllocations ResourceRequirements Set specific resource limits and requests for a Velero pod or a Restic pod as described in Setting Velero CPU and memory resource allocations . labels map [ string ] string Labels to add to pods. 4.15.2.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. 
For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" Complete schema definitions for the type PodConfig . Table 4.16. Features Property Type Description dataMover * DataMover Defines the configuration of the Data Mover. Complete schema definitions for the type Features . Table 4.17. DataMover Property Type Description enable bool If set to true , deploys the volume snapshot mover controller and a modified CSI Data Mover plugin. If set to false , these are not deployed. credentialName string User-supplied Restic Secret name for Data Mover. timeout string A user-supplied duration string for VolumeSnapshotBackup and VolumeSnapshotRestore to complete. Default is 10m (10 minutes). A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h` or 2h45m . Valid time units are ns , us (or ms ), ms , s , m , and h . The OADP API is more fully detailed in OADP Operator . 4.16. Advanced OADP features and functionalities This document provides information about advanced features and functionalities of OpenShift API for Data Protection (OADP). 4.16.1. Working with different Kubernetes API versions on the same cluster 4.16.1.1. Listing the Kubernetes API group versions on a cluster A source cluster might offer multiple versions of an API, where one of these versions is the preferred API version. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups. If you use Velero to back up and restore such a source cluster, Velero backs up only the version of that resource that uses the preferred version of its Kubernetes API. To return to the above example, if example.com/v1 is the preferred API, then Velero only backs up the version of a resource that uses example.com/v1 . Moreover, the target cluster needs to have example.com/v1 registered in its set of available API resources in order for Velero to restore the resource on the target cluster. Therefore, you need to generate a list of the Kubernetes API group versions on your target cluster to be sure the preferred API version is registered in its set of available API resources. Procedure Enter the following command: USD oc api-resources 4.16.1.2. About Enable API Group Versions By default, Velero only backs up resources that use the preferred version of the Kubernetes API. However, Velero also includes a feature, Enable API Group Versions , that overcomes this limitation. When enabled on the source cluster, this feature causes Velero to back up all Kubernetes API group versions that are supported on the cluster, not only the preferred one. After the versions are stored in the backup .tar file, they are available to be restored on the destination cluster. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups, with example.com/v1 being the preferred API. Without the Enable API Group Versions feature enabled, Velero backs up only the preferred API group version for Example , which is example.com/v1 . 
With the feature enabled, Velero also backs up example.com/v1beta2 . When the Enable API Group Versions feature is enabled on the destination cluster, Velero selects the version to restore on the basis of the order of priority of API group versions. Note Enable API Group Versions is still in beta. Velero uses the following algorithm to assign priorities to API versions, with 1 as the top priority: Preferred version of the destination cluster Preferred version of the source cluster Common non-preferred supported version with the highest Kubernetes version priority Additional resources Enable API Group Versions Feature 4.16.1.3. Using Enable API Group Versions You can use Velero's Enable API Group Versions feature to back up all Kubernetes API group versions that are supported on a cluster, not only the preferred one. Note Enable API Group Versions is still in beta. Procedure Configure the EnableAPIGroupVersions feature flag: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: featureFlags: - EnableAPIGroupVersions Additional resources Enable API Group Versions Feature 4.16.2. Backing up data from one cluster and restoring it to another cluster 4.16.2.1. About backing up data from one cluster and restoring it on another cluster OpenShift API for Data Protection (OADP) is designed to back up and restore application data in the same OpenShift Container Platform cluster. Migration Toolkit for Containers (MTC) is designed to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster. You can use OADP to back up application data from one OpenShift Container Platform cluster and restore it on another cluster. However, doing so is more complicated than using MTC or using OADP to back up and restore on the same cluster. To successfully use OADP to back up data from one cluster and restore it to another cluster, you must take into account the following factors, in addition to the prerequisites and procedures that apply to using OADP to back up and restore data on the same cluster: Operators Use of Velero UID and GID ranges 4.16.2.1.1. Operators You must exclude Operators from the backup of an application for backup and restore to succeed. 4.16.2.1.2. Use of Velero Velero, which OADP is built upon, does not natively support migrating persistent volume snapshots across cloud providers. To migrate volume snapshot data between cloud platforms, you must either enable the Velero Restic file system backup option, which backs up volume contents at the file system level, or use the OADP Data Mover for CSI snapshots. Note In OADP 1.1 and earlier, the Velero Restic file system backup option is called restic . In OADP 1.2 and later, the Velero Restic file system backup option is called file-system-backup . You must also use Velero's File System Backup to migrate data between AWS regions or between Microsoft Azure regions. Velero does not support restoring data to a cluster with an earlier Kubernetes version than the source cluster. It is theoretically possible to migrate workloads to a destination with a later Kubernetes version than the source, but you must consider the compatibility of API groups between clusters for each custom resource. If a Kubernetes version upgrade breaks the compatibility of core or native API groups, you must first update the impacted custom resources.
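For reference, one way to satisfy the file system backup requirement described above is to set defaultVolumesToFsBackup: true on the Backup CR, as in the following sketch. This is an illustrative snippet, not a required step; the backup name is a placeholder, and the included namespace and storage location must be replaced with values that exist in your environment and are reachable from both clusters.

# Sketch: create a Backup CR that uses File System Backup for all pod volumes,
# avoiding provider-specific volume snapshots during cross-cluster migration.
cat <<'EOF' | oc apply -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: migration-backup                      # placeholder name
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <application_namespace>                 # namespace to migrate
  storageLocation: <backup_storage_location>  # BSL that both clusters can reach
  defaultVolumesToFsBackup: true              # back up volumes with FSB
EOF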
4.16.2.2. About determining which pod volumes to back up Before you start a backup operation by using File System Backup (FSB), you must specify which pods contain a volume that you want to back up. Velero refers to this process as "discovering" the appropriate pod volumes. Velero supports two approaches for determining pod volumes. Use the opt-in or the opt-out approach to allow Velero to decide between an FSB, a volume snapshot, or a Data Mover backup. Opt-in approach : With the opt-in approach, volumes are backed up using snapshot or Data Mover by default. FSB is used on specific volumes that are opted-in by annotations. Opt-out approach : With the opt-out approach, volumes are backed up using FSB by default. Snapshots or Data Mover is used on specific volumes that are opted-out by annotations. 4.16.2.2.1. Limitations FSB does not support backing up and restoring hostPath volumes. However, FSB does support backing up and restoring local volumes. Velero uses a static, common encryption key for all backup repositories it creates. This static key means that anyone who can access your backup storage can also decrypt your backup data . It is essential that you limit access to backup storage. For PVCs, every incremental backup chain is maintained across pod reschedules. For pod volumes that are not PVCs, such as emptyDir volumes, if a pod is deleted or recreated, for example, by a ReplicaSet or a deployment, the backup of those volumes will be a full backup and not an incremental backup. It is assumed that the lifecycle of a pod volume is defined by its pod. Even though backup data can be kept incrementally, backing up large files, such as a database, can take a long time. This is because FSB uses deduplication to find the difference that needs to be backed up. FSB reads and writes data from volumes by accessing the file system of the node on which the pod is running. For this reason, FSB can only back up volumes that are mounted from a pod and not directly from a PVC. Some Velero users have overcome this limitation by running a staging pod, such as a BusyBox or Alpine container with an infinite sleep, to mount these PVC and PV pairs before performing a Velero backup. FSB expects volumes to be mounted under <hostPath>/<pod UID> , with <hostPath> being configurable. Some Kubernetes systems, for example, vCluster, do not mount volumes under the <pod UID> subdirectory, and FSB does not work with them as expected. 4.16.2.2.2. Backing up pod volumes by using the opt-in method You can use the opt-in method to specify which volumes need to be backed up by File System Backup (FSB). You can do this by using the backup.velero.io/backup-volumes annotation. Procedure On each pod that contains one or more volumes that you want to back up, enter the following command: USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n> where: <your_volume_name_x> specifies the name of the xth volume in the pod specification.
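If many pods need the same opt-in annotation, you can apply it in a loop instead of annotating each pod by hand. The following sketch is only an illustration: the namespace, the app=<your_app_label> selector, and the volume name are hypothetical placeholders that must match your workload.

# Sketch: opt in one named volume for every pod that matches a label selector.
NAMESPACE=<your_pod_namespace>
for pod in $(oc -n "$NAMESPACE" get pods -l app=<your_app_label> -o name); do
  oc -n "$NAMESPACE" annotate "$pod" --overwrite \
    backup.velero.io/backup-volumes=<your_volume_name>
done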
4.16.2.2.3. Backing up pod volumes by using the opt-out method When using the opt-out approach, all pod volumes are backed up by using File System Backup (FSB), although there are some exceptions: Volumes that mount the default service account token, secrets, and configuration maps. hostPath volumes You can use the opt-out method to specify which volumes not to back up. You can do this by using the backup.velero.io/backup-volumes-excludes annotation. Procedure On each pod that contains one or more volumes that you do not want to back up, run the following command: USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n> where: <your_volume_name_x> specifies the name of the xth volume in the pod specification. Note You can enable this behavior for all Velero backups by running the velero install command with the --default-volumes-to-fs-backup flag. 4.16.2.3. UID and GID ranges If you back up data from one cluster and restore it to another cluster, problems might occur with UID (User ID) and GID (Group ID) ranges. The following section explains these potential issues and mitigations: Summary of the issues The namespace UID and GID ranges might change depending on the destination cluster. OADP does not back up and restore OpenShift UID range metadata. If the backed up application requires a specific UID, ensure the range is available upon restore. For more information about OpenShift's UID and GID ranges, see A Guide to OpenShift and UIDs . Detailed description of the issues When you create a namespace in OpenShift Container Platform by using the shell command oc create namespace , OpenShift Container Platform assigns the namespace a unique User ID (UID) range from its available pool of UIDs, a Supplemental Group (GID) range, and unique SELinux MCS labels. This information is stored in the metadata.annotations field of the cluster. This information is part of the Security Context Constraints (SCC) annotations, which comprise the following components: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range When you use OADP to restore the namespace, it automatically uses the information in metadata.annotations without resetting it for the destination cluster. As a result, the workload might not have access to the backed up data if any of the following is true: There is an existing namespace with other SCC annotations, for example, on another cluster. In this case, OADP uses the existing namespace during the backup instead of the namespace you want to restore. A label selector was used during the backup, but the namespace in which the workloads are executed does not have the label. In this case, OADP does not back up the namespace, but creates a new namespace during the restore that does not contain the annotations of the backed up namespace. This results in a new UID range being assigned to the namespace. This can be an issue for customer workloads if OpenShift Container Platform assigns a securityContext UID to a pod based on namespace annotations that have changed since the persistent volume data was backed up. The UID of the container no longer matches the UID of the file owner. An error occurs because OpenShift Container Platform has not changed the UID range of the destination cluster to match the backup cluster data. As a result, the backup cluster has a different UID than the destination cluster, which means that the application cannot read or write data on the destination cluster. Mitigations You can use one or more of the following mitigations to resolve the UID and GID range issues: Simple mitigations: If you use a label selector in the Backup CR to filter the objects to include in the backup, be sure to add this label selector to the namespace that contains the workspace.
Remove any pre-existing version of a namespace on the destination cluster before attempting to restore a namespace with the same name. Advanced mitigations: Fix UID ranges after migration by Resolving overlapping UID ranges in OpenShift namespaces after migration . Step 1 is optional. For an in-depth discussion of UID and GID ranges in OpenShift Container Platform with an emphasis on overcoming issues in backing up data on one cluster and restoring it on another, see A Guide to OpenShift and UIDs . 4.16.2.4. Backing up data from one cluster and restoring it to another cluster In general, you back up data from one OpenShift Container Platform cluster and restore it on another OpenShift Container Platform cluster in the same way that you back up and restore data to the same cluster. However, there are some additional prerequisites and differences in the procedure when backing up data from one OpenShift Container Platform cluster and restoring it on another. Prerequisites All relevant prerequisites for backing up and restoring on your platform (for example, AWS, Microsoft Azure, GCP, and so on), especially the prerequisites for the Data Protection Application (DPA), are described in the relevant sections of this guide. Procedure Make the following additions to the procedures given for your platform: Ensure that the backup store location (BSL) and volume snapshot location have the same names and paths to restore resources to another cluster. Share the same object storage location credentials across the clusters. For best results, use OADP to create the namespace on the destination cluster. If you use the Velero file-system-backup option, enable the --default-volumes-to-fs-backup flag for use during backup by running the following command: USD velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options> Note In OADP 1.2 and later, the Velero Restic option is called file-system-backup . Important Before restoring a CSI back up, edit the VolumeSnapshotClass custom resource (CR), and set the snapshot.storage.kubernetes.io/is-default-class parameter to false. Otherwise, the restore will partially fail due to the same value in the VolumeSnapshotClass in the target cluster for the same drive. 4.16.3. OADP storage class mapping 4.16.3.1. Storage class mapping Storage class mapping allows you to define rules or policies specifying which storage class should be applied to different types of data. This feature automates the process of determining storage classes based on access frequency, data importance, and cost considerations. It optimizes storage efficiency and cost-effectiveness by ensuring that data is stored in the most suitable storage class for its characteristics and usage patterns. You can use the change-storage-class-config field to change the storage class of your data objects, which lets you optimize costs and performance by moving data between different storage tiers, such as from standard to archival storage, based on your needs and access patterns. 4.16.3.1.1. Storage class mapping with Migration Toolkit for Containers You can use the Migration Toolkit for Containers (MTC) to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster and for storage class mapping and conversion. You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the MTC web console. 4.16.3.1.2. 
Mapping storage classes with OADP You can use OpenShift API for Data Protection (OADP) with the Velero plugin v1.1.0 and later to change the storage class of a persistent volume (PV) during restores, by configuring a storage class mapping in the config map in the Velero namespace. To deploy ConfigMap with OADP, use the change-storage-class-config field. You must change the storage class mapping based on your cloud provider. Procedure Change the storage class mapping by running the following command: USD cat change-storageclass.yaml Create a config map in the Velero namespace as shown in the following example: Example apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: "" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi Save your storage class mapping preferences by running the following command: USD oc create -f change-storage-class-config 4.16.4. Additional resources Working with different Kubernetes API versions on the same cluster . Using Data Mover for CSI snapshots . Backing up applications with File System Backup: Kopia or Restic . Migration converting storage classes .
|
[
"Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key.",
"found a podvolumebackup with status \"InProgress\" during the server starting, mark it as \"Failed\".",
"data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found",
"The generated label name is too long.",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name> 1",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"oc get route s3 -n openshift-storage",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3",
"oc apply -f <restore_cr_filename>",
"oc describe restores.velero.io <restore_name> -n openshift-adp",
"oc project test-restore-application",
"oc get pvc,svc,deployment,secret,configmap",
"NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name>",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\\.crt}' | base64 -w0; echo",
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0 ....gpwOHMwaG9CRmk5a3....FLS0tLS0K",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"false\" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - legacy-aws 1 - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backups.velero.io test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"resources: mds: limits: cpu: \"3\" memory: 128Gi requests: cpu: \"3\" memory: 8Gi",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: \"true\" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: \"50..c-4da1-419f-a16e-ei...49f\" 12 customerKeyEncryptionFile: \"/credentials/customer-key\" 13 signatureVersion: \"1\" 14 profile: \"default\" 15 insecureSkipTLSVerify: \"true\" 16 enableSharedConfig: \"true\" 17 tagging: \"\" 18 checksumAlgorithm: \"CRC32\" 19",
"snapshotLocations: - velero: config: profile: default region: <region> provider: aws",
"dd if=/dev/urandom bs=1 count=32 > sse.key",
"cat sse.key | base64 > sse_encoded.key",
"ln -s sse_encoded.key customer-key",
"oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key",
"apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret",
"spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default",
"echo \"encrypt me please\" > test.txt",
"aws s3api put-object --bucket <bucket> --key test.txt --body test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256",
"s3cmd get s3://<bucket>/test.txt test.txt",
"aws s3api get-object --bucket <bucket> --key test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 downloaded.txt",
"cat downloaded.txt",
"encrypt me please",
"aws s3api get-object --bucket <bucket> --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 --debug velero_download.tar.gz",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: \"default\" s3ForcePathStyle: \"true\" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: \"default\" credential: key: cloud name: cloud-credentials 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: \"\" 1 insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"ibmcloud plugin install cos -f",
"BUCKET=<bucket_name>",
"REGION=<bucket_region> 1",
"ibmcloud resource group-create <resource_group_name>",
"ibmcloud target -g <resource_group_name>",
"ibmcloud target",
"API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default",
"RESOURCE_GROUP=<resource_group> 1",
"ibmcloud resource service-instance-create <service_instance_name> \\ 1 <service_name> \\ 2 <service_plan> \\ 3 <region_name> 4",
"ibmcloud resource service-instance-create test-service-instance cloud-object-storage \\ 1 standard global -d premium-global-deployment 2",
"SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')",
"ibmcloud cos bucket-create \\// --bucket USDBUCKET \\// --ibm-service-instance-id USDSERVICE_INSTANCE_ID \\// --region USDREGION",
"ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\\\"HMAC\\\":true}",
"cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" provider: azure",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"mkdir -p oadp-credrequest",
"echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml",
"ccoctl gcp create-service-accounts --name=<name> --project=<gcp_project_id> --credentials-requests-dir=oadp-credrequest --workload-identity-pool=<pool_id> --workload-identity-provider=<provider_id>",
"oc create namespace <OPERATOR_INSTALL_NS>",
"oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: profile: \"default\" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: \"default\" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3",
"oc apply -f <backup_cr_file_name> 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true",
"oc apply -f <restore_cr_file_name> 1",
"oc label vm <vm_name> app=<vm_name> -n openshift-adp",
"apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2",
"oc apply -f <restore_cr_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1",
"oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: \"true\" profile: noobaa region: <region_name> 3 s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"oc get bsl",
"NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s labelSelector: 5 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 6 - matchLabels: app: <label_1> app: <label_2> app: <label_3>",
"oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11",
"oc get backupStorageLocations -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s EOF",
"schedule: \"*/10 * * * *\"",
"oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'",
"apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1",
"oc apply -f <deletebackuprequest_cr_filename>",
"velero backup delete <backup_name> -n openshift-adp 1",
"pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m",
"not due for full maintenance cycle until 2024-00-00 18:29:4",
"oc get backuprepositories.velero.io -n openshift-adp",
"oc delete backuprepository <backup_repository_name> -n openshift-adp 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3",
"oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'",
"oc get all -n <namespace> 1",
"bash dc-restic-post-restore.sh -> dc-post-restore.sh",
"#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo \"restore: USD1\" label=USD(label_name USD1) echo \"label: USDlabel\" echo Deleting disconnected restore pods delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\" export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH} echo \"Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}\" --output text) 1",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name \"RosaOadpVer1\" --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp --output text) fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc get sub -o yaml redhat-oadp-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: \"2025-01-15T07:18:31Z\" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: \"\" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: \"77363\" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2",
"oc patch subscription redhat-oadp-operator -p '{\"spec\": {\"config\": {\"env\": [{\"name\": \"ROLEARN\", \"value\": \"<role_arn>\"}]}}}' --type='merge'",
"oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift",
"oc create -f <dpa_manifest_file>",
"oc get dpa -n openshift-adp -o yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication status: conditions: - lastTransitionTime: \"2023-07-31T04:48:12Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"export CLUSTER_NAME= <AWS_cluster_name> 1",
"export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{\"\\n\"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\"",
"export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH}",
"echo \"Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"export POLICY_NAME=\"OadpVer1\" 1",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}\" --output text)",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --output text) 1 fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa_sample namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift - aws - csi resourceTimeout: 10m nodeAgent: enable: true uploaderType: kopia backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 1 prefix: <prefix> 2 config: region: <region> 3 profile: \"default\" s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials",
"oc create -f dpa.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-install-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale 1 includedResources: - operatorgroups - subscriptions - namespaces itemOperationTimeout: 1h0m0s snapshotMoveData: false ttl: 720h0m0s",
"oc create -f backup.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-secrets namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - secrets itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc create -f backup-secret.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-apim namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - apimanagers itemOperationTimeout: 1h0m0s snapshotMoveData: false snapshotVolumes: false storageLocation: ts-dpa-1 ttl: 720h0m0s volumeSnapshotLocations: - ts-dpa-1",
"oc create -f backup-apimanager.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-claim namespace: threescale spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3-csi volumeMode: Filesystem",
"oc create -f ts_pvc.yml",
"oc edit deployment system-mysql -n threescale",
"volumeMounts: - name: example-claim mountPath: /var/lib/mysqldump/data - name: mysql-storage mountPath: /var/lib/mysql/data - name: mysql-extra-conf mountPath: /etc/my-extra.d - name: mysql-main-conf mountPath: /etc/my-extra serviceAccount: amp volumes: - name: example-claim persistentVolumeClaim: claimName: example-claim 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: mysql-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true hooks: resources: - name: dumpdb pre: - exec: command: - /bin/sh - -c - mysqldump -u USDMYSQL_USER --password=USDMYSQL_PASSWORD system --no-tablespaces > /var/lib/mysqldump/data/dump.sql 1 container: system-mysql onError: Fail timeout: 5m includedNamespaces: 2 - threescale includedResources: - deployment - pods - replicationControllers - persistentvolumeclaims - persistentvolumes itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component_element: mysql snapshotMoveData: false ttl: 720h0m0s",
"oc create -f mysql.yaml",
"oc get backups.velero.io mysql-backup",
"NAME STATUS CREATED NAMESPACE POD VOLUME UPLOADER TYPE STORAGE LOCATION AGE mysql-backup-4g7qn Completed 30s threescale system-mysql-2-9pr44 example-claim kopia ts-dpa-1 30s mysql-backup-smh85 Completed 23s threescale system-mysql-2-9pr44 mysql-storage kopia ts-dpa-1 30s",
"oc edit deployment backend-redis -n threescale",
"annotations: post.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 100\"] pre.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 0\"]",
"apiVersion: velero.io/v1 kind: Backup metadata: name: redis-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true includedNamespaces: - threescale includedResources: - deployment - pods - replicationcontrollers - persistentvolumes - persistentvolumeclaims itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component: backend threescale_component_element: redis snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc get backups.velero.io redis-backup -o yaml",
"oc get backups.velero.io",
"oc delete project threescale",
"\"threescale\" project deleted successfully",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-installation-restore namespace: openshift-adp spec: backupName: operator-install-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore.yaml",
"oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: s3-credentials namespace: threescale stringData: AWS_ACCESS_KEY_ID: <ID_123456> 1 AWS_SECRET_ACCESS_KEY: <ID_98765544> 2 AWS_BUCKET: <mybucket.example.com> 3 AWS_REGION: <us-east-1> 4 type: Opaque EOF",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-secrets namespace: openshift-adp spec: backupName: operator-resources-secrets excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-secrets.yaml",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-apim namespace: openshift-adp spec: backupName: operator-resources-apim excludedResources: 1 - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-apimanager.yaml",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"deployment.apps/threescale-operator-controller-manager-v2 scaled",
"vi ./scaledowndeployment.sh",
"for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do oc scale deployment/USDdeployment --replicas=0 -n threescale done",
"./scaledowndeployment.sh",
"deployment.apps.openshift.io/apicast-production scaled deployment.apps.openshift.io/apicast-staging scaled deployment.apps.openshift.io/backend-cron scaled deployment.apps.openshift.io/backend-listener scaled deployment.apps.openshift.io/backend-redis scaled deployment.apps.openshift.io/backend-worker scaled deployment.apps.openshift.io/system-app scaled deployment.apps.openshift.io/system-memcache scaled deployment.apps.openshift.io/system-mysql scaled deployment.apps.openshift.io/system-redis scaled deployment.apps.openshift.io/system-searchd scaled deployment.apps.openshift.io/system-sidekiq scaled deployment.apps.openshift.io/zync scaled deployment.apps.openshift.io/zync-database scaled deployment.apps.openshift.io/zync-que scaled",
"oc delete deployment system-mysql -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"system-mysql\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-mysql namespace: openshift-adp spec: backupName: mysql-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io - resticrepositories.velero.io hooks: resources: - name: restoreDB postHooks: - exec: command: - /bin/sh - '-c' - > sleep 30 mysql -h 127.0.0.1 -D system -u root --password=USDMYSQL_ROOT_PASSWORD < /var/lib/mysqldump/data/dump.sql 1 container: system-mysql execTimeout: 80s onError: Fail waitTimeout: 5m itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-mysql.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m",
"oc get pvc -n threescale",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m",
"oc delete deployment backend-redis -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"backend-redis\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-backend namespace: openshift-adp spec: backupName: redis-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-backend.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc get deployment -n threescale",
"./scaledeployment.sh",
"oc get routes -n threescale",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD backend backend-3scale.apps.custom-cluster-name.openshift.com backend-listener http edge/Allow None zync-3scale-api-b4l4d api-3scale-apicast-production.apps.custom-cluster-name.openshift.com apicast-production gateway edge/Redirect None zync-3scale-api-b6sns api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com apicast-staging gateway edge/Redirect None zync-3scale-master-7sc4j master.apps.custom-cluster-name.openshift.com system-master http edge/Redirect None zync-3scale-provider-7r2nm 3scale-admin.apps.custom-cluster-name.openshift.com system-provider http edge/Redirect None zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com system-developer http edge/Redirect None",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI",
"kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s volumeSnapshotLocations: - dpa-sample-1",
"Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: no space left on device",
"oc create -f backup.yaml",
"oc get datauploads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal",
"oc get datauploads <dataupload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: \"\" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: \"2023-11-02T16:57:02Z\" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: \"2023-11-02T16:56:22Z\"",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup>",
"oc create -f restore.yaml",
"oc get datadownloads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal",
"oc get datadownloads <datadownload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: \"\" pvc: mysql status: completionTimestamp: \"2023-11-02T17:01:24Z\" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: \"2023-11-02T17:00:52Z\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<aws_s3_access_key>\" \\ 4 --secret-access-key=\"<aws_s3_secret_access_key>\" \\ 5",
"kopia repository status",
"Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> Storage type: s3 Storage capacity: unbounded Storage config: { \"bucket\": <bucket_name>, \"prefix\": \"velero/kopia/<application_namespace>/\", \"endpoint\": \"s3.amazonaws.com\", \"accessKeyID\": <access_key>, \"secretAccessKey\": \"****************************************\", \"sessionToken\": \"\" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3",
"apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: [\"sleep\"] args: [\"infinity\"]",
"oc apply -f <pod_config_file_name> 1",
"oc describe pod/oadp-mustgather-pod | grep scc",
"openshift.io/scc: anyuid",
"oc -n openshift-adp rsh pod/oadp-mustgather-pod",
"sh-5.1# kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<access_key>\" \\ 4 --secret-access-key=\"<secret_access_key>\" \\ 5 --endpoint=<bucket_endpoint> \\ 6",
"sh-5.1# kopia benchmark hashing",
"Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256",
"sh-5.1# kopia benchmark encryption",
"Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256",
"sh-5.1# kopia benchmark splitter",
"splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"oc describe <velero_cr> <cr_name>",
"oc logs pod/<velero>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi",
"requests: cpu: 500m memory: 128Mi",
"Velero: pod volume restore failed: data path restore failed: Failed to run kopia restore: Failed to copy snapshot data to the target: restore error: copy file: error creating file: open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: no such file or directory",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: \"USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}\" 1 onDelete: delete",
"velero restore <restore_name> --from-backup=<backup_name> --include-resources service.serving.knavtive.dev",
"oc get mutatingwebhookconfigurations",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"oc get backupstoragelocations.velero.io -A",
"velero backup-location get -n <OADP_Operator_namespace>",
"oc get backupstoragelocations.velero.io -n <namespace> -o yaml",
"apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: \"2023-11-03T19:49:04Z\" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: \"24273698\" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: \"true\" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: \"2023-11-10T22:06:46Z\" message: \"BackupStorageLocation \\\"example-dpa-1\\\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54\" phase: Unavailable kind: List metadata: resourceVersion: \"\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>",
"oc delete backups.velero.io <backup> -n openshift-adp",
"velero backup describe <backup-name> --details",
"time=\"2023-02-17T16:33:13Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/user1-backup-check5 error=\"error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label\" logSource=\"/remote-source/velero/app/pkg/backup/backup.go:417\" name=busybox-79799557b5-vprq",
"oc delete backups.velero.io <backup> -n openshift-adp",
"oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1",
"oc delete resticrepository openshift-adp <name_of_the_restic_repository>",
"time=\"2021-12-29T18:29:14Z\" level=info msg=\"1 errors encountered backup up item\" backup=velero/backup65 logSource=\"pkg/backup/backup.go:431\" name=mysql-7d99fc949-qbkds time=\"2021-12-29T18:29:14Z\" level=error msg=\"Error backing up item\" backup=velero/backup65 error=\"pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\\nIs there a repository at the following location?\\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \\n: exit status 1\" error.file=\"/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184\" error.function=\"github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes\" logSource=\"pkg/backup/backup.go:435\" name=mysql-7d99fc949-qbkds",
"\\\"level=error\\\" in line#2273: time=\\\"2023-06-12T06:50:04Z\\\" level=error msg=\\\"error restoring mysql-869f9f44f6-tp5lv: pods\\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity\\\\ \"restricted:v1.24\\\\\\\": privil eged (container \\\\\\\"mysql\\\\ \" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/restore/restore.go:1388\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n velero container contains \\\"level=error\\\" in line#2447: time=\\\"2023-06-12T06:50:05Z\\\" level=error msg=\\\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity \\\\\\\"restricted:v1.24\\\\\\\": privileged (container \\\\ \"mysql\\\\\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\",\\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n]\",",
"oc get dpa -o yaml",
"configuration: restic: enable: true velero: args: restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' 1 defaultPlugins: - gcp - openshift",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls true",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata:",
"oc get pods -n openshift-user-workload-monitoring",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s",
"oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring",
"Error from server (NotFound): configmaps \"user-workload-monitoring-config\" not found",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |",
"oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created",
"oc get svc -n openshift-adp -l app.kubernetes.io/name=velero",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: \"velero\"",
"oc apply -f 3_create_oadp_service_monitor.yaml",
"servicemonitor.monitoring.coreos.com/oadp-service-monitor created",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job=\"openshift-adp-velero-metrics-svc\"}[2h]) > 0 for: 5m labels: severity: warning",
"oc apply -f 4_create_oadp_alert_rule.yaml",
"prometheusrule.monitoring.coreos.com/sample-oadp-alert created",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"oc api-resources",
"apiVersion: oadp.openshift.io/vialpha1 kind: DataProtectionApplication spec: configuration: velero: featureFlags: - EnableAPIGroupVersions",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>",
"cat change-storageclass.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: \"\" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi",
"oc create -f change-storage-class-config"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/backup_and_restore/oadp-application-backup-and-restore
|
Part III. Advanced Clair configuration
|
Part III. Advanced Clair configuration Use this section to configure advanced Clair features.
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/vulnerability_reporting_with_clair_on_red_hat_quay/advanced-clair-configuration
|
4.360. xorg-x11-server-utils
|
4.360. xorg-x11-server-utils 4.360.1. RHBA-2011:1617 - xorg-x11-server-utils bug fix and enhancement update Updated xorg-x11-server-utils packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The xorg-x11-server-utils package contains a collection of utilities used to modify and query the runtime configuration of the X.Org server. X.Org is an open source implementation of the X Window System. The xorg-x11-server-utils packages have been upgraded to upstream version 7.1, which provides a number of bug fixes and enhancements over the previous version. (BZ# 713862 ) Bug Fixes BZ# 657554 Previously, the xrandr options --scale and --transform caused a segmentation fault because these options require the --output option, but the utility did not properly validate the command line to check for an existing output. With this update, xrandr displays a message stating that the --scale and --transform options also require the --output option. BZ# 740146 Previously, xrandr wrongly assumed that a gamma ramp value of zero was a failure. When VNC was enabled, xrandr returned a zero value. With this update, xrandr is modified to allow for a gamma ramp value of zero. Now xrandr no longer fails when running VNC. All users of xorg-x11-server-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
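For illustration only, a command line that satisfies the check described above pairs --scale (or --transform) with --output. The output name DP-1 is a hypothetical placeholder; running xrandr with no arguments lists the real output names on your system.
# DP-1 is a placeholder output name; plain `xrandr` lists the real outputs.
xrandr --output DP-1 --scale 1.25x1.25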
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/xorg-x11-server-utils
|
probe::sunrpc.clnt.call_sync
|
probe::sunrpc.clnt.call_sync Name probe::sunrpc.clnt.call_sync - Make a synchronous RPC call Synopsis sunrpc.clnt.call_sync Values xid current transmission id servername the server machine name flags flags dead whether this client is abandoned prog the RPC program number port the port number prot the IP protocol number progname the RPC program name vers the RPC program version number proc the procedure number in this RPC call procname the procedure name in this RPC call
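As an illustrative sketch that is not part of the tapset reference itself, the probe can be exercised with a one-line SystemTap script that prints a few of the values listed above while RPC traffic (for example, NFS activity) is generated:
# Print the server, program, procedure, and transmission id for each synchronous RPC call.
stap -e 'probe sunrpc.clnt.call_sync { printf("%s %s/%s xid=%d\n", servername, progname, procname, xid) }'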
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sunrpc-clnt-call-sync
|
Chapter 6. Updating the OpenShift Data Foundation external secret
|
Chapter 6. Updating the OpenShift Data Foundation external secret Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation. Note Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.15.x to 4.15.y. Prerequisites Update the OpenShift Container Platform cluster to the latest stable release of 4.15.z, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. On the Overview - Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. Red Hat Ceph Storage must have a Ceph dashboard installed and configured. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script. Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. The updated permissions for the user are set as: Run the previously downloaded python script and save the JSON output that is generated from the external Red Hat Ceph Storage cluster. Run the previously downloaded python script: Note Make sure to use all the flags that you used in the original deployment including any optional argument that you have used. Ensure that all the parameters, including the optional arguments, except for monitoring-endpoint and monitoring-endpoint-port , are the same as those you used during the original deployment of OpenShift Data Foundation in external mode. --rbd-data-pool-name Is a mandatory parameter used for providing block storage in OpenShift Data Foundation. --rgw-endpoint Is optional. Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> . --monitoring-endpoint Is optional. It accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. Additional flags: rgw-pool-prefix (Optional) The prefix of the RGW pools. If not specified, the default prefix is default . rgw-tls-cert-path (Optional) The file path of the RADOS Gateway endpoint TLS certificate. rgw-skip-tls (Optional) This parameter ignores the TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED). ceph-conf (Optional) The name of the Ceph configuration file. cluster-name (Optional) The Ceph cluster name.
output (Optional) The file where the output is to be stored. cephfs-metadata-pool-name (Optional) The name of the CephFS metadata pool. cephfs-data-pool-name (Optional) The name of the CephFS data pool. cephfs-filesystem-name (Optional) The name of the CephFS filesystem. rbd-metadata-ec-pool-name (Optional) The name of the erasure coded RBD metadata pool. dry-run (Optional) This parameter prints the commands that would be executed, without running them. Save the JSON output generated after running the script in the previous step. Example output: Upload the generated JSON file. Log in to the OpenShift Web Console. Click Workloads Secrets . Set project to openshift-storage . Click rook-ceph-external-cluster-details . Click Actions (...) Edit Secret . Click Browse and upload the JSON file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. On the Overview Block and File tab, check the Details card to verify that the RHCS dashboard link is available and also check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. If verification steps fail, contact Red Hat Support .
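For illustration only, an invocation of the script with sample values might look like the following; the pool name, endpoint, and output file name are hypothetical and must be replaced with the values used in the original external mode deployment. Redirecting standard output to a file is one way to save the generated JSON.
# Sample values only -- substitute the parameters from the original deployment.
python3 ceph-external-cluster-details-exporter.py \
  --rbd-data-pool-name replicapool \
  --rgw-endpoint 10.0.0.5:8080 \
  --run-as-user client.healthchecker > external-cluster-details.json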
|
[
"oc get csv USD(oc get csv -n openshift-storage | grep ocs-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\\.features\\.ocs\\.openshift\\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade",
"client.csi-cephfs-node key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ== caps: [mds] allow rw caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs = client.csi-cephfs-provisioner key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ== caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs metadata=* client.csi-rbd-node key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA== caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd client.csi-rbd-provisioner key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ== caps: [mgr] allow rw caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --monitoring-endpoint <ceph mgr prometheus exporter endpoint> --monitoring-endpoint-port <ceph mgr prometheus exporter port> --rgw-endpoint <rgw endpoint> --run-as-user <ocs_client_name> [optional arguments]",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/updating_openshift_data_foundation/updating-the-openshift-data-foundation-external-secret_rhodf
|
Chapter 26. Using the container-tools API
|
Chapter 26. Using the container-tools API The new REST based Podman 2.0 API replaces the old remote API for Podman that used the varlink library. The new API works in both a rootful and a rootless environment. The Podman v2.0 RESTful API consists of the Libpod API, which provides support for Podman, and a Docker-compatible API. With this new REST API, you can call Podman from platforms such as cURL, Postman, Google's Advanced REST client, and many others. Note Because the podman service supports socket activation, the podman service does not run unless connections on the socket are active. Hence, to enable socket activation functionality, you need to manually start the podman.socket service. When a connection becomes active on the socket, it starts the podman service and runs the requested API action. Once the action is completed, the podman process ends, and the podman service returns to an inactive state. 26.1. Enabling the Podman API using systemd in root mode You can do the following: Use systemd to activate the Podman API socket. Use a Podman client to perform basic commands. Prerequisites The podman-remote package is installed. Procedure Enable and start the service immediately: To enable the link to /var/run/docker.sock using the podman-docker package: Verification Display system information of Podman: Verify the link: Additional resources Podman v2.0 RESTful API A First Look At Podman 2.0 API Sneak peek: Podman's new REST API 26.2. Enabling the Podman API using systemd in rootless mode You can use systemd to activate the Podman API socket and podman API service. Prerequisites The podman-remote package is installed. Procedure Enable and start the service immediately: Optional: To enable programs using Docker to interact with the rootless Podman socket: Verification Check the status of the socket: The podman.socket is active and is listening at /run/user/ <uid> /podman/podman.sock , where <uid> is the user's ID. Display system information of Podman: Additional resources Podman v2.0 RESTful API A First Look At Podman 2.0 API Sneak peek: Podman's new REST API Exploring Podman RESTful API using Python and Bash 26.3. Running the Podman API manually You can run the Podman API. This is useful for debugging API calls, especially when using the Docker compatibility layer. Prerequisites The podman-remote package is installed. Procedure Run the service for the REST API: The value of 0 means no timeout. The default endpoint for a rootful service is unix:/run/podman/podman.sock . The --log-level <level> option sets the logging level. The standard logging levels are debug , info , warn , error , fatal , and panic . In another terminal, display system information of Podman. The podman-remote command, unlike the regular podman command, communicates through the Podman socket: To troubleshoot the Podman API and display requests and responses, use the curl command. To get the information about the Podman installation on the Linux server in JSON format: The jq utility is a command-line JSON processor. Pull the registry.access.redhat.com/ubi8/ubi container image: Display the pulled image: Additional resources Podman v2.0 RESTful API Sneak peek: Podman's new REST API Exploring Podman RESTful API using Python and Bash podman-system-service man page on your system
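As a small usage sketch that is not part of the original procedure, the rootless socket described above can be queried with the same libpod info endpoint that is shown below for the rootful socket; this assumes the user-level podman.socket unit is active and jq is installed.
# Query the rootless Podman socket for the installed Podman version.
curl -s --unix-socket /run/user/$(id -u)/podman/podman.sock \
  http://d/v1.0.0/libpod/info | jq -r '.version.Version'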
|
[
"dnf install podman-remote",
"systemctl enable --now podman.socket",
"dnf install podman-docker",
"podman-remote info",
"ls -al /var/run/docker.sock lrwxrwxrwx. 1 root root 23 Nov 4 10:19 /var/run/docker.sock -> /run/podman/podman.sock",
"dnf install podman-remote",
"systemctl --user enable --now podman.socket",
"export DOCKER_HOST=unix:///run/user/<uid>/podman//podman.sock",
"systemctl --user status podman.socket ● podman.socket - Podman API Socket Loaded: loaded (/usr/lib/systemd/user/podman.socket; enabled; vendor preset: enabled) Active: active (listening) since Mon 2021-08-23 10:37:25 CEST; 9min ago Docs: man:podman-system-service(1) Listen: /run/user/1000/podman/podman.sock (Stream) CGroup: /user.slice/user-1000.slice/[email protected]/podman.socket",
"podman-remote info",
"dnf install podman-remote",
"podman system service -t 0 --log-level=debug",
"podman-remote info",
"curl -s --unix-socket /run/podman/podman.sock http://d/v1.0.0/libpod/info | jq { \"host\": { \"arch\": \"amd64\", \"buildahVersion\": \"1.15.0\", \"cgroupVersion\": \"v1\", \"conmon\": { \"package\": \"conmon-2.0.18-1.module+el8.3.0+7084+c16098dd.x86_64\", \"path\": \"/usr/bin/conmon\", \"version\": \"conmon version 2.0.18, commit: 7fd3f71a218f8d3a7202e464252aeb1e942d17eb\" }, ... \"version\": { \"APIVersion\": 1, \"Version\": \"2.0.0\", \"GoVersion\": \"go1.14.2\", \"GitCommit\": \"\", \"BuiltTime\": \"Thu Jan 1 01:00:00 1970\", \"Built\": 0, \"OsArch\": \"linux/amd64\" } }",
"curl -XPOST --unix-socket /run/podman/podman.sock -v 'http://d/v1.0.0/images/create?fromImage=registry.access.redhat.com%2Fubi8%2Fubi' * Trying /run/podman/podman.sock * Connected to d (/run/podman/podman.sock) port 80 (#0) > POST /v1.0.0/images/create?fromImage=registry.access.redhat.com%2Fubi8%2Fubi HTTP/1.1 > Host: d > User-Agent: curl/7.61.1 > Accept: / > < HTTP/1.1 200 OK < Content-Type: application/json < Date: Tue, 20 Oct 2020 13:58:37 GMT < Content-Length: 231 < {\"status\":\"pulling image () from registry.access.redhat.com/ubi8/ubi:latest, registry.redhat.io/ubi8/ubi:latest\",\"error\":\"\",\"progress\":\"\",\"progressDetail\":{},\"id\":\"ecbc6f53bba0d1923ca9e92b3f747da8353a070fccbae93625bd8b47dbee772e\"} * Connection #0 to host d left intact",
"curl --unix-socket /run/podman/podman.sock -v 'http://d/v1.0.0/libpod/images/json' | jq * Trying /run/podman/podman.sock % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to d (/run/podman/podman.sock) port 80 ( 0) > GET /v1.0.0/libpod/images/json HTTP/1.1 > Host: d > User-Agent: curl/7.61.1 > Accept: / > < HTTP/1.1 200 OK < Content-Type: application/json < Date: Tue, 20 Oct 2020 13:59:55 GMT < Transfer-Encoding: chunked < { [12498 bytes data] 100 12485 0 12485 0 0 2032k 0 --:--:-- --:--:-- --:--:-- 2438k * Connection #0 to host d left intact [ { \"Id\": \"ecbc6f53bba0d1923ca9e92b3f747da8353a070fccbae93625bd8b47dbee772e\", \"RepoTags\": [ \"registry.access.redhat.com/ubi8/ubi:latest\", \"registry.redhat.io/ubi8/ubi:latest\" ], \"Created\": \"2020-09-01T19:44:12.470032Z\", \"Size\": 210838671, \"Labels\": { \"architecture\": \"x86_64\", \"build-date\": \"2020-09-01T19:43:46.041620\", \"com.redhat.build-host\": \"cpt-1008.osbs.prod.upshift.rdu2.redhat.com\", ... \"maintainer\": \"Red Hat, Inc.\", \"name\": \"ubi8\", ... \"summary\": \"Provides the latest release of Red Hat Universal Base Image 8.\", \"url\": \"https://access.redhat.com/containers/ /registry.access.redhat.com/ubi8/images/8.2-347\", }, \"Names\": [ \"registry.access.redhat.com/ubi8/ubi:latest\", \"registry.redhat.io/ubi8/ubi:latest\" ], ] } ]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/using-the-container-tools-api
|
4.5. REDUNDANCY
|
4.5. REDUNDANCY The REDUNDANCY panel allows you to configure the backup LVS router node and set various heartbeat monitoring options. Note The first time you visit this screen, it displays an "inactive" Backup status and an ENABLE button. To configure the backup LVS router, click on the ENABLE button so that the screen matches Figure 4.4, "The REDUNDANCY Panel" . Figure 4.4. The REDUNDANCY Panel Redundant server public IP Enter the public real IP address for the backup LVS router node. Redundant server private IP Enter the backup node's private real IP address in this text field. If you do not see the field called Redundant server private IP , go back to the GLOBAL SETTINGS panel and enter a Primary server private IP address and click ACCEPT . The rest of the panel is devoted to configuring the heartbeat channel, which is used by the backup node to monitor the primary node for failure. Heartbeat Interval (seconds) This field sets the number of seconds between heartbeats, that is, the interval at which the backup node checks the functional status of the primary LVS node. Assume dead after (seconds) If the primary LVS node does not respond after this number of seconds, then the backup LVS router node will initiate failover. Heartbeat runs on port This field sets the port at which the heartbeat communicates with the primary LVS node. The default is set to 539 if this field is left blank. Warning Remember to click the ACCEPT button after making any changes in this panel to make sure you do not lose any changes when selecting a new panel.
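After accepting these settings, the same configuration must also exist on the backup node before heartbeat monitoring can begin. As a minimal sketch, assuming the standard Piranha configuration path and using a placeholder hostname for the backup router, you would copy the configuration file to the backup node and then start the pulse heartbeat daemon on both routers:
scp /etc/sysconfig/ha/lvs.cf backup-router.example.com:/etc/sysconfig/ha/lvs.cf
service pulse start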
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-piranha-redun-VSA
|
Specialized hardware and driver enablement
|
Specialized hardware and driver enablement OpenShift Container Platform 4.13 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"oc adm release info quay.io/openshift-release-dev/ocp-release:4.13.z-x86_64 --image-for=driver-toolkit",
"oc adm release info quay.io/openshift-release-dev/ocp-release:4.13.z-aarch64 --image-for=driver-toolkit",
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b53883ca2bac5925857148c4a1abc300ced96c222498e3bc134fe7ce3a1dd404",
"podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>",
"oc new-project simple-kmod-demo",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: dockerfile: | ARG DTK FROM USD{DTK} as builder ARG KVER WORKDIR /build/ RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR /build/simple-kmod RUN make all install KVER=USD{KVER} FROM registry.redhat.io/ubi8/ubi-minimal ARG KVER # Required for installing `modprobe` RUN microdnf install kmod COPY --from=builder /lib/modules/USD{KVER}/simple-kmod.ko /lib/modules/USD{KVER}/ COPY --from=builder /lib/modules/USD{KVER}/simple-procfs-kmod.ko /lib/modules/USD{KVER}/ RUN depmod USD{KVER} strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO # USD oc adm release info quay.io/openshift-release-dev/ocp-release:<cluster version>-x86_64 --image-for=driver-toolkit - name: DTK value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34864ccd2f4b6e385705a730864c04a40908e57acede44457a783d739e377cae - name: KVER value: 4.18.0-372.26.1.el8_6.x86_64 output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo",
"OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version})",
"DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit)",
"sed \"s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#\" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml",
"oc create -f 0000-buildconfig.yaml",
"apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [sleep, infinity] lifecycle: postStart: exec: command: [\"modprobe\", \"-v\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] preStop: exec: command: [\"modprobe\", \"-r\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc create -f 1000-drivercontainer.yaml",
"oc get pod -n simple-kmod-demo",
"NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s",
"oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"",
"oc create -f nfd-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd",
"oc create -f nfd-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nfd-sub.yaml",
"oc project openshift-nfd",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.13 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12",
"{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get nodefeaturediscovery nfd-instance -o yaml",
"oc get pods -n <nfd_namespace>",
"core: sleepInterval: 60s 1",
"core: sources: - system - custom",
"core: labelWhiteList: '^cpu-cpuid'",
"core: noPublish: true 1",
"sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]",
"sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]",
"sources: kernel: kconfigFile: \"/path/to/kconfig\"",
"sources: kernel: configOpts: [NO_HZ, X86, DMI]",
"sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]",
"sources: pci: deviceLabelFields: [class, vendor, device]",
"sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]",
"sources: pci: deviceLabelFields: [class, vendor]",
"source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}",
"oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml",
"apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3",
"podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help",
"nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key",
"nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml",
"nfd-topology-updater -no-publish",
"nfd-topology-updater -oneshot -no-publish",
"nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock",
"nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443",
"nfd-topology-updater -server-name-override=localhost",
"nfd-topology-updater -sleep-interval=1h",
"nfd-topology-updater -watch-namespace=rte",
"apiVersion: v1 kind: Namespace metadata: name: openshift-kmm",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0",
"oc create -f kmm-sub.yaml",
"oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-kmm",
"allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret",
"oc apply -f kmm-security-constraint.yaml",
"oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller-manager -n openshift-kmm",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0",
"oc create -f kmm-sub.yaml",
"oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s",
"oc delete -k https://github.com/rh-ecosystem-edge/kernel-module-management/config/default",
"spec: moduleLoader: container: modprobe: moduleName: mod_a dirName: /opt firmwarePath: /firmware parameters: - param=1 modulesLoadingOrder: - mod_a - mod_b",
"oc adm policy add-scc-to-user privileged -z \"USD{serviceAccountName}\" [ -n \"USD{namespace}\" ]",
"spec: moduleLoader: container: modprobe: moduleName: mod_a inTreeModuleToRemove: mod_b",
"spec: moduleLoader: container: kernelMappings: - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 inTreeModuleToRemove: <module_name>",
"apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\\fc37\\.x86_64USD' 6 containerImage: \"some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" - regexp: '^.+USD' 7 containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION}",
"- regexp: '^.+USD' containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8",
"ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION}",
"openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv",
"oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv>",
"oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der>",
"cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64",
"cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64",
"apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key>",
"oc apply -f <yaml_filename>",
"oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text",
"oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d",
"--- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<your module name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion>-signed> sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion> > keySecret: # a secret holding the private secureboot key with the key 'key' name: <private key secret name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate secret name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64",
"--- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: default 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi9/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: default 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\\.x86_64USD' containerImage: < the name of the final driver container to produce> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private key secret name> certSecret: name: <certificate secret name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true'",
"--- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: install-kmm spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-kmm spec: severity: high object-templates: - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kmm namespace: openshift-kmm spec: upgradeStrategy: Default - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: stable config: env: - name: KMM_MANAGED 1 value: \"1\" installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kmm-module-manager rules: - apiGroups: [kmm.sigs.x-k8s.io] resources: [modules] verbs: [create, delete, get, list, patch, update, watch] - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: klusterlet-kmm subjects: - kind: ServiceAccount name: klusterlet-work-sa namespace: open-cluster-management-agent roleRef: kind: ClusterRole name: kmm-module-manager apiGroup: rbac.authorization.k8s.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-managed-clusters spec: clusterSelector: 2 matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: install-kmm placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: all-managed-clusters subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-kmm",
"oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>-",
"oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=<desired_version>",
"ProduceMachineConfig(machineConfigName, machineConfigPoolRef, kernelModuleImage, kernelModuleName string) (string, error)",
"kind: MachineConfigPool metadata: name: sfc spec: machineConfigSelector: 1 matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, sfc]} nodeSelector: 2 matchLabels: node-role.kubernetes.io/sfc: \"\" paused: false maxUnavailable: 1",
"metadata: labels: machineconfiguration.opensfhit.io/role: master",
"metadata: labels: machineconfiguration.opensfhit.io/role: worker",
"modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware'",
"FROM registry.redhat.io/ubi9/ubi-minimal as builder Build the kmod RUN [\"mkdir\", \"/firmware\"] RUN [\"curl\", \"-o\", \"/firmware/firmware.bin\", \"https://artifacts.example.com/firmware.bin\"] FROM registry.redhat.io/ubi9/ubi-minimal Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware",
"apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1",
"export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGES_MUST_GATHER\")].value}')",
"oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather",
"oc logs -fn openshift-kmm deployments/kmm-operator-controller-manager",
"I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0228 09:36:40.769483 1 main.go:234] kmm/setup \"msg\"=\"starting manager\" I0228 09:36:40.769907 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0228 09:36:40.770025 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.784925 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.DaemonSet\" I0228 09:36:40.784968 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.785001 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.785025 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.785039 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" I0228 09:36:40.785458 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\" \"source\"=\"kind source: *v1.Pod\" I0228 09:36:40.786947 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787406 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.787474 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.787488 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.787603 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"NodeKernel\" 
\"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.787634 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" I0228 09:36:40.787680 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" I0228 09:36:40.785607 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0228 09:36:40.787822 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidationOCP\" I0228 09:36:40.787853 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0228 09:36:40.787879 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787905 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" I0228 09:36:40.786489 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\"",
"export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGES_MUST_GATHER\")].value}')",
"oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u",
"oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller-manager",
"I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup \"msg\"=\"Adding controller\" \"name\"=\"ManagedClusterModule\" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup \"msg\"=\"starting manager\" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.378078 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0417 11:34:12.378222 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1beta1.ManagedClusterModule\" I0417 11:34:12.396403 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManifestWork\" I0417 11:34:12.396430 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Build\" I0417 11:34:12.396469 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Job\" I0417 11:34:12.396522 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManagedCluster\" I0417 11:34:12.396543 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" I0417 11:34:12.397175 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0417 11:34:12.397221 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0417 11:34:12.498335 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498570 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498629 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498687 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" 
\"managedcluster\"=\"sno1-0\" I0417 11:34:12.498750 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498801 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.501947 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"worker count\"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"worker count\"=1 I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub \"msg\"=\"registered imagestream info mapping\" \"ImageStream\"={\"name\":\"driver-toolkit\",\"namespace\":\"openshift\"} \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"dtkImage\"=\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934\" \"name\"=\"driver-toolkit\" \"namespace\"=\"openshift\" \"osImageVersion\"=\"412.86.202302211547-0\" \"reconcileID\"=\"e709ff0a-5664-4007-8270-49b5dff8bae9\""
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/specialized_hardware_and_driver_enablement/index
|
Chapter 2. Major differences between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11
|
Chapter 2. Major differences between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 Before migrating your Java applications from Red Hat build of OpenJDK 8 to Red Hat build of OpenJDK 11, familiarize yourself with the changes in Red Hat build of OpenJDK 11. These changes might require that you reconfigure your existing Red Hat build of OpenJDK installation before you migrate to version 11. One of the major differences between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 is the inclusion of a module system in Red Hat build of OpenJDK 11. If you are migrating from Red Hat build of OpenJDK 8, consider moving your application's libraries and modules from the Red Hat build of OpenJDK 8 class path to the module path in Red Hat build of OpenJDK 11. This change can improve the class-loading capabilities of your application. Red Hat build of OpenJDK 11 includes new features and enhancements that can improve the performance of your application, such as enhanced memory usage, improved startup speed, and increased container integration. Note Some features might differ between Red Hat build of OpenJDK and other upstream community or third-party versions of OpenJDK. For example: The Shenandoah garbage collector is available in all versions of Red Hat build of OpenJDK, but this feature might not be available by default in other builds of OpenJDK. JDK Flight Recorder (JFR) support in OpenJDK 8 has been available from version 8u262 onward and enabled by default from version 8u272 onward, but JFR might be disabled in certain builds. Because JFR functionality was backported from the open source version of JFR in OpenJDK 11, the JFR implementation in Red Hat build of OpenJDK 8 is largely similar to JFR in Red Hat build of OpenJDK 11 or later. This JFR implementation is different from JFR in Oracle JDK 8, so users who want to migrate from Oracle JDK to Red Hat build of OpenJDK 8 or later need to be aware of the command-line options for using JFR. 32-bit builds of OpenJDK are generally unsupported in OpenJDK 8 or later, and they might not be available in later versions. 32-bit builds are unsupported in all versions of Red Hat build of OpenJDK. 2.1. Cryptography and security Certain minor cryptography and security differences exist between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11. However, both versions of Red Hat build of OpenJDK have many similar cryptography and security behaviors. Red Hat builds of OpenJDK use system-wide certificates, and each build obtains its list of disabled cryptographic algorithms from a system's global configuration settings. These settings are common to all versions of Red Hat build of OpenJDK, so you can easily change from Red Hat build of OpenJDK 8 to Red Hat build of OpenJDK 11. In FIPS mode, Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 releases are self-configured, so that either release uses the same security providers at startup. The TLS stacks in Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 are identical, because the SunJSSE engine from Red Hat build of OpenJDK 11 was backported to Red Hat build of OpenJDK 8. Both Red Hat build of OpenJDK versions support the TLS 1.3 protocol. The following minor cryptography and security differences exist between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11: Red Hat build of OpenJDK 8 Red Hat build of OpenJDK 11 TLS clients do not use TLSv1.3 for communication with the target server by default.
You can change this behavior by setting the jdk.tls.client.protocols system property to -Djdk.tls.client.protocols=TLSv1.3 . TLS clients use TLSv1.3 by default. This release does not support the use of the X25519 and X448 elliptic curves in the Diffie-Hellman key exchange. This release supports the use of the X25519 and X448 elliptic curves in the Diffie-Hellman key exchange. This release still supports the legacy KRB5-based cipher suites, which are disabled for security reasons. You can enable these cipher suites by changing the jdk.tls.client.cipherSuites and jdk.tls.server.cipherSuites system properties. This release does not support the legacy KRB5-based cipher suites. This release does not support the Datagram Transport Layer Security (DTLS) protocol. This release supports the DTLS protocol. The max_fragment_length extension, which is used by DTLS, is not available for TLS clients. The max_fragment_length extension is available for both clients and servers. 2.2. Garbage collector For garbage collection, Red Hat build of OpenJDK 8 uses the Parallel collector by default, whereas Red Hat build of OpenJDK 11 uses the Garbage-First (G1) collector by default. Before you choose a garbage collector, consider the following details: If you want to improve throughput, use the Parallel collector. The Parallel collector maximizes throughput but ignores latency, which means that garbage collection pauses could become an issue if you want your application to have reasonable response times. However, if your application is performing batch processing and you are not concerned about pause times, the Parallel collector is the best choice. You can switch to the Parallel collector by setting the -XX:+UseParallelGC JVM option. If you want a balance between throughput and latency, use the G1 collector. The G1 collector can achieve great throughput while providing reasonable latencies with pause times of a few hundred milliseconds. If you notice throughput issues when migrating applications from Red Hat build of OpenJDK 8 to Red Hat build of OpenJDK 11, you can switch to the Parallel collector as described above. If you want low-latency garbage collection, use the Shenandoah collector. You can select the garbage collector type that you want to use by specifying the -XX:+<gc_type> JVM option at startup. For example, the -XX:+UseParallelGC option switches to the Parallel collector. 2.3. Garbage collector logging options Red Hat build of OpenJDK 11 includes a new and more powerful logging framework that works more effectively than the old logging framework. Red Hat build of OpenJDK 11 also includes unified JVM logging options and unified GC logging options. The logging system for Red Hat build of OpenJDK 11 activates the -XX:+PrintGCTimeStamps and -XX:+PrintGCDateStamps options by default. Because the logging format in Red Hat build of OpenJDK 11 is different from Red Hat build of OpenJDK 8, you might need to update any of your code that parses garbage collector logs. Modified options in Red Hat build of OpenJDK 11 The old logging framework options are deprecated in Red Hat build of OpenJDK 11. These old options are still available only as aliases for the new logging framework options. If you want to work more effectively with Red Hat build of OpenJDK 11, use the new logging framework options.
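For instance, a minimal sketch of a launch command on Red Hat build of OpenJDK 11 that selects the Parallel collector and writes detailed garbage collector logs with the new framework (the application JAR name is a placeholder):
java -XX:+UseParallelGC -Xlog:gc*:file=gc.log -jar myapp.jar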
The following table outlines the changes in garbage collector logging options between Red Hat build of OpenJDK versions 8 and 11: Options in Red Hat build of OpenJDK 8 Options in Red Hat build of OpenJDK 11 -verbose:gc -Xlog:gc -XX:+PrintGC -Xlog:gc -XX:+PrintGCDetails -Xlog:gc* or -Xlog:gc+USDtags -Xloggc:USDFILE -Xlog:gc:file=USDFILE When using the -XX:+PrintGCDetails option, pass the -Xlog:gc* flag, where the asterisk ( * ) activates more detailed logging. Alternatively, you can pass the -Xlog:gc+USDtags flag. When using the -Xloggc option, append the :file=USDFILE suffix to redirect log output to the specified file. For example -Xlog:gc:file=USDFILE . Removed options in Red Hat build of OpenJDK 11 Red Hat build of OpenJDK 11 does not include the following options, which were deprecated in Red Hat build of OpenJDK 8: -Xincgc -XX:+CMSIncrementalMode -XX:+UseCMSCompactAtFullCollection -XX:+CMSFullGCsBeforeCompaction -XX:+UseCMSCollectionPassing Red Hat build of OpenJDK 11 also removes the following options because the printing of timestamps and datestamps is automatically enabled: -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps Note In Red Hat build of OpenJDK 11, unless you specify the -XX:+IgnoreUnrecognizedVMOptions option, the use of any of the preceding removed options results in a startup failure. Additional resources For more information about the common framework for unified JVM logging and the format of Xlog options, see JEP 158: Unified JVM Logging . For more information about deprecated and removed options, see JEP 214: Remove GC Combinations Deprecated in JDK 8 . For more information about unified GC logging, see JEP 271: Unified GC Logging . 2.4. OpenJDK graphics Before version 8u252, Red Hat build of OpenJDK 8 used Pisces as the default rendering engine. From version 8u252 onward, Red Hat build of OpenJDK 8 uses Marlin as the new default rendering engine. Red Hat build of OpenJDK 11 and later releases also use Marlin by default. Marlin improves the handling of intensive application graphics. Because the rendering engines produce the same results, users should not observe any changes apart from improved performance. 2.5. Webstart and applets You can use Java WebStart by using the IcedTea-Web plug-in with Red Hat build of OpenJDK 8 or Red Hat build of OpenJDK 11 on RHEL 7, RHEL 8, and Microsoft Windows operating systems. The IcedTea-Web plug-in requires that Red Hat build of OpenJDK 8 is installed as a dependency on the system. Applets are not supported on any version of Red Hat build of OpenJDK. Even though some applets can be run on RHEL 7 by using the IcedTea-web plug-in with OpenJDK 8 on a Netscape Plugin Application Programming Interface (NPAPI) browser, Red Hat build of OpenJDK does not support this behavior. Note The upstream community version of OpenJDK does not support applets or Java Webstart. Support for these technologies is deprecated and they are not recommended for use. 2.6. JPMS The Java Platform Module System (JPMS), which was introduced in OpenJDK 9, limits or prevents access to non-public APIs. JPMS also impacts how you can start and compile your Java application (for example, whether you use a class path or a module path). Internal modules By default, Red Hat build of OpenJDK 11 restricts but still permits access to JDK internal modules. This means that most applications can continue to work without requiring changes, but these applications will emit a warning. 
As a workaround for this restriction, you can enable your application to access an internal package by passing a --add-opens <module-name>/<package-in-module>=ALL-UNNAMED option to the java command. For example: Additionally, you can check illegal access cases by passing the --illegal-access=warn option to the java command. This option changes the default behavior of Red Hat build of OpenJDK. ClassLoader The JPMS refactoring changes the ClassLoader hierarchy in Red Hat build of OpenJDK 11. In Red Hat build of OpenJDK 11, the system class loader is no longer an instance of URLClassLoader . In Red Hat build of OpenJDK 11, existing application code that invokes ClassLoader::getSystemClassLoader and casts the result to a URLClassLoader fails with a runtime exception. In Red Hat build of OpenJDK 8, when you create a class loader, you can pass null as the parent of this class loader instance. However, in Red Hat build of OpenJDK 11, applications that pass null as the parent of a class loader might prevent the class loader from locating platform classes. Red Hat build of OpenJDK 11 includes a new class loader that can control the loading of certain classes. This improves the way that a class loader can locate all of its required classes. In Red Hat build of OpenJDK 11, when you create a class loader instance, you can set the platform class loader as its parent by using the ClassLoader.getPlatformClassLoader() API. Additional resources For more information about JPMS, see JEP 261: Module System . 2.7. Extension and endorsed override mechanisms In Red Hat build of OpenJDK 11, both the extension mechanism, which supported optional packages, and the endorsed standards override mechanism are no longer available. These changes mean that any libraries that are added to the <JAVA_HOME>/lib/ext or <JAVA_HOME>/lib/endorsed directory are no longer used, and Red Hat build of OpenJDK 11 generates an error if these directories exist. Additional resources For more information about the removed mechanisms, see JEP 220: Modular Run-Time Images . 2.8. JFR functionality JDK Flight Recorder (JFR) support was backported to Red Hat build of OpenJDK 8 starting from version 8u262. JFR support was subsequently enabled by default from Red Hat build of OpenJDK 8u272 onward. Note The term backporting describes when Red Hat takes an update from a more recent version of upstream software and applies that update to an older version of the software that Red Hat distributes. Backported JFR features The JFR backport to Red Hat build of OpenJDK 8 included all of the following features: A large number of events that are also available in Red Hat build of OpenJDK 11 Command-line tools such as jfr and the Java diagnostic command ( jcmd ) that behave consistently across Red Hat build of OpenJDK versions 8 and 11 The Java Management Extensions (JMX) API that you can use to enable JFR by using the JMX beans interfaces either programmatically or through jcmd The jdk.jfr namespace Note The JFR APIs in the jdk.jfr namespace are not considered part of the Java specification in Red Hat build of OpenJDK 8, but these APIs are part of the Java specification in Red Hat build of OpenJDK 11. Because the JFR API is available in all supported Red Hat build of OpenJDK versions, applications that use JFR do not require any special configuration to use the JFR APIs in Red Hat build of OpenJDK 8 and later versions. JDK Mission Control, which is distributed separately, was also updated to be compatible with Red Hat build of OpenJDK 8.
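Because jcmd behaves consistently across Red Hat build of OpenJDK versions 8 and 11, the same diagnostic commands can start and inspect a recording on either version. A minimal sketch, where the process ID and output file name are placeholders:
jcmd <pid> JFR.start duration=60s filename=/tmp/recording.jfr
jcmd <pid> JFR.check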
Applications that need to be compatible with other OpenJDK versions If your applications need to be compatible with any of the following OpenJDK versions, you might need to adapt these applications: OpenJDK versions earlier than 8u262 OpenJDK versions from other vendors that do not support JFR Oracle JDK To aid this effort, Red Hat has developed a special compatibility layer that provides an empty implementation of JFR, which behaves as if JFR was disabled at runtime. For more information about the JFR compatibility API, see openjdk8-jfr-compat . You can install the resulting .jar file in the jre/lib/ext directory of an OpenJDK 8 distribution. Some applications might need to be updated if these applications were filtering out OpenJDK 8 by checking only for the version number instead of querying the MBeans interface. 2.9. JRE and headless packages All Red Hat build of OpenJDK versions for RHEL platforms are separated into the following types of packages. The following list of package types is sorted in order of minimality, starting with the most minimal. Java Runtime Environment (JRE) headless Provides the library only without support for graphical user interface but supports offline rendering of images JRE Adds the necessary libraries to run for full graphical clients JDK Includes tooling and compilers Red Hat build of OpenJDK versions for Windows platforms do not support headless packages. However, the Red Hat build of OpenJDK packages for Windows platforms are also divided into JRE and JDK components, similar to the packages for RHEL platforms. Note The upstream community version of OpenJDK 11 or later does not separate packages in this way and instead provides one monolithic JDK installation. OpenJDK 9 introduced a modularised version of the JDK class libraries divided by their namespaces. From Red Hat build of OpenJDK 11 onward, these libraries are packaged into jmods modules. For more information, see Jmods . 2.10. Jmods OpenJDK 9 introduced jmods , which is a modularized version of the JDK class libraries, where each module groups classes from a set of related packages. You can use the jlink tool to create derivative runtimes that include only some subset of the modules that are needed to run selected applications. From Red Hat build of OpenJDK 11 onward, Red Hat build of OpenJDK versions for RHEL platforms place the jmods files into a separate RPM package that is not installed by default. If you want to create standalone OpenJDK images for your applications by using jlink , you must manually install the jmods package (for example, java-11-openjdk-jmods ). Note On RHEL platforms, OpenJDK is dynamically linked against system libraries, which means the resulting jlink images are not portable across different versions of RHEL or other systems. If you want to ensure portability, you must use the portable builds of Red Hat build of OpenJDK that are released through the Red Hat Customer Portal. For more information, see Installing Red Hat build of OpenJDK on RHEL by using an archive . 2.11. Deprecated and removed functionality in Red Hat build of OpenJDK 11 Red Hat build of OpenJDK 11 has either deprecated or removed some features that Red Hat build of OpenJDK 8 supports. 
CORBA Red Hat build of OpenJDK 11 does not support the following Common Object Request Broker Architecture (CORBA) tools: Idlj orbd servertool tnamesrv Logging framework Red Hat build of OpenJDK 11 does not support the following APIs: java.util.logging.LogManager.addPropertyChangeListener java.util.logging.LogManager.removePropertyChangeListener java.util.jar.Pack200.Packer.addPropertyChangeListener java.util.jar.Pack200.Packer.removePropertyChangeListener java.util.jar.Pack200.Unpacker.addPropertyChangeListener java.util.jar.Pack200.Unpacker.removePropertyChangeListener Java EE modules Red Hat build of OpenJDK 11 does not support the following APIs: java.activation java.corba java.se.ee (aggregator) java.transaction java.xml.bind java.xml.ws java.xml.ws.annotation java.awt.peer Red Hat build of OpenJDK 11 sets the java.awt.peer package as internal, which means that applications cannot automatically access this package by default. Because of this change, Red Hat build of OpenJDK 11 removed a number of classes and methods that refer to the peer API, such as the Component.getPeer method. The following list outlines the most common use cases for the peer API: Writing of new graphics ports Checking if a component can be displayed Checking if a component is either lightweight or backed by an operating system native UI component resource such as an Xlib XWindow From Java 1.1 onward, the Component.isDisplayable() method provides the functionality to check whether a component can be displayed. From Java 1.2 onward, the Component.isLightweight() method provides the functionality to check whether a component is lightweight. javax.security and java.lang APIs Red Hat build of OpenJDK 11 does not support the following APIs: javax.security.auth.Policy java.lang.Runtime.runFinalizersOnExit(boolean) java.lang.SecurityManager.checkAwtEventQueueAccess() java.lang.SecurityManager.checkMemberAccess(java.lang.Class,int) java.lang.SecurityManager.checkSystemClipboardAccess() java.lang.SecurityManager.checkTopLevelWindow(java.lang.Object) java.lang.System.runFinalizersOnExit(boolean) java.lang.Thread.destroy() java.lang.Thread.stop(java.lang.Throwable) Sun.misc The sun.misc package has always been considered internal and unsupported. In Red Hat build of OpenJDK 11, the following packages are deprecated or removed: sun.misc.BASE64Encoder sun.misc.BASE64Decoder sun.misc.Unsafe sun.reflect.Reflection Consider the following information: Red Hat build of OpenJDK 8 added the java.util.Base64 package as a replacement for the sun.misc.BASE64Encoder and sun.misc.BASE64Decoder APIs. You can use the java.util.Base64 package rather than these APIs, which have been removed from Red Hat build of OpenJDK 11. Red Hat build of OpenJDK 11 deprecates the sun.misc.Unsafe package, which is scheduled for removal. For more information about a new set of APIs that you can use as a replacement for sun.misc.Unsafe , see JEP 193 . Red Hat build of OpenJDK 11 removes the sun.reflect.Reflection package. For more information about new functionality for stack walking that replaces the sun.reflect.Reflection.getCallerClass method, see JEP 259 . Additional resources For more information about the removed Java EE and CORBA modules and the potential replacements for these modules, see JEP 320: Remove the Java EE and CORBA Modules . 2.12. Additional resources (or steps) For more information about Red Hat build of OpenJDK 8 features, see JDK 8 Features . 
For more information about OpenJDK 9 features inherited by Red Hat build of OpenJDK 11, see JDK 9 . For more information about OpenJDK 10 features inherited by Red Hat build of OpenJDK 11, see JDK 10 . For more information about Red Hat build of OpenJDK 11 features, see JDK 11 . For more information about a list of all available JEPs, see JEP 0: JEP Index .
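As a brief illustration of the jmods workflow described in section 2.10, the following sketch installs the jmods package and uses jlink to build a trimmed runtime image. The module list, the output path, and the use of JAVA_HOME to locate the jmods directory are illustrative assumptions; substitute values that match your application and your installed Red Hat build of OpenJDK 11 packages.
# Install the jmods files, which are not installed by default on RHEL
yum install java-11-openjdk-jmods
# Build a minimal runtime containing only the modules the application needs.
# JAVA_HOME is assumed to point at the Red Hat build of OpenJDK 11 installation.
jlink \
  --module-path "$JAVA_HOME/jmods" \
  --add-modules java.base,java.logging,java.sql \
  --output /opt/myapp-runtime
# Confirm which modules the resulting image contains
/opt/myapp-runtime/bin/java --list-modules
Remember that, as noted in section 2.10, such images are dynamically linked against RHEL system libraries and are not portable to other RHEL versions or operating systems.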
|
[
"--add-opens java.base/jdk.internal.math=ALL-UNNAMED"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/migrating_red_hat_build_of_openjdk_8_to_red_hat_build_of_openjdk_11/differences_8_11
|
Chapter 14. SubjectAccessReview [authorization.k8s.io/v1]
|
Chapter 14. SubjectAccessReview [authorization.k8s.io/v1] Description SubjectAccessReview checks whether or not a user or group can perform an action. Type object Required spec 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 14.1.1. .spec Description SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description extra object Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. extra{} array (string) groups array (string) Groups is the groups you're testing for. nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface uid string UID information about the requesting user. user string User is the user you're testing for. If you specify "User" but not "Groups", then is it interpreted as "What if User were not a member of any groups 14.1.2. .spec.extra Description Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. Type object 14.1.3. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 14.1.4. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. 
subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 14.1.5. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 14.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/subjectaccessreviews POST : create a SubjectAccessReview 14.2.1. /apis/authorization.k8s.io/v1/subjectaccessreviews Table 14.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SubjectAccessReview Table 14.2. Body parameters Parameter Type Description body SubjectAccessReview schema Table 14.3. HTTP responses HTTP code Reponse body 200 - OK SubjectAccessReview schema 201 - Created SubjectAccessReview schema 202 - Accepted SubjectAccessReview schema 401 - Unauthorized Empty
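To make the POST endpoint above more concrete, here is a minimal sketch that creates a SubjectAccessReview with the oc client and prints the returned object. The user name, group, namespace, and resource attributes are hypothetical placeholders, not values taken from this reference.
# Ask the authorizer whether user "jane" (group "developers") may list pods in "my-project"
oc create -f - -o yaml <<'EOF'
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane
  groups:
    - developers
  resourceAttributes:
    namespace: my-project
    verb: list
    resource: pods
EOF
The object echoed back by the server carries the status block described in section 14.1.5, with allowed set to true or false and an optional reason.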
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authorization_apis/subjectaccessreview-authorization-k8s-io-v1
|
4.4. Configuring Multiple Monitors
|
4.4. Configuring Multiple Monitors 4.4.1. Configuring Multiple Displays for Red Hat Enterprise Linux Virtual Machines A maximum of four displays can be configured for a single Red Hat Enterprise Linux virtual machine when connecting to the virtual machine using the SPICE protocol. Start a SPICE session with the virtual machine. Open the View drop-down menu at the top of the SPICE client window. Open the Display menu. Click the name of a display to enable or disable that display. Note By default, Display 1 is the only display that is enabled on starting a SPICE session with a virtual machine. If no other displays are enabled, disabling this display will close the session. 4.4.2. Configuring Multiple Displays for Windows Virtual Machines A maximum of four displays can be configured for a single Windows virtual machine when connecting to the virtual machine using the SPICE protocol. Click Compute Virtual Machines and select a virtual machine. With the virtual machine in a powered-down state, click Edit . Click the Console tab. Select the number of displays from the Monitors drop-down list. Note This setting controls the maximum number of displays that can be enabled for the virtual machine. While the virtual machine is running, additional displays can be enabled up to this number. Click OK . Start a SPICE session with the virtual machine. Open the View drop-down menu at the top of the SPICE client window. Open the Display menu. Click the name of a display to enable or disable that display. Note By default, Display 1 is the only display that is enabled on starting a SPICE session with a virtual machine. If no other displays are enabled, disabling this display will close the session.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-configuring_multiple_monitors
|
Chapter 7. Checking integrity with AIDE
|
Chapter 7. Checking integrity with AIDE Advanced Intrusion Detection Environment (AIDE) is a utility that creates a database of files on the system, and then uses that database to ensure file integrity and detect system intrusions. 7.1. Installing AIDE To start file-integrity checking with AIDE, you must install the corresponding package and initialize the AIDE database. Prerequisites The AppStream repository is enabled. Procedure Install the aide package: Generate an initial database: Optional: In the default configuration, the aide --init command checks just a set of directories and files defined in the /etc/aide.conf file. To include additional directories or files in the AIDE database, and to change their watched parameters, edit /etc/aide.conf accordingly. To start using the database, remove the .new substring from the initial database file name: Optional: To change the location of the AIDE database, edit the /etc/aide.conf file and modify the DBDIR value. For additional security, store the database, configuration, and the /usr/sbin/aide binary file in a secure location such as read-only media. 7.2. Performing integrity checks with AIDE You can use the crond service to schedule regular file-integrity checks with AIDE. Prerequisites AIDE is properly installed and its database is initialized. See Installing AIDE Procedure To initiate a manual check: At a minimum, configure the system to run AIDE weekly. Optimally, run AIDE daily. For example, to schedule a daily execution of AIDE at 04:05 a.m. by using the cron command, add the following line to the /etc/crontab file: Additional resources cron(8) man page on your system 7.3. Updating an AIDE database After verifying changes to your system, such as package updates or configuration file adjustments, also update your baseline AIDE database. Prerequisites AIDE is properly installed and its database is initialized. See Installing AIDE Procedure Update your baseline AIDE database: The aide --update command creates the /var/lib/aide/aide.db.new.gz database file. To start using the updated database for integrity checks, remove the .new substring from the file name. 7.4. File-integrity tools: AIDE and IMA Red Hat Enterprise Linux provides several tools for checking and preserving the integrity of files and directories on your system. The following table helps you decide which tool better fits your scenario. Table 7.1. Comparison between AIDE and IMA Question Advanced Intrusion Detection Environment (AIDE) Integrity Measurement Architecture (IMA) What AIDE is a utility that creates a database of files and directories on the system. This database is used to check file integrity and detect intrusions. IMA detects whether a file is altered by comparing file measurements (hash values) with the values previously stored in extended attributes. How AIDE uses rules to compare the integrity state of the files and directories. IMA uses file hash values to detect intrusions. Why Detection - AIDE detects if a file is modified by verifying the rules. Detection and Prevention - IMA detects and prevents attacks by verifying the measurement stored in a file's extended attribute. Usage AIDE detects a threat when the file or directory is modified. IMA detects a threat when someone tries to alter the entire file. Extension AIDE checks the integrity of files and directories on the local system. IMA ensures security on local and remote systems. 7.5. Additional resources aide(1) man page on your system Kernel integrity subsystem
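As a compact recap of the procedures in sections 7.2 and 7.3, the following sketch runs a manual check and then refreshes the baseline after intentional changes. The database path assumes the default DBDIR value; adjust it if you changed the location in /etc/aide.conf.
# Run a manual integrity check against the current baseline
aide --check
# After intentional changes, such as package updates, rebuild the baseline
aide --update
# Activate the new baseline by removing the .new substring from the file name
mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz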
|
[
"yum install aide",
"aide --init Start timestamp: 2024-07-08 10:39:23 -0400 (AIDE 0.16) AIDE initialized database at /var/lib/aide/aide.db.new.gz Number of entries: 55856 --------------------------------------------------- The attributes of the (uncompressed) database(s): --------------------------------------------------- /var/lib/aide/aide.db.new.gz ... SHA512 : mZaWoGzL2m6ZcyyZ/AXTIowliEXWSZqx IFYImY4f7id4u+Bq8WeuSE2jasZur/A4 FPBFaBkoCFHdoE/FW/V94Q==",
"mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz",
"aide --check Start timestamp: 2024-07-08 10:43:46 -0400 (AIDE 0.16) AIDE found differences between database and filesystem!! Summary: Total number of entries: 55856 Added entries: 0 Removed entries: 0 Changed entries: 1 --------------------------------------------------- Changed entries: --------------------------------------------------- f ... ..S : /root/.viminfo --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /root/.viminfo SELinux : system_u:object_r:admin_home_t:s | unconfined_u:object_r:admin_home 0 | _t:s0 ...",
"05 4 * * * root /usr/sbin/aide --check",
"aide --update"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/security_hardening/checking-integrity-with-aide_security-hardening
|
Server Guide
|
Server Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/index
|
Chapter 3. Organizations, Locations, and Life Cycle Environments
|
Chapter 3. Organizations, Locations, and Life Cycle Environments Red Hat Satellite takes a consolidated approach to Organization and Location management. System administrators define multiple Organizations and multiple Locations in a single Satellite Server. For example, a company might have three Organizations (Finance, Marketing, and Sales) across three countries (United States, United Kingdom, and Japan). In this example, Satellite Server manages all Organizations across all geographical Locations, creating nine distinct contexts for managing systems. In addition, users can define specific locations and nest them to create a hierarchy. For example, Satellite administrators might divide the United States into specific cities, such as Boston, Phoenix, or San Francisco. Figure 3.1. Example Topology for Red Hat Satellite Satellite Server defines all locations and organizations. Each respective Satellite Capsule Server synchronizes content and handles configuration of systems in a different location. The main Satellite Server retains the management function, while the content and configuration are synchronized between the main Satellite Server and a Satellite Capsule Server assigned to certain locations. 3.1. Organizations Organizations divide Red Hat Satellite resources into logical groups based on ownership, purpose, content, security level, or other divisions. You can create and manage multiple organizations through Red Hat Satellite, then divide and assign your subscriptions to each individual organization. This provides a method of managing the content of several individual organizations under one management system. 3.2. Locations Locations divide organizations into logical groups based on geographical location. Each location is created and used by a single account, although each account can manage multiple locations and organizations. 3.3. Life Cycle Environments Application life cycles are divided into life cycle environments which represent each stage of the application life cycle. Life cycle environments are linked to form an environment path . You can promote content along the environment path to the next life cycle environment when required. For example, if development ends on a particular version of an application, you can promote this version to the testing environment and start development on the next version. Figure 3.2. An Environment Path Containing Four Environments
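This chapter is conceptual and does not prescribe a tool, but as an optional illustration, the following sketch uses the hammer CLI, which is not covered here; treat the commands, option names, and example values as assumptions to verify against your Satellite version. It creates an organization, a location, and a life cycle environment chained after the built-in Library environment.
# Create an organization and a location (names are illustrative)
hammer organization create --name "Finance"
hammer location create --name "Boston"
# Create a life cycle environment that follows Library in the environment path
hammer lifecycle-environment create \
  --organization "Finance" \
  --name "Testing" \
  --prior "Library"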
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/chap-architecture_guide-org_loc_and_life_cycle_environments
|
Chapter 9. Configuring Routes
|
Chapter 9. Configuring Routes 9.1. Route configuration 9.1.1. Creating an HTTP-based route Create a route to host your application at a public URL. The route can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as an administrator. You have a web application that exposes a port and a TCP endpoint listening for traffic on the port. Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create an unsecured route to the hello-openshift application by running the following command: USD oc expose svc hello-openshift Verification To verify that the route resource that you created, run the following command: USD oc get routes -o yaml <name of resource> 1 1 In this example, the route is named hello-openshift . Sample YAML definition of the created unsecured route apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift 1 The host field is an alias DNS record that points to the service. This field can be any valid DNS name, such as www.example.com . The DNS name must follow DNS952 subdomain conventions. If not specified, a route name is automatically generated. 2 The targetPort field is the target port on pods that is selected by the service that this route points to. Note To display your default ingress domain, run the following command: USD oc get ingresses.config/cluster -o jsonpath={.spec.domain} 9.1.2. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: USD oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : USD oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 9.1.3. HTTP Strict Transport Security HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling HTTPS transport is required, without using HTTP redirects. HSTS is useful for speeding up interactions with websites. When HSTS policy is enforced, HSTS adds a Strict Transport Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. 
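As a minimal sketch of that HTTP-to-HTTPS redirect (the route name and namespace are placeholders, and the route is assumed to already have TLS termination configured), you can patch an existing route as follows:
# Redirect insecure HTTP requests to HTTPS on an existing TLS-terminated route
oc patch route <route_name> -n <namespace> \
  --type=merge \
  -p '{"spec":{"tls":{"insecureEdgeTerminationPolicy":"Redirect"}}}'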
When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect. Cluster administrators can configure HSTS to do the following: Enable HSTS per-route Disable HSTS per-route Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains Important HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes. 9.1.3.1. Enabling HTTP Strict Transport Security per-route HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the OpenShift CLI ( oc ). Procedure To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command. To properly run the command, ensure that the semicolon ( ; ) in the haproxy.router.openshift.io/hsts_header route annotation is also surrounded by double quotation marks ( "" ). Example annotate command that sets the maximum age to 31536000 seconds (approximately one year) USD oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header=max-age=31536000;\ includeSubDomains;preload" Example route configured with an annotation apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 # ... spec: host: def.abc.com tls: termination: "reencrypt" ... wildcardPolicy: "Subdomain" # ... 1 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0 , it negates the policy. 2 Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host. 3 Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header. Additional resources Enabling HTTP/2 Ingress connectivity 9.1.3.2. Disabling HTTP Strict Transport Security per-route To disable HTTP strict transport security (HSTS) per-route, you can set the max-age value in the route annotation to 0 . Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the OpenShift CLI ( oc ).
Procedure To disable HSTS, set the max-age value in the route annotation to 0 , by entering the following command: USD oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Tip You can alternatively apply the following YAML to create the config map: Example of disabling HSTS per-route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0 To disable HSTS for every route in a namespace, enter the following command: USD oc annotate route --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Verification To query the annotation for all routes, enter the following command: USD oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: routename HSTS: max-age=0 9.1.4. Using cookies to keep route statefulness Red Hat OpenShift Service on AWS provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear. Red Hat OpenShift Service on AWS can use cookies to configure session persistence. The ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the request in the session. The cookie tells the ingress controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod. Note Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend. If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides source IP, the same number is set for all connections and traffic is sent to the same pod. 9.1.4.1. Annotating a route with a cookie You can set a cookie name to overwrite the default, auto-generated one for the route. This allows the application receiving route traffic to know the cookie name. Deleting the cookie can force the request to re-choose an endpoint. The result is that if a server is overloaded, that server tries to remove the requests from the client and redistribute them. Procedure Annotate the route with the specified cookie name: USD oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>" where: <route_name> Specifies the name of the route. <cookie_name> Specifies the name for the cookie. For example, to annotate the route my_route with the cookie name my_cookie : USD oc annotate route my_route router.openshift.io/cookie_name="my_cookie" Capture the route hostname in a variable: USD ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}') where: <route_name> Specifies the name of the route. Save the cookie, and then access the route: USD curl USDROUTE_NAME -k -c /tmp/cookie_jar Use the cookie saved by the command when connecting to the route: USD curl USDROUTE_NAME -k -b /tmp/cookie_jar 9.1.5. 
Path-based routes Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. The following table shows example routes and their accessibility: Table 9.1. Route availability Route When Compared to Accessible www.example.com/test www.example.com/test Yes www.example.com No www.example.com/test and www.example.com www.example.com/test Yes www.example.com Yes www.example.com www.example.com/text Yes (Matched by the host, not the route) www.example.com Yes An unsecured route with a path apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: "/test" 1 to: kind: Service name: service-name 1 The path is the only added attribute for a path-based route. Note Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request. 9.1.6. HTTP header configuration Red Hat OpenShift Service on AWS provides different methods for working with HTTP headers. When setting or deleting headers, you can use specific fields in the Ingress Controller or an individual route to modify request and response headers. You can also set certain headers by using route annotations. The various ways of configuring headers can present challenges when working together. Note You can only set or delete headers within an IngressController or Route CR, you cannot append them. If an HTTP header is set with a value, that value must be complete and not require appending in the future. In situations where it makes sense to append a header, such as the X-Forwarded-For header, use the spec.httpHeaders.forwardedHeaderPolicy field, instead of spec.httpHeaders.actions . 9.1.6.1. Order of precedence When the same HTTP header is modified both in the Ingress Controller and in a route, HAProxy prioritizes the actions in certain ways depending on whether it is a request or response header. For HTTP response headers, actions specified in the Ingress Controller are executed after the actions specified in a route. This means that the actions specified in the Ingress Controller take precedence. For HTTP request headers, actions specified in a route are executed after the actions specified in the Ingress Controller. This means that the actions specified in the route take precedence. For example, a cluster administrator sets the X-Frame-Options response header with the value DENY in the Ingress Controller using the following configuration: Example IngressController spec apiVersion: operator.openshift.io/v1 kind: IngressController # ... spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY A route owner sets the same response header that the cluster administrator set in the Ingress Controller, but with the value SAMEORIGIN using the following configuration: Example Route spec apiVersion: route.openshift.io/v1 kind: Route # ... spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN When both the IngressController spec and Route spec are configuring the X-Frame-Options response header, then the value set for this header at the global level in the Ingress Controller takes precedence, even if a specific route allows frames. 
For a request header, the Route spec value overrides the IngressController spec value. This prioritization occurs because the haproxy.config file uses the following logic, where the Ingress Controller is considered the front end and individual routes are considered the back end. The header value DENY applied to the front end configurations overrides the same header with the value SAMEORIGIN that is set in the back end: frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN' Additionally, any actions defined in either the Ingress Controller or a route override values set using route annotations. 9.1.6.2. Special case headers The following headers are either prevented entirely from being set or deleted, or allowed under specific circumstances: Table 9.2. Special case header configuration options Header name Configurable using IngressController spec Configurable using Route spec Reason for disallowment Configurable using another method proxy No No The proxy HTTP request header can be used to exploit vulnerable CGI applications by injecting the header value into the HTTP_PROXY environment variable. The proxy HTTP request header is also non-standard and prone to error during configuration. No host No Yes When the host HTTP request header is set using the IngressController CR, HAProxy can fail when looking up the correct route. No strict-transport-security No No The strict-transport-security HTTP response header is already handled using route annotations and does not need a separate implementation. Yes: the haproxy.router.openshift.io/hsts_header route annotation cookie and set-cookie No No The cookies that HAProxy sets are used for session tracking to map client connections to particular back-end servers. Allowing these headers to be set could interfere with HAProxy's session affinity and restrict HAProxy's ownership of a cookie. Yes: the haproxy.router.openshift.io/disable_cookie route annotation the haproxy.router.openshift.io/cookie_name route annotation 9.1.7. Setting or deleting HTTP request and response headers in a route You can set or delete certain HTTP request and response headers for compliance purposes or other reasons. You can set or delete these headers either for all routes served by an Ingress Controller or for specific routes. For example, you might want to enable a web application to serve content in alternate locations for specific routes if that content is written in multiple languages, even if there is a default global location specified by the Ingress Controller serving the routes. The following procedure creates a route that sets the Content-Location HTTP request header so that the URL associated with the application, https://app.example.com , directs to the location https://app.example.com/lang/en-us . Directing application traffic to this location means that anyone using that specific route is accessing web content written in American English. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged into an Red Hat OpenShift Service on AWS cluster as a project administrator. You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port. 
Procedure Create a route definition and save it in a file called app-example-route.yaml : YAML definition of the created route with HTTP header directives apiVersion: route.openshift.io/v1 kind: Route # ... spec: host: app.example.com tls: termination: edge to: kind: Service name: app-example httpHeaders: actions: 1 response: 2 - name: Content-Location 3 action: type: Set 4 set: value: /lang/en-us 5 1 The list of actions you want to perform on the HTTP headers. 2 The type of header you want to change. In this case, a response header. 3 The name of the header you want to change. For a list of available headers you can set or delete, see HTTP header configuration . 4 The type of action being taken on the header. This field can have the value Set or Delete . 5 When setting HTTP headers, you must provide a value . The value can be a string from a list of available directives for that header, for example DENY , or it can be a dynamic value that will be interpreted using HAProxy's dynamic value syntax. In this case, the value is set to the relative location of the content. Create a route to your existing web application using the newly created route definition: USD oc -n app-example create -f app-example-route.yaml For HTTP request headers, the actions specified in the route definitions are executed after any actions performed on HTTP request headers in the Ingress Controller. This means that any values set for those request headers in a route will take precedence over the ones set in the Ingress Controller. For more information on the processing order of HTTP headers, see HTTP header configuration . 9.1.8. Route-specific annotations The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route. Important To create an allow list with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message. Table 9.3. Route annotations Variable Description Environment variable used as default haproxy.router.openshift.io/balance Sets the load-balancing algorithm. Available options are random , source , roundrobin , and leastconn . The default value is source for TLS passthrough routes. For all other routes, the default is random . ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM . haproxy.router.openshift.io/disable_cookies Disables the use of cookies to track related connections. If set to 'true' or 'TRUE' , the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request. router.openshift.io/cookie_name Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. haproxy.router.openshift.io/pod-concurrent-connections Sets the maximum number of connections that are allowed to a backing pod from a router. Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, each may connect this many times. If not set, or set to 0, there is no limit. 
haproxy.router.openshift.io/rate-limit-connections Setting 'true' or 'TRUE' enables rate limiting functionality which is implemented through stick-tables on the specific backend per route. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-http Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-tcp Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/timeout Sets a server-side timeout for the route. (TimeUnits) ROUTER_DEFAULT_SERVER_TIMEOUT haproxy.router.openshift.io/timeout-tunnel This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set. ROUTER_DEFAULT_TUNNEL_TIMEOUT ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after You can set either an IngressController or the ingress config . This annotation redeploys the router and configures the HA proxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop. ROUTER_HARD_STOP_AFTER router.openshift.io/haproxy.health.check.interval Sets the interval for the back-end health checks. (TimeUnits) ROUTER_BACKEND_CHECK_INTERVAL haproxy.router.openshift.io/ip_allowlist Sets an allowlist for the route. The allowlist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the allowlist are dropped. The maximum number of IP addresses and CIDR ranges directly visible in the haproxy.config file is 61. [ 1 ] haproxy.router.openshift.io/hsts_header Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. haproxy.router.openshift.io/rewrite-target Sets the rewrite path of the request on the backend. router.openshift.io/cookie-same-site Sets a value to restrict cookies. The values are: Lax : the browser does not send cookies on cross-site requests, but does send cookies when users navigate to the origin site from an external site. This is the default browser behavior when the SameSite value is not specified. Strict : the browser sends cookies only for same-site requests. None : the browser sends cookies for both cross-site and same-site requests. This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation . haproxy.router.openshift.io/set-forwarded-headers Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are: append : appends the header, preserving any existing header. This is the default value. 
replace : sets the header, removing any existing header. never : never sets the header, but preserves any existing header. if-none : sets the header if it is not already set. ROUTER_SET_FORWARDED_HEADERS If the number of IP addresses and CIDR ranges in an allowlist exceeds 61, they are written into a separate file that is then referenced from haproxy.config . This file is stored in the var/lib/haproxy/router/allowlists folder. Note To ensure that the addresses are written to the allowlist, check that the full list of CIDR ranges are listed in the Ingress Controller configuration file. The etcd object size limit restricts how large a route annotation can be. Because of this, it creates a threshold for the maximum number of IP addresses and CIDR ranges that you can include in an allowlist. Note Environment variables cannot be edited. Router timeout variables TimeUnits are represented by a number followed by the unit: us *(microseconds), ms (milliseconds, default), s (seconds), m (minutes), h *(hours), d (days). The regular expression is: [1-9][0-9]*( us \| ms \| s \| m \| h \| d ). Variable Default Description ROUTER_BACKEND_CHECK_INTERVAL 5000ms Length of time between subsequent liveness checks on back ends. ROUTER_CLIENT_FIN_TIMEOUT 1s Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router. ROUTER_DEFAULT_CLIENT_TIMEOUT 30s Length of time that a client has to acknowledge or send data. ROUTER_DEFAULT_CONNECT_TIMEOUT 5s The maximum connection time. ROUTER_DEFAULT_SERVER_FIN_TIMEOUT 1s Controls the TCP FIN timeout from the router to the pod backing the route. ROUTER_DEFAULT_SERVER_TIMEOUT 30s Length of time that a server has to acknowledge or send data. ROUTER_DEFAULT_TUNNEL_TIMEOUT 1h Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads. ROUTER_SLOWLORIS_HTTP_KEEPALIVE 300s Set the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive . It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay , which is set to 5s . In this case, the overall timeout would be 300s plus 5s . ROUTER_SLOWLORIS_TIMEOUT 10s Length of time the transmission of an HTTP request can take. RELOAD_INTERVAL 5s Allows the minimum frequency for the router to reload and accept new changes. ROUTER_METRICS_HAPROXY_TIMEOUT 5s Timeout for the gathering of HAProxy metrics. A route setting custom timeout apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1 ... 1 Specifies the new timeout with HAProxy supported units ( us , ms , s , m , h , d ). If the unit is not provided, ms is the default. Note Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route. 
A route that allows only one specific IP address metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.10 A route that allows several IP addresses metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.10 192.168.1.11 192.168.1.12 A route that allows an IP address CIDR network metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.0/24 A route that allows both IP an address and IP address CIDR networks metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8 A route specifying a rewrite target apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1 ... 1 Sets / as rewrite path of the request on the backend. Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation. The following table provides examples of the path rewriting behavior for various combinations of spec.path , request path, and rewrite target. Table 9.4. rewrite-target examples Route.spec.path Request path Rewrite target Forwarded request path /foo /foo / / /foo /foo/ / / /foo /foo/bar / /bar /foo /foo/bar/ / /bar/ /foo /foo /bar /bar /foo /foo/ /bar /bar/ /foo /foo/bar /baz /baz/bar /foo /foo/bar/ /baz /baz/bar/ /foo/ /foo / N/A (request path does not match route path) /foo/ /foo/ / / /foo/ /foo/bar / /bar Certain special characters in haproxy.router.openshift.io/rewrite-target require special handling because they must be escaped properly. Refer to the following table to understand how these characters are handled. Table 9.5. Special character handling For character Use characters Notes # \# Avoid # because it terminates the rewrite expression % % or %% Avoid odd sequences such as %%% ' \' Avoid ' because it is ignored All other valid URL characters can be used without escaping. 9.1.9. Creating a route using the default certificate through an Ingress object If you create an Ingress object without specifying any TLS configuration, Red Hat OpenShift Service on AWS generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows. Prerequisites You have a service that you want to expose. You have access to the OpenShift CLI ( oc ). Procedure Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml : YAML definition of an Ingress object apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend ... spec: rules: ... tls: - {} 1 1 Use this exact syntax to specify TLS without specifying a custom certificate. Create the Ingress object by running the following command: USD oc create -f example-ingress.yaml Verification Verify that Red Hat OpenShift Service on AWS has created the expected route for the Ingress object by running the following command: USD oc get routes -o yaml Example output apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 ... spec: ... tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3 ... 1 The name of the route includes the name of the Ingress object followed by a random suffix. 
2 In order to use the default certificate, the route should not specify spec.certificate . 3 The route should specify the edge termination policy. 9.1.10. Creating a route using the destination CA certificate in the Ingress annotation The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate. Prerequisites You may have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Procedure Create a secret for the destination CA certificate by entering the following command: USD oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path> For example: USD oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt Example output secret/dest-ca-cert created Add the route.openshift.io/destination-ca-certificate-secret to the Ingress annotations: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1 ... 1 The annotation references a kubernetes secret. The secret referenced in this annotation will be inserted into the generated route. Example output apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: ... tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- ... Additional resources Specifying an alternative cluster domain using the appsDomain option 9.2. Secured routes Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates. 9.2.1. Creating a re-encrypt route with a custom certificate You can configure a secure route using reencrypt TLS termination with a custom certificate by using the oc create route command. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You must also specify a destination CA certificate to enable the Ingress Controller to trust the service's certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , cacert.crt , and (optionally) ca.crt . Substitute the name of the Service resource that you want to expose for frontend . 
Substitute the appropriate hostname for www.example.com . Create a secure Route resource using reencrypt TLS termination and a custom certificate: USD oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route reencrypt --help for more options. 9.2.2. Creating an edge route with a custom certificate You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , and (optionally) ca.crt . Substitute the name of the service that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . Create a secure Route resource using edge TLS termination and a custom certificate. USD oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route edge --help for more options. 9.2.3. Creating a passthrough route You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route. Prerequisites You must have a service that you want to expose. 
Procedure Create a Route resource: USD oc create route passthrough route-passthrough-secured --service=frontend --port=8080 If you examine the resulting Route resource, it should look similar to the following: A Secured Route Using Passthrough Termination apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend 1 The name of the object, which is limited to 63 characters. 2 The termination field is set to passthrough . This is the only required tls field. 3 Optional insecureEdgeTerminationPolicy . The only valid values are None , Redirect , or empty for disabled. The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication. 9.2.4. Creating a route with externally managed certificate Important Securing route with external certificates in TLS secrets is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can configure Red Hat OpenShift Service on AWS routes with third-party certificate management solutions by using the .spec.tls.externalCertificate field of the route API. You can reference externally managed TLS certificates via secrets, eliminating the need for manual certificate management. Using the externally managed certificate reduces errors ensuring a smoother rollout of certificate updates, enabling the OpenShift router to serve renewed certificates promptly. Note This feature applies to both edge routes and re-encrypt routes. Prerequisites You must enable the RouteExternalCertificate feature gate. You must have the create and update permissions on the routes/custom-host . You must have a secret containing a valid certificate/key pair in PEM-encoded format of type kubernetes.io/tls , which includes both tls.key and tls.crt keys. You must place the referenced secret in the same namespace as the route you want to secure. Procedure Create a role in the same namespace as the secret to allow the router service account read access by running the following command: USD oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret-name> \ 1 --namespace=<current-namespace> 2 1 Specify the actual name of your secret. 2 Specify the namespace where both your secret and route reside. Create a rolebinding in the same namespace as the secret and bind the router service account to the newly created role by running the following command: USD oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<current-namespace> 1 1 Specify the namespace where both your secret and route reside. Create a YAML file that defines the route and specifies the secret containing your certificate using the following example. 
YAML definition of the secure route apiVersion: route.openshift.io/v1 kind: Route metadata: name: myedge namespace: test spec: host: myedge-test.apps.example.com tls: externalCertificate: name: <secret-name> 1 termination: edge [...] [...] 1 Specify the actual name of your secret. Create a route resource by running the following command: USD oc apply -f <route.yaml> 1 1 Specify the generated YAML filename. If the secret exists and has a certificate/key pair, the router will serve the generated certificate if all prerequisites are met. Note If .spec.tls.externalCertificate is not provided, the router will use default generated certificates. You cannot provide the .spec.tls.certificate field or the .spec.tls.key field when using the .spec.tls.externalCertificate field. Additional resources For troubleshooting routes with externally managed certificates, check the Red Hat OpenShift Service on AWS router pod logs for errors, see Investigating pod issues .
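Not part of the official procedure, but a quick way to sanity-check the routes created above: the commands below read back the TLS termination type and inspect the certificate actually served for the route host. The route name frontend and the host www.example.com are the example values used in this section; substitute your own, and note that the router is assumed to be reachable on port 443.
oc get route frontend -o jsonpath='{.spec.tls.termination}{"\n"}'
# Inspect the certificate presented for the route host.
echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -subject -issuer -dates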
|
[
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"oc expose svc hello-openshift",
"oc get routes -o yaml <name of resource> 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift",
"oc get ingresses.config/cluster -o jsonpath={.spec.domain}",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header=max-age=31536000; includeSubDomains;preload\"",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0",
"oc annotate route --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: routename HSTS: max-age=0",
"oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"",
"oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"",
"ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')",
"curl USDROUTE_NAME -k -c /tmp/cookie_jar",
"curl USDROUTE_NAME -k -b /tmp/cookie_jar",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY",
"apiVersion: route.openshift.io/v1 kind: Route spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN",
"frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN'",
"apiVersion: route.openshift.io/v1 kind: Route spec: host: app.example.com tls: termination: edge to: kind: Service name: app-example httpHeaders: actions: 1 response: 2 - name: Content-Location 3 action: type: Set 4 set: value: /lang/en-us 5",
"oc -n app-example create -f app-example-route.yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1",
"metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.10",
"metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.10 192.168.1.11 192.168.1.12",
"metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.0/24",
"metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1",
"oc create -f example-ingress.yaml",
"oc get routes -o yaml",
"apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3",
"oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path>",
"oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt",
"secret/dest-ca-cert created",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create route passthrough route-passthrough-secured --service=frontend --port=8080",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend",
"oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret-name> \\ 1 --namespace=<current-namespace> 2",
"oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<current-namespace> 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: myedge namespace: test spec: host: myedge-test.apps.example.com tls: externalCertificate: name: <secret-name> 1 termination: edge [...] [...]",
"oc apply -f <route.yaml> 1"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/networking/configuring-routes
|
Chapter 3. Red Hat build of OpenJDK features
|
Chapter 3. Red Hat build of OpenJDK features 3.1. New features and enhancements This section describes the new features introduced in this release. It also contains information about changes in the existing features. Note For all the other changes and security fixes, see https://mail.openjdk.java.net/pipermail/jdk-updates-dev/2021-January/004689.html . 3.1.1. Added -groupname option to keytool key pair generation command A new -groupname option has been added to the keytool -genkeypair command. Use the -groupname option to specify a named elliptic curve (EC) group when generating a key pair. For example, the following command generates an EC key pair using the secp384r1 curve: keytool -genkeypair -keyalg EC -groupname secp384r1 It is recommended that you use the -groupname option over the -keysize option, because there might be multiple curves of the same size. For more information, see JDK-8213821 . 3.1.2. Added support for X25519 and X448 in TLS The named elliptic curve groups x25519 and x448 are now available for JSSE key agreement in TLS versions 1.0 to 1.3. The curve group x25519 is the most preferred of the default enabled named groups. The default ordered list is as follows: x25519 secp256r1 secp384r1 secp521r1 x448 secp256k1 ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192 Use the system property jdk.tls.namedGroups to override the default list. For more information, see JDK-8225764 . 3.1.3. Added default native GSS-API library on Windows A native GSS-API library has been added to JDK on the Windows platform. The library is client-side only and uses the default credentials. It is activated by setting the sun.security.jgss.native system property to "true". A user can still make use of a third-party native GSS-API library instead by setting the system property sun.security.jgss.lib to its path. For more information, see JDK-8214079 . 3.1.4. Added jarsigner to preserve POSIX file permission and symlink attribute When signing a file that contains POSIX file permission or symlink attributes, jarsigner now preserves these attributes in the newly signed file but warns that these attributes are unsigned and not protected by the signature. The same warning is printed during the jarsigner -verify operation for such files. Note The jar tool does not read or write these attributes. This change is more visible to tools like unzip where these attributes are preserved. For more information, see JDK-8248263 . Revised on 2024-05-09 16:46:18 UTC
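As an illustration of the keytool and TLS changes above, the following is a minimal sketch that generates an EC key pair on the secp384r1 curve and then overrides the default named-groups list at launch. The keystore demo.p12, the alias ec-demo, the password changeit, and the application JAR my-app.jar are placeholders, not values from this release note.
keytool -genkeypair -alias ec-demo -keyalg EC -groupname secp384r1 -dname "CN=demo" -keystore demo.p12 -storetype PKCS12 -storepass changeit
keytool -list -v -keystore demo.p12 -storepass changeit -alias ec-demo
# Override the default named elliptic curve groups for a TLS client or server.
java -Djdk.tls.namedGroups="x25519,secp256r1,secp384r1" -jar my-app.jar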
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.10/rn-openjdk11010-features
|
6.3. Remote Authentication Using GSSAPI
|
6.3. Remote Authentication Using GSSAPI In the context of Red Hat Virtualization, remote authentication refers to authentication that is handled by a remote service, not the Red Hat Virtualization Manager. Remote authentication is used for user or API connections coming to the Manager from within an AD, IdM, or RHDS domain. The Red Hat Virtualization Manager must be configured by an administrator using the engine-manage-domains tool to be a part of an RHDS, AD, or IdM domain. This requires that the Manager be provided with credentials for an account from the RHDS, AD, or IdM directory server for the domain with sufficient privileges to join a system to the domain. After domains have been added, domain users can be authenticated by the Red Hat Virtualization Manager against the directory server using a password. The Manager uses a framework called the Simple Authentication and Security Layer (SASL) which in turn uses the Generic Security Services Application Program Interface (GSSAPI) to securely verify the identity of a user, and ascertain the authorization level available to the user. Figure 6.1. GSSAPI Authentication
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/remote_authentication_using_gssapi
|
Chapter 8. Certification workflow
|
Chapter 8. Certification workflow 8.1. Adding certifications to previously certified hardware Use this process to create a new certification request for a system or component that has already completed a hardware certification process for an earlier RHEL version, or for a system or component that is being certified at the moment. Procedure Log in to the Red Hat Certification portal . Click New Certification . Select the Red Hat product, version, and platform for certification. Then, click . Select vendor, make, and name of an already certified product from the dropdown lists. Then, click . After the request is created, monitor the request for questions from the review team as they create the official test plan. 8.2. Changing features or hardware in an existing certification Apply for a supplemental certification to add hardware or features to an existing certification. You can request a supplemental certification for features that were not previously certified, either because they were not tested or their tests failed. After the additional features are certified, Red Hat will add these features to the certification catalog. Procedure Log in to the Red Hat Certification portal . Click the existing hardware certification. Note You might need to adjust the look-back period in the pop-up menu beyond the default 90 days. Click the Related Certifications tab to view existing related certifications. Click Add Related Certification . In the dialog box, select Supplemental to create a new supplemental certification. Review the confirmation screen displaying certification details. If it is correct, click Create Case , otherwise, click Back to change the certification type. The new case will be created and opened. In the new case, provide details for the test plan by either: Attaching files with information of the components to be added. Adding a comment with information about the devices to be added. Run the certification tests. You do not need to wait for the new test plan. 8.3. Creating a system pass-through certificate using existing specification file A system pass-through certification creates a copy of a certified system and lists it under a different vendor name, a different make, or a different model. Pass-through is used when a vendor sells their system to a partner who then rebrands it, or if a vendor sells two or more systems where one system is a superset of another. Procedure Log in to the Red Hat Certification portal . Click the existing hardware certification. Click Related Certification . Click Add Related Certification , and select Pass-through . Choose the appropriate product: If the product has already been created, select it. If the product is not in the list, create it as a new product. Click New Certification to create the new pass-through certification. The Red Hat certification team will review the hardware specification and publish the new system certification. After the new certification is published, partners can refer to it as a pass-through certification. 8.3.1. Copying an existing system certification to a new entry Procedure To create the Pass through certification, go to the Red Hat certification web user interface, click the existing hardware system certification that is certified. Click the Certification Section. In the Related Certification tab, go to the Pass through Certification section and click the New Certification button. In the Vendor field select the Vendor whose product you need to pass-through.
In the Make field select the make that you need to pass through. Click the Create button. This will generate a request to create a pass through system specification and a pass through certification for the generated specification. If the original system specifications and the pass-through system specifications are identical or have no differences, no additional testing will be required. If differences are found, the Red Hat certification team will discuss with you what should be done to account for them. 8.3.2. Creating a system pass-through certificate using existing specification file Procedure Go to the Red Hat certification web user interface, click the existing hardware system certification that is certified. Click the Certification Section. In the Related Certification tab, go to the Pass through Certification section and choose the pass through specification file that has been created. This will create the second pass-through certificate using the same specification entry. 8.4. Creating and publishing a component pass-through certification A component pass-through certification essentially creates a copy of a certified component, listing it under a different vendor name, a different make, or a different model. This type of pass-through is used when a system vendor wants to include a component that has already been certified by a component vendor, when a component vendor sells their components to a third party who rebrands them, or if a vendor sells two or more components where one system is a superset of the others. Procedure Create a system certification. See Opening a new certification case by using the Red Hat Certification portal . Select the Vendor , Make and Name . Click the New Product button. This will take you to Choose the Certification Program web page. Select the Vendor and Program as Hardware. Click the button. This will take you to Define the Red Hat Hardware Certification Vendor Product web page. Fill in all the relevant details. From the drop down list of Category , select the category as Component/Peripheral . This creates the Component certification. The Red Hat certification team certifies and publishes the newly created Component certification. After the certificate is certified and published, it becomes public for other partners to refer it as a pass through component. 8.4.1. Copying an existing component certification to a new entry Procedure To copy the Component certification, go to the Red Hat Certification web user interface, click the existing hardware system certification that is certified. Click the Certification section. In the Related Certification tab, go to the Pass through Certification section and click the New Certification button. In the Vendor field select the Component Vendor whose product you need to pass-through. In the Make field select the Component Make that you need to pass through. Note Here, the Component Vendor and the Component Make are the fields that gets generated while performing Steps 1 to 4 of Creating and Publishing a Component Certification. If the original component specifications and the pass-through component specifications are identical then, no additional testing will be required. If there are differences found, the Red Hat certification team will discuss with you what should be done to account for them. 8.5. Adding missing data to the product certification To ensure accurate and complete certification information, follow this streamlined process for adding missing attributes before publishing the certification. 
Procedure Log in to the Red Hat Certification portal . Click the existing hardware certification. In the Certification Status section, click the question mark icon. A Completion Requirements notification banner displays the information about the missing attributes. Click one of the missing attributes, and you will be redirected to the Properties tab of that certification. Optional: Click the product under the Partner Product section and navigate to the Properties tab. On the Properties tab, enter the missing details such as Detail Description , Short Description , Partner Product Logo or Product Logo , and System Types . Note The system type option is applicable only if you have selected the product category as System . From the System Types list select one or many system types. Add marketing URLs for different system types in the Enter url field. Click Update . Verification The question mark icon is no longer visible if all the required data is present or updated. After the certification is complete and published, the updated product data, along with the system types, will be available on the Red Hat Ecosystem Catalog . Note All the fields marked with an asterisk * are required and must be completed before publishing the certification. 8.6. Certifying 64k kernel The 64k page size kernel is a useful option for large datasets on ARM platforms. It is suitable for memory-intensive workloads as it has significant gains in overall system performance, especially in large databases, HPC, and high network performance. From RHEL 9.2 onwards, ARM architecture uses the 64k page size kernel as optional and the 4k kernel as default. To certify the 64k page size kernel, you need to first complete RHEL 9 certification using the default 4k kernel and then you can conduct a second certification with the 64k kernel. After successful completion of the second certification, a Knowledgebase article will be attached to the 4k size kernel certification indicating support for the 64k page size with instructions on how to use the 64k kernel. Note You must create a supplemental certification to certify the 64k kernel. 8.7. Downloading guest images during test execution Procedure Verify if the guest images are available locally on the system. If yes, test execution will start. If the guest images are not available locally, the test will try downloading them from the pre-configured test server. If both local availability and the test server download fail, the test will establish a connection with the CWE API to obtain a pre-signed S3 URL for AWS. The guest image will then be downloaded from AWS by using the provided URL. If the download from AWS also encounters an issue, the test will use CWE API to directly stream and download the guest images. If all attempts to acquire the guest images are unsuccessful, the entire test is marked as FAIL. Note The above procedure is applicable for rhcert version 8.66 and later. If the FV image download fails during the test run, follow these steps: Download the files from the Red Hat Certification portal . After downloading the files, move them to /var/lib/libvirt/images directory on the host under test. To manually extract the files use the command tar xmvfj <tarred file name> . After the file is extracted, rename it by using the command mv <extracted file> <image file name> . For example - mv hwcertData-20211116.img hwcertData.img . 
Refer to the following file name mappings: the tarred file hwcertData.img.tar.bz2 extracts to the image file hwcertData.img, hwcert-x86_64.img.tar.bz2 to hwcert-x86_64.img, and rhel-kvm-rt-image.qcow2.tar.bz2 to rhel-kvm-rt-image.qcow2.
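As a worked example of the extraction steps above, using the hwcertData file and the dated file name from the procedure (the actual extracted name on your system may differ):
cd /var/lib/libvirt/images
tar xmvfj hwcertData.img.tar.bz2
# Rename the extracted file to the expected image file name.
mv hwcertData-20211116.img hwcertData.img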
| null |
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_test_suite_user_guide/assembly-certification-workflow_hw-test-suite-configure-hosts-run-tests-use-cli
|
Providing feedback on Red Hat Directory Server
|
Providing feedback on Red Hat Directory Server We appreciate your input on our documentation and products. Please let us know how we could make it better. To do so: For submitting feedback on the Red Hat Directory Server documentation through Jira (account required): Go to the Red Hat Issue Tracker . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. For submitting feedback on the Red Hat Directory Server product through Jira (account required): Go to the Red Hat Issue Tracker . On the Create Issue page, click . Fill in the Summary field. Select the component in the Component field. Fill in the Description field including: The version number of the selected component. Steps to reproduce the problem or your suggestion for improvement. Click Create .
| null |
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_directory_databases/proc_providing-feedback-on-red-hat-documentation_configuring-directory-databases
|
6.6 Technical Notes
|
6.6 Technical Notes Red Hat Enterprise Linux 6 Detailed notes on the changes implemented in Red Hat Enterprise Linux 6.6 Edition 6 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/index
|
Appendix A. Troubleshooting a Self-hosted Engine Deployment
|
Appendix A. Troubleshooting a Self-hosted Engine Deployment To confirm whether the self-hosted engine has already been deployed, run hosted-engine --check-deployed . An error will only be displayed if the self-hosted engine has not been deployed. A.1. Troubleshooting the Manager Virtual Machine Check the status of the Manager virtual machine by running hosted-engine --vm-status . Note Any changes made to the Manager virtual machine will take about 20 seconds before they are reflected in the status command output. Depending on the Engine status in the output, see the following suggestions to find or fix the issue. Engine status: "health": "good", "vm": "up" "detail": "up" If the Manager virtual machine is up and running as normal, you will see the following output: If the output is normal but you cannot connect to the Manager, check the network connection. Engine status: "reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "up" If the health is bad and the vm is up , the HA services will try to restart the Manager virtual machine to get the Manager back. If it does not succeed within a few minutes, enable the global maintenance mode from the command line so that the hosts are no longer managed by the HA services. Connect to the console. When prompted, enter the operating system's root password. For more console options, see How to access Hosted Engine VM console from RHEV-H host? . Ensure that the Manager virtual machine's operating system is running by logging in. Check the status of the ovirt-engine service: Check the following logs: /var/log/messages , /var/log/ovirt-engine/engine.log, and /var/log/ovirt-engine/server.log . After fixing the issue, reboot the Manager virtual machine manually from one of the self-hosted engine nodes: Note When the self-hosted engine nodes are in global maintenance mode, the Manager virtual machine must be rebooted manually. If you try to reboot the Manager virtual machine by sending a reboot command from the command line, the Manager virtual machine will remain powered off. This is by design. On the Manager virtual machine, verify that the ovirt-engine service is up and running: After ensuring the Manager virtual machine is up and running, close the console session and disable the maintenance mode to enable the HA services again: Engine status: "vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host" Note This message is expected on a host that is not currently running the Manager virtual machine. If you have more than one host in your environment, ensure that another host is not currently trying to restart the Manager virtual machine. Ensure that you are not in global maintenance mode. Check the ovirt-ha-agent logs in /var/log/ovirt-hosted-engine-ha/agent.log . Try to reboot the Manager virtual machine manually from one of the self-hosted engine nodes: Engine status: "vm": "unknown", "health": "unknown", "detail": "unknown", "reason": "failed to getVmStats" This status means that ovirt-ha-agent failed to get the virtual machine's details from VDSM. Check the VDSM logs in /var/log/vdsm/vdsm.log . Check the ovirt-ha-agent logs in /var/log/ovirt-hosted-engine-ha/agent.log . Engine status: The self-hosted engine's configuration has not been retrieved from shared storage If you receive the status The hosted engine configuration has not been retrieved from shared storage. 
Please ensure that ovirt-ha-agent is running and the storage server is reachable, there is an issue with the ovirt-ha-agent service, or with the storage, or both. Check the status of ovirt-ha-agent on the host: If the ovirt-ha-agent is down, restart it: Check the ovirt-ha-agent logs in /var/log/ovirt-hosted-engine-ha/agent.log . Check that you can ping the shared storage. Check whether the shared storage is mounted. Additional Troubleshooting Commands Important Contact the Red Hat Support Team if you feel you need to run any of these commands to troubleshoot your self-hosted engine environment. hosted-engine --reinitialize-lockspace : This command is used when the sanlock lockspace is broken. Ensure that the global maintenance mode is enabled and that the Manager virtual machine is stopped before reinitializing the sanlock lockspaces. hosted-engine --clean-metadata : Remove the metadata for a host's agent from the global status database. This makes all other hosts forget about this host. Ensure that the target host is down and that the global maintenance mode is enabled. hosted-engine --check-liveliness : This command checks the liveliness page of the ovirt-engine service. You can also check by connecting to https:// engine-fqdn /ovirt-engine/services/health/ in a web browser. hosted-engine --connect-storage : This command instructs VDSM to prepare all storage connections needed for the host and the Manager virtual machine. This is normally run in the back-end during the self-hosted engine deployment. Ensure that the global maintenance mode is enabled if you need to run this command to troubleshoot storage issues. A.2. Cleaning Up a Failed Self-hosted Engine Deployment If a self-hosted engine deployment was interrupted, subsequent deployments will fail with an error message. The error will differ depending on the stage in which the deployment failed. If you receive an error message, you can run the cleanup script on the deployment host to clean up the failed deployment. However, it is best to reinstall your base operating system and start the deployment from the beginning. Note The cleanup script has the following limitations: A disruption in the network connection while the script is running might cause the script to fail to remove the management bridge or to recreate a working network configuration. The script is not designed to clean up any shared storage device used during a failed deployment. You need to clean the shared storage device before you can reuse it in a subsequent deployment. Procedure Run /usr/sbin/ovirt-hosted-engine-cleanup and select y to remove anything left over from the failed self-hosted engine deployment. Define whether to reinstall on the same shared storage device or select a different shared storage device. To deploy the installation on the same storage domain, clean up the storage domain by running the following command in the appropriate directory on the server for NFS, Gluster, PosixFS or local storage domains: # rm -rf storage_location /* For iSCSI or Fibre Channel Protocol (FCP) storage, see How to Clean Up a Failed Self-hosted Engine Deployment? for information on how to clean up the storage. Reboot the self-hosted engine host or select a different shared storage device. Note The reboot is needed to make sure all the connections to the storage are cleaned before the next attempt. Redeploy the self-hosted engine.
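As a quick sketch combining the checks described in this appendix, the following commands report the self-hosted engine status and verify the Manager liveliness page from a self-hosted engine node; engine.example.com is a placeholder for your Manager FQDN, and -k only skips certificate validation for this check:
hosted-engine --vm-status
hosted-engine --check-liveliness
# Query the liveliness page directly over HTTPS.
curl -k https://engine.example.com/ovirt-engine/services/health/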
|
[
"--== Host 1 status ==-- Status up-to-date : True Hostname : hypervisor.example.com Host ID : 1 Engine status : {\"health\": \"good\", \"vm\": \"up\", \"detail\": \"up\"} Score : 3400 stopped : False Local maintenance : False crc32 : 99e57eba Host timestamp : 248542",
"hosted-engine --set-maintenance --mode=global",
"hosted-engine --console",
"systemctl status -l ovirt-engine journalctl -u ovirt-engine",
"hosted-engine --vm-shutdown hosted-engine --vm-start",
"systemctl status ovirt-engine.service",
"hosted-engine --set-maintenance --mode=none",
"hosted-engine --vm-shutdown hosted-engine --vm-start",
"systemctl status -l ovirt-ha-agent journalctl -u ovirt-ha-agent",
"systemctl start ovirt-ha-agent",
"/usr/sbin/ovirt-hosted-engine-cleanup This will de-configure the host to run ovirt-hosted-engine-setup from scratch. Caution, this operation should be used with care. Are you sure you want to proceed? [y/n]",
"rm -rf storage_location /*"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/troubleshooting_she_she_cli_deploy
|
Chapter 5. Examples and best practices
|
Chapter 5. Examples and best practices 5.1. Testing the environment Perform the following steps to check if everything is working as expected. Procedure Execute a takeover Change the score of the master nodes to trigger a failover. In this example, the SAPHana clone resource is rsc_SAPHana_HDB_HDB00-clone , and saphdb3 is one node in the second availability zone: This constraint should be removed again with: Otherwise, pacemaker tries to start HANA on saphdb1 . Fence a node You can fence a node with the command: Depending on the other fencing options and the infrastructure used, this node will stay down or come back. kill HANA You can also kill the database to check if the SAP resource agent is working. As sidadm , you can call: Pacemaker detects this issue and resolves it. 5.2. Useful aliases 5.2.1. Aliases for user root These aliases are added to ~/.bashrc : 5.2.2. Aliases for the SIDadm user These aliases are added to ~/.customer.sh : 5.3. Monitoring failover example There are many ways to force a takeover. This example forces a takeover without shutting off a node. The SAP resource agents work with scores to decide on which node the SAPHana clone resource will be promoted. The current status can be displayed using this command: In this example, the SAPHana clone resource is promoted on saphdb1 . So the primary database runs on saphdb1 . The score of this node is 150 and you can adjust the score of the secondary saphdb3 to force pacemaker to take over the database to the secondary node.
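A minimal sketch of the takeover test described above, built from the constraint commands in this chapter; wait until the status output shows saphdb3 as PROMOTED before removing the temporary constraint, and verify the constraint ID on your cluster with pcs constraint --full, because it may differ from the one shown here:
pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \#uname eq saphdb3
# Watch the scores and clone states until saphdb3 shows PROMOTED.
watch -n 5 'pcs status --full | egrep -e "Node|master|clone_state|roles"'
# Remove the temporary constraint again; this ID matches the switch0 alias in this chapter.
pcs constraint remove location-rsc_SAPHana_HDB_HDB00-clone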
|
[
"pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \\#uname eq saphdb3",
"pcs constraint remove rsc_SAPHana_HDB_HDB00",
"pcs stonith fence <nodename>",
"sidadm% HDB kill",
"export ListInstances=USD(/usr/sap/hostctrl/exe/saphostctrl -function ListInstances| head -1 ) export sid=USD(echo \"USDListInstances\" |cut -d \" \" -f 5| tr [A-Z] [a-z]) export SID=USD(echo USDsid | tr [a-z] [A-Z]) export Instance=USD(echo \"USDListInstances\" |cut -d \" \" -f 7 ) alias crmm='watch -n 1 crm_mon -1Arf' alias crmv='watch -n 1 /usr/local/bin/crmmv' alias clean=/usr/local/bin/cleanup alias cglo='su - USD{sid}adm -c cglo' alias cdh='cd /usr/lib/ocf/resource.d/heartbeat' alias vhdbinfo=\"vim /usr/sap/USD{SID}/home/hdbinfo;dcp /usr/sap/USD{SID}/home/hdbinfo\" alias gtr='su - USD{sid}adm -c gtr' alias hdb='su - USD{sid}adm -c hdb' alias hdbi='su - USD{sid}adm -c hdbi' alias hgrep='history | grep USD1' alias hri='su - USD{sid}adm -c hri' alias hris='su - USD{sid}adm -c hris' alias killnode=\"echo 'b' > /proc/sysrq-trigger\" alias lhc='su - USD{sid}adm -c lhc' alias python='/usr/sap/USD{SID}/HDBUSD{Instance}/exe/Python/bin/python' alias pss=\"watch 'pcs status --full | egrep -e Node\\|master\\|clone_state\\|roles'\" alias srstate='su - USD{sid}adm -c srstate' alias shr='watch -n 5 \"SAPHanaSR-monitor --sid=USD{SID}\"' alias sgsi='su - USD{sid}adm -c sgsi' alias spl='su - USD{sid}adm -c spl' alias srs='su - USD{sid}adm -c srs' alias sapstart='su - USD{sid}adm -c sapstart' alias sapstop='su - USD{sid}adm -c sapstop' alias sapmode='df -h /;su - USD{sid}adm -c sapmode' alias smm='pcs property set maintenance-mode=true' alias usmm='pcs property set maintenance-mode=false' alias tma='tmux attach -t 0:' alias tmkill='tmux killw -a' alias tm='tail -100f /var/log/messages |grep -v systemd' alias tms='tail -1000f /var/log/messages | egrep -s \"Setting master-rsc_SAPHana_USD{SID}_HDBUSD{Instance}|sr_register\\ *|WAITING4LPA\\|EXCLUDE as posible takeover node|SAPHanaSR|failed|USD{HOSTNAME} |PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED\"' alias tmss='tail -1000f /var/log/messages | grep -v systemd | egrep -s \"secondary with sync status|Settingmaster-rsc_SAPHana_USD{SID}_HDBUSD{Instance} |sr_register|WAITING4LPA|EXCLUDE as posible takeover node|SAPHanaSR |failed|USD{HOSTNAME}|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED\"' alias tmm='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USD{SID}_HDBUSD{Instance}|sr_register |WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|W aitforStopped |FAILED|LPT|SOK|SFAIL|SAPHanaSR-mon\"| grep -v systemd' alias tmsl='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USD{SID}_HDBUSD{Instance}|sr_register|WAITING4LPA |PROMOTED|DEMOTED|UNDEFINED|ERROR|Warning|mast er_walk|SWAIT |WaitforStopped|FAILED|LPT|SOK|SFAIL|SAPHanaSR-mon\"' alias vih='vim /usr/lib/ocf/resource.d/heartbeat/SAPHanaStart' alias switch1='pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \\#uname eq saphdb1' alias switch3='pcs constraint location rsc_SAPHana_HDB_HDB00-clone rule role=master score=100 \\#uname eq saphdb3' alias switch0='pcs constraint remove location-rsc_SAPHana_HDB_HDB00-clone alias switchl='pcs constraint location | grep pcs resource | grep promotable | awk \"{ print USD4 }\"` | grep Constraint| awk \"{ print USDNF }\"' alias scl='pcs constraint location |grep \" Constraint\"'",
"alias tm='tail -100f /var/log/messages |grep -v systemd' alias tms='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USDSAPSYSTEMNAME_HDBUSD{TINSTANCE}|sr_register |WAITING4LPA|EXCLUDE as posible takeover node|SAPHanaSR|failed |USD{HOSTNAME}|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED\"' alias tmsl='tail -1000f /var/log/messages | egrep -s \"Settingmaster-rsc_SAPHana_USDSAPSYSTEMNAME_HDBUSD{TINSTANCE}|sr_register |WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT\"' alias sapstart='sapcontrol -nr USD{TINSTANCE} -function StartSystem HDB;hdbi' alias sapstop='sapcontrol -nr USD{TINSTANCE} -function StopSystem HDB;hdbi' alias sapmode='watch -n 5 \"hdbnsutil -sr_state --sapcontrol=1 |grep site.\\*Mode\"' alias sapprim='hdbnsutil -sr_stateConfiguration| grep -i primary' alias sgsi='watch sapcontrol -nr USD{TINSTANCE} -function GetSystemInstanceList' alias spl='watch sapcontrol -nr USD{TINSTANCE} -function GetProcessList' alias splh='watch \"sapcontrol -nr USD{TINSTANCE} -function GetProcessList | grep hdbdaemon\"' alias srs=\"watch -n 5 'python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py * *; echo Status \\USD?'\" alias cdb=\"cd /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/backup\" alias srstate='watch -n 10 hdbnsutil -sr_state' alias hdb='watch -n 5 \"sapcontrol -nr USD{TINSTANCE} -function GetProcessList | egrep -s hdbdaemon\\|hdbnameserver\\|hdbindexserver \"' alias hdbi='watch -n 5 \"sapcontrol -nr USD{TINSTANCE} -function GetProcessList | egrep -s hdbdaemon\\|hdbnameserver\\|hdbindexserver ;sapcontrol -nr USD{TINSTANCE} -function GetSystemInstanceList \"' alias hgrep='history | grep USD1' alias vglo=\"vim /usr/sap/USDSAPSYSTEMNAME/SYS/global/hdb/custom/config/global.ini\" alias vgloh=\"vim /hana/shared/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/USD{HOSTNAME}/global.ini\" alias hri='hdbcons -e hdbindexserver \"replication info\"' alias hris='hdbcons -e hdbindexserver \"replication info\" | egrep -e \"SiteID|ReplicationStatus_\"' alias gtr='watch -n 10 /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/Python/bin/python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/getTakeoverRecommendation.py --sapcontrol=1' alias lhc='/usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/Python/bin/python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/landscapeHostConfiguration.py ;echo USD?' alias reg1='hdbnsutil -sr_register --remoteHost=hana07 -remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --name=DC3 --remoteName=DC1 --operationMode=logreplay --online' alias reg2='hdbnsutil -sr_register --remoteHost=hana08 -remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --name=DC3 --remoteName=DC2 --operationMode=logreplay --online' alias reg3='hdbnsutil -sr_register --remoteHost=hana09 -remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --name=DC3 --remoteName=DC3 --operationMode=logreplay --online' PS1=\"\\[\\033[m\\][\\[\\e[1;33m\\]\\u\\[\\e[1;33m\\]\\[\\033[m\\]@\\[\\e[1;36m\\]\\h\\[\\033[m\\]: \\[\\e[0m\\]\\[\\e[1;32m\\]\\W\\[\\e[0m\\]]# \"",
"alias pss='pcs status --full | egrep -e \"Node|master|clone_state|roles\"' [root@saphdb2:~]# pss Node List: Node Attributes: * Node: saphdb1 (1): * hana_hdb_clone_state : PROMOTED * hana_hdb_roles : master1:master:worker:master * master-rsc_SAPHana_HDB_HDB00 : 150 * Node: saphdb2 (2): * hana_hdb_clone_state : DEMOTED * hana_hdb_roles : slave:slave:worker:slave * master-rsc_SAPHana_HDB_HDB00 : -10000 * Node: saphdb3 (3): * hana_hdb_clone_state : DEMOTED * hana_hdb_roles : master1:master:worker:master * master-rsc_SAPHana_HDB_HDB00 : 100 * Node: saphdb4 (4): * hana_hdb_clone_state : DEMOTED * hana_hdb_roles : slave:slave:worker:slave * master-rsc_SAPHana_HDB_HDB00 : -12200"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/asmb_ex_automating-sap-hana-scale-out-v9
|
Chapter 87. Build schema reference
|
Chapter 87. Build schema reference Used in: KafkaConnectSpec Full list of Build schema properties Configures additional connectors for Kafka Connect deployments. 87.1. output To build new container images with additional connector plugins, Streams for Apache Kafka requires a container registry where the images can be pushed to, stored, and pulled from. Streams for Apache Kafka does not run its own container registry, so a registry must be provided. Streams for Apache Kafka supports private container registries as well as public registries such as Quay or Docker Hub . The container registry is configured in the .spec.build.output section of the KafkaConnect custom resource. The output configuration, which is required, supports two types: docker and imagestream . Using Docker registry To use a Docker registry, you have to specify the type as docker , and the image field with the full name of the new container image. The full name must include: The address of the registry Port number (if listening on a non-standard port) The tag of the new container image Example valid container image names: docker.io/my-org/my-image/my-tag quay.io/my-org/my-image/my-tag image-registry.image-registry.svc:5000/myproject/kafka-connect-build:latest Each Kafka Connect deployment must use a separate image, which can mean different tags at the most basic level. If the registry requires authentication, use the pushSecret to set a name of the Secret with the registry credentials. For the Secret, use the kubernetes.io/dockerconfigjson type and a .dockerconfigjson file to contain the Docker credentials. For more information on pulling an image from a private registry, see Create a Secret based on existing Docker credentials . Example output configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: type: docker 1 image: my-registry.io/my-org/my-connect-cluster:latest 2 pushSecret: my-registry-credentials 3 #... 1 (Required) Type of output used by Streams for Apache Kafka. 2 (Required) Full name of the image used, including the repository and tag. 3 (Optional) Name of the secret with the container registry credentials. Using OpenShift ImageStream Instead of Docker, you can use OpenShift ImageStream to store a new container image. The ImageStream has to be created manually before deploying Kafka Connect. To use ImageStream, set the type to imagestream , and use the image property to specify the name of the ImageStream and the tag used. For example, my-connect-image-stream:latest . Example output configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: type: imagestream 1 image: my-connect-build:latest 2 #... 1 (Required) Type of output used by Streams for Apache Kafka. 2 (Required) Name of the ImageStream and tag. 87.2. plugins Connector plugins are a set of files that define the implementation required to connect to certain types of external system. The connector plugins required for a container image must be configured using the .spec.build.plugins property of the KafkaConnect custom resource. Each connector plugin must have a name which is unique within the Kafka Connect deployment. Additionally, the plugin artifacts must be listed. These artifacts are downloaded by Streams for Apache Kafka, added to the new container image, and used in the Kafka Connect deployment. The connector plugin artifacts can also include additional components, such as (de)serializers. 
Each connector plugin is downloaded into a separate directory so that the different connectors and their dependencies are properly sandboxed . Each plugin must be configured with at least one artifact . Example plugins configuration with two connector plugins apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: 1 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #... 1 (Required) List of connector plugins and their artifacts. Streams for Apache Kafka supports the following types of artifacts: JAR files, which are downloaded and used directly TGZ archives, which are downloaded and unpacked ZIP archives, which are downloaded and unpacked Maven artifacts, which use Maven coordinates Other artifacts, which are downloaded and used directly Important Streams for Apache Kafka does not perform any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually, and configure the checksum verification to make sure the same artifact is used in the automated build and in the Kafka Connect deployment. Using JAR artifacts JAR artifacts represent a JAR file that is downloaded and added to a container image. To use a JAR artifact, set the type property to jar , and specify the download location using the url property. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, Streams for Apache Kafka will verify the checksum of the artifact while building the new container image. Example JAR artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: jar 1 url: https://my-domain.tld/my-jar.jar 2 sha512sum: 589...ab4 3 - type: jar url: https://my-domain.tld/my-jar2.jar #... 1 (Required) Type of artifact. 2 (Required) URL from which the artifact is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. Using TGZ artifacts TGZ artifacts are used to download TAR archives that have been compressed using Gzip compression. The TGZ artifact can contain the whole Kafka Connect connector, even when comprising multiple different files. The TGZ artifact is automatically downloaded and unpacked by Streams for Apache Kafka while building the new container image. To use TGZ artifacts, set the type property to tgz , and specify the download location using the url property. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, Streams for Apache Kafka will verify the checksum before unpacking it and building the new container image. Example TGZ artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: tgz 1 url: https://my-domain.tld/my-connector-archive.tgz 2 sha512sum: 158...jg10 3 #... 1 (Required) Type of artifact. 2 (Required) URL from which the archive is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. Using ZIP artifacts ZIP artifacts are used to download ZIP compressed archives. Use ZIP artifacts in the same way as the TGZ artifacts described in the previous section. The only difference is that you specify type: zip instead of type: tgz .
Using Maven artifacts maven artifacts are used to specify connector plugin artifacts as Maven coordinates. The Maven coordinates identify plugin artifacts and dependencies so that they can be located and fetched from a Maven repository. Note The Maven repository must be accessible for the connector build process to add the artifacts to the container image. Example Maven artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: maven 1 repository: https://mvnrepository.com 2 group: <maven_group> 3 artifact: <maven_artifact> 4 version: <maven_version_number> 5 #... 1 (Required) Type of artifact. 2 (Optional) Maven repository to download the artifacts from. If you do not specify a repository, Maven Central repository is used by default. 3 (Required) Maven group ID. 4 (Required) Maven artifact ID. 5 (Required) Maven version number. Using other artifacts other artifacts represent any kind of file that is downloaded and added to a container image. If you want to use a specific name for the artifact in the resulting container image, use the fileName field. If a file name is not specified, the file is named based on the URL hash. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, Streams for Apache Kafka will verify the checksum of the artifact while building the new container image. Example other artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: other 1 url: https://my-domain.tld/my-other-file.ext 2 sha512sum: 589...ab4 3 fileName: name-the-file.ext 4 #... 1 (Required) Type of artifact. 2 (Required) URL from which the artifact is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. 4 (Optional) The name under which the file is stored in the resulting container image. 87.3. Build schema properties Property Property type Description output DockerOutput , ImageStreamOutput Configures where the newly built image should be stored. Required. resources ResourceRequirements CPU and memory resources to reserve for the build. plugins Plugin array List of connector plugins which should be added to the Kafka Connect. Required.
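A small sketch for preparing the sha512sum values used in the plugin examples above; the URL is the placeholder from the JAR example and the local file name is arbitrary. Download the artifact you intend to list, verify it manually, and paste the resulting checksum into the plugin definition:
curl -L -o my-connector.jar https://my-domain.tld/my-jar.jar
# The first field of the output is the SHA-512 checksum for the sha512sum property.
sha512sum my-connector.jar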
|
[
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: docker 1 image: my-registry.io/my-org/my-connect-cluster:latest 2 pushSecret: my-registry-credentials 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: imagestream 1 image: my-connect-build:latest 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: 1 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: jar 1 url: https://my-domain.tld/my-jar.jar 2 sha512sum: 589...ab4 3 - type: jar url: https://my-domain.tld/my-jar2.jar #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: tgz 1 url: https://my-domain.tld/my-connector-archive.tgz 2 sha512sum: 158...jg10 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: maven 1 repository: https://mvnrepository.com 2 group: <maven_group> 3 artifact: <maven_artifact> 4 version: <maven_version_number> 5 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: other 1 url: https://my-domain.tld/my-other-file.ext 2 sha512sum: 589...ab4 3 fileName: name-the-file.ext 4 #"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-Build-reference
|
Chapter 7. Advisories related to this release
|
Chapter 7. Advisories related to this release The following advisories have been issued to document enhancements, bugfixes, and CVE fixes included in this release. RHSA-2023:7625 RHSA-2023:7626
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_2_release_notes/errata
|
Chapter 9. Object Bucket Claim
|
Chapter 9. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 9.1, "Dynamic Object Bucket Claim" Section 9.2, "Creating an Object Bucket Claim using the command line interface" Section 9.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and cannot create new buckets by default. 9.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. You can add more lines to the YAML file to automate the use of the OBC. The example below shows the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . These names are used for compatibility with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 9.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Procedure Use the command-line interface to generate the details of a new bucket and credentials.
Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . Example output: The MCG command-line interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: Example output: Run the following command to view the YAML file for the new OBC: Example output: Inside your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The configuration map and the secret have the same name as the OBC. Run the following command to view the secret: Example output: The secret gives you the S3 access credentials. Run the following command to view the configuration map: Example output: The configuration map contains the S3 endpoint information for your application. 9.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configuration map and secret. For more information about this, see Section 9.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Bucket Claims Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page: Additional Resources Chapter 9, Object Bucket Claim 9.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Bucket Claims . Click the Action menu (...) next to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . Additional Resources Chapter 9, Object Bucket Claim 9.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Buckets . Alternatively, you can also navigate to the details page of a specific OBC and click the Resource link to view the object buckets for that OBC. Select the object bucket you want to see details for. You are navigated to the Object Bucket Details page.
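If you prefer the command line to the web console, the same details are available from the OBC and ObjectBucket resources. The following is a minimal sketch; the claim name is the test21obc example from Section 9.2, and the ObjectBucket resource name follows the obc-<namespace>-<obc-name> pattern visible in that section's example output.

# List claims in the namespace and the cluster-scoped object buckets that back them.
oc get obc -n openshift-storage
oc get objectbucket

# Inspect the object bucket backing a specific claim.
oc get objectbucket obc-openshift-storage-test21obc -o yaml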
Additional Resources Chapter 9, Object Bucket Claim 9.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . Additional Resources Chapter 9, Object Bucket Claim
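Once a claim is bound, the configuration map and secret described in Section 9.1 are sufficient to reach the bucket from any S3-compatible client. The following sketch is illustrative only: it assumes the AWS CLI is installed, reads the endpoint and the Base64-encoded credentials with oc, and lists the bucket contents. The claim name and namespace are placeholders.

#!/usr/bin/env bash
# Illustrative sketch: consume an OBC from outside the cluster with the AWS CLI.
OBC_NAME=<obc-name>        # placeholder: your OBC name
OBC_NS=<obc-namespace>     # placeholder: namespace where the OBC was created

BUCKET_HOST=$(oc get cm "$OBC_NAME" -n "$OBC_NS" -o jsonpath='{.data.BUCKET_HOST}')
BUCKET_PORT=$(oc get cm "$OBC_NAME" -n "$OBC_NS" -o jsonpath='{.data.BUCKET_PORT}')
BUCKET_NAME=$(oc get cm "$OBC_NAME" -n "$OBC_NS" -o jsonpath='{.data.BUCKET_NAME}')

# The secret values are Base64-encoded, so decode them before use.
export AWS_ACCESS_KEY_ID=$(oc get secret "$OBC_NAME" -n "$OBC_NS" -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(oc get secret "$OBC_NAME" -n "$OBC_NS" -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)

# List the bucket through the endpoint taken from the configuration map.
aws s3 ls "s3://$BUCKET_NAME" --endpoint-url "https://$BUCKET_HOST:$BUCKET_PORT"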
|
[
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io",
"apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY",
"oc apply -f <yaml.file>",
"oc get cm <obc-name> -o yaml",
"oc get secret <obc_name> -o yaml",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa obc create <obc-name> -n openshift-storage",
"INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"",
"oc get obc -n openshift-storage",
"NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s",
"oc get obc test21obc -o yaml -n openshift-storage",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound",
"oc get -n openshift-storage secret test21obc -o yaml",
"Example output: apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque",
"oc get -n openshift-storage cm test21obc -o yaml",
"apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_hybrid_and_multicloud_resources/object-bucket-claim
|
Chapter 2. Configuring the OpenShift Container Platform TLS component for builds
|
Chapter 2. Configuring the OpenShift Container Platform TLS component for builds The tls component of the QuayRegistry custom resource definition (CRD) allows you to control whether SSL/TLS is managed by the Red Hat Quay Operator or self-managed. In its current state, Red Hat Quay does not support the builds feature, or the builder workers, when the tls component is managed by the Red Hat Quay Operator. When setting the tls component to unmanaged , you must supply your own ssl.cert and ssl.key files. Additionally, if you want your cluster to support builders , or the worker nodes that are responsible for building images, you must add both the Quay route and the builder route name to the SAN list in the certificate. Alternatively, you can use a wildcard certificate. The following procedure shows you how to add the builder route. Prerequisites You have set the tls component to unmanaged and uploaded custom SSL/TLS certificates to the Red Hat Quay Operator. For more information, see SSL and TLS for Red Hat Quay . Procedure In the configuration file that defines your SSL/TLS certificate parameters, for example, openssl.cnf , add the following information to the certificate's Subject Alternative Name (SAN) field. For example: # ... [alt_names] <quayregistry-name>-quay-builder-<namespace>.<domain-name>:443 # ... For example: # ... [alt_names] example-registry-quay-builder-quay-enterprise.apps.cluster-new.gcp.quaydev.org:443 # ...
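If you are issuing the certificate yourself, the following sketch shows one way to create a self-signed certificate whose SAN covers both the Quay route and the builder route, and to verify the result. The host names, file names, and the use of a self-signed certificate are illustrative assumptions; in production you would typically build a CSR from your openssl.cnf and have it signed by your certificate authority.

# Illustrative sketch: self-signed certificate with both routes in the SAN (requires OpenSSL 1.1.1 or later).
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout ssl.key -out ssl.cert \
  -subj "/CN=example-registry-quay-quay-enterprise.apps.cluster-new.gcp.quaydev.org" \
  -addext "subjectAltName=DNS:example-registry-quay-quay-enterprise.apps.cluster-new.gcp.quaydev.org,DNS:example-registry-quay-builder-quay-enterprise.apps.cluster-new.gcp.quaydev.org"

# Confirm that both route names appear in the SAN field.
openssl x509 -in ssl.cert -noout -ext subjectAltName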
|
[
"[alt_names] <quayregistry-name>-quay-builder-<namespace>.<domain-name>:443",
"[alt_names] example-registry-quay-builder-quay-enterprise.apps.cluster-new.gcp.quaydev.org:443"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/builders_and_image_automation/configuring-openshift-tls-component-builds
|
Chapter 159. KafkaNodePool schema reference
|
Chapter 159. KafkaNodePool schema reference Property Property type Description spec KafkaNodePoolSpec The specification of the KafkaNodePool. status KafkaNodePoolStatus The status of the KafkaNodePool.
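For orientation, a minimal KafkaNodePool resource might look like the following sketch. The pool name, cluster label, replica count, and storage sizing are illustrative assumptions; consult the KafkaNodePoolSpec schema referenced above for the full set of fields.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a                     # illustrative pool name
  labels:
    strimzi.io/cluster: my-cluster # associates the pool with an existing Kafka cluster (assumed name)
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false

A node pool is always tied to an existing Kafka cluster through the strimzi.io/cluster label, so the Kafka resource is expected to exist before the pool is created.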
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkanodepool-reference
|
Chapter 6. Role [authorization.openshift.io/v1]
|
Chapter 6. Role [authorization.openshift.io/v1] Description Role is a logical grouping of PolicyRules that can be referenced as a unit by RoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required rules 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta rules array Rules holds all the PolicyRules for this Role rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 6.1.1. .rules Description Rules holds all the PolicyRules for this Role Type array 6.1.2. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path This name is intentionally different than the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 6.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/roles GET : list objects of kind Role /apis/authorization.openshift.io/v1/namespaces/{namespace}/roles GET : list objects of kind Role POST : create a Role /apis/authorization.openshift.io/v1/namespaces/{namespace}/roles/{name} DELETE : delete a Role GET : read the specified Role PATCH : partially update the specified Role PUT : replace the specified Role 6.2.1. /apis/authorization.openshift.io/v1/roles Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Role Table 6.2. HTTP responses HTTP code Reponse body 200 - OK RoleList schema 401 - Unauthorized Empty 6.2.2. /apis/authorization.openshift.io/v1/namespaces/{namespace}/roles Table 6.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description list objects of kind Role Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK RoleList schema 401 - Unauthorized Empty HTTP method POST Description create a Role Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.8. 
Body parameters Parameter Type Description body Role schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Role schema 201 - Created Role schema 202 - Accepted Role schema 401 - Unauthorized Empty 6.2.3. /apis/authorization.openshift.io/v1/namespaces/{namespace}/roles/{name} Table 6.10. Global path parameters Parameter Type Description name string name of the Role namespace string object name and auth scope, such as for teams and projects Table 6.11. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Role Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.13. Body parameters Parameter Type Description body DeleteOptions schema Table 6.14. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Role Table 6.15. HTTP responses HTTP code Reponse body 200 - OK Role schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Role Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK Role schema 201 - Created Role schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Role Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body Role schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK Role schema 201 - Created Role schema 401 - Unauthorized Empty
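To make the schema concrete, the following is a minimal sketch of a Role object that the POST endpoint above would accept. The name, namespace, and rule contents are illustrative assumptions; the rule fields match the PolicyRule properties described earlier (verbs and resources are required).

apiVersion: authorization.openshift.io/v1
kind: Role
metadata:
  name: pod-reader        # illustrative name
  namespace: my-project   # illustrative namespace
rules:
  - apiGroups:
      - ""                # core API group
    resources:
      - pods
    verbs:
      - get
      - list
      - watch

Creating this object with oc apply -f role.yaml exercises the POST /apis/authorization.openshift.io/v1/namespaces/{namespace}/roles endpoint, and oc get role pod-reader -n my-project -o yaml exercises the corresponding GET endpoint.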
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/role_apis/role-authorization-openshift-io-v1
|
Appendix A. Google Cloud Storage configuration
|
Appendix A. Google Cloud Storage configuration To configure the Block Storage service (cinder) to use Google Cloud Storage as a backup back end, complete the following procedures: Create and download the service account credentials of your Google account: Section A.1, "Creating the GCS credentials file" Section A.2, "Creating cinder-backup-gcs.yaml " Create an environment file to map the Block Storage settings that you require: Section A.3, "Creating the environment file with your Google Cloud settings" Re-deploy the overcloud with the environment file that you created: Section A.4, "Deploying the overcloud" Important Cinder backup with Google Cloud Storage is being deprecated and support will be removed in a future major release. Prerequisites You have the username and password of an account with elevated privileges. You can use the stack user account that is created to deploy the overcloud. For more information, see the Director Installation and Usage guide. You have a Google account with access to Google Cloud Platform. The Block Storage service uses this account to access and use Google Cloud to store backups. A.1. Creating the GCS credentials file The Block Storage service (cinder) requires your Google credentials to access and use Google Cloud for backups. You can provide these credentials to the Block Storage service by creating a service account key. Procedure Log in to the Google developer console ( http://console.developers.google.com ) with your Google account. Click the Credentials tab and select Service account key from the Create credentials drop-down menu. In the Create service account key screen, select the service account that you want the Block Storage service to use from the Service account drop-down menu. In the same screen, select JSON from the Key type section and click Create . The browser will then download the key to its default download location. Open the file and note the value of the project_id parameter: Save a copy of the GCS JSON credentials to /home/stack/templates/Cloud-Backup.json . Important Name the file Cloud-Backup.json and do not change the file name. This JSON file must be in the same directory location as the cinder-backup-gcs.yaml file that you create as part of the procedure in Section A.2, "Creating cinder-backup-gcs.yaml " . A.2. Creating cinder-backup-gcs.yaml Using the example file provided, create the cinder-backup-gcs.yaml file. Note The white space and format used in this example (and in your file) are critical. If the white space is changed, then the file might not function as expected. Procedure Copy the text below and paste it into the new file. Do not make any modifications to the file contents. Save the file as /home/stack/templates/cinder-backup-gcs.yaml . A.3. Creating the environment file with your Google Cloud settings Create the environment file to contain the settings that you want to apply to the Block Storage service (cinder). In this case, the environment file configures the Block Storage service to store volume backups to Google Cloud. For more information about environment files, see the Director Installation and Usage guide. Use the following example environment file and update the backup_gcs_project_id with the project ID that is listed in the Cloud-Backup.json file. You can also change the backup_gcs_bucket_location value from us to a location that is closer to you.
For a list of configuration options for the Google Cloud Backup Storage backup back end, see Table A.1, "Google Cloud Storage backup back end configuration options" . Procedure Copy the environment file example below. Retain the white space usage. Paste the content into a new file: /home/stack/templates/cinder-backup-settings.yaml . Change the value for backup_gcs_project_id from cloud-backup-1370 to the project ID listed in the Cloud-Backup.json file. Save the file. Environment file example Define each setting in the environment file. Use Table A.1, "Google Cloud Storage backup back end configuration options" to select the available configuration options. Table A.1. Google Cloud Storage backup back end configuration options PARAM Default CONFIG Description backup_gcs_project_id Required. The project ID of the service account that you are using and that is included in the project_id of the service account key from Section A.1, "Creating the GCS credentials file" . backup_gcs_credential_file The absolute path to the service account key file that you created in Section A.1, "Creating the GCS credentials file" . backup_gcs_bucket The GCS bucket, or object storage repository, that you want to use, which might or might not exist. If you specify a non-existent bucket, the Google Cloud Storage backup driver creates one and assigns it the name that you specify here. For more information, see Buckets and Bucket name requirements . backup_gcs_bucket_location us The location of the GCS bucket. This value is used only if you specify a non-existent bucket in backup_gcs_bucket ; in which case, the Google Cloud Storage backup driver specifies this as the GCS bucket location. backup_gcs_object_size 52428800 The size, in bytes, of GCS backup objects. backup_gcs_block_size 32768 The size, in bytes, that changes are tracked for incremental backups. This value must be a multiple of the backup_gcs_object_size value. backup_gcs_user_agent gcscinder The HTTP user-agent string for the GCS API. backup_gcs_reader_chunk_size 2097152 GCS objects are downloaded in chunks of this size, in bytes. backup_gcs_writer_chunk_size 2097152 GCS objects are uploaded in chunks of this size, in bytes. To upload files as a single chunk instead, use the value -1. backup_gcs_num_retries 3 Number of retries to attempt. backup_gcs_storage_class NEARLINE Storage class of the GCS bucket. This value is used only if you specify a non-existent bucket in backup_gcs_bucket ; in which case, the Google Cloud Storage backup driver specifies this as the GCS bucket storage class. For more information, see Storage Classes . backup_gcs_retry_error_codes 429 List of GCS error codes. backup_gcs_enable_progress_timer True Boolean to enable or disable the timer for sending periodic progress notifications to the Telemetry service (ceilometer) during volume backups. This is enabled by default (True). Warning When you create new buckets, Google Cloud Storage charges based on the storage class that you choose ( backup_gcs_storage_class ). The default NEARLINE class is appropriate for backup services. Warning You cannot edit the location or class of a bucket after you create it. For more information, see Managing a bucket's storage class or location . A.4. Deploying the overcloud When you have created the environment file file in /home/stack/templates/ , deploy the overcloud then restart the cinder-backup service: Procedure Log in as the stack user. 
Deploy the configuration: Important If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. Restart the cinder-backup service after the deployment finishes. For more information, see the Including Environment Files in Overcloud Creation section of the Director Installation and Usage Guide and the Environment Files section of the Advanced Overcloud Customization Guide .
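After the stack update completes and the cinder-backup service has restarted, it is worth confirming that backups land in the GCS bucket. The following sketch is illustrative only: it assumes you have sourced the overcloud credentials file (for example, overcloudrc) and that a test volume already exists; the volume name is a placeholder.

# Illustrative verification sketch: create and inspect a test backup.
source ~/overcloudrc
openstack volume backup create --name test-gcs-backup <volume-name-or-id>
openstack volume backup list
# Once the backup reaches the "available" state, the backup objects should be
# visible in the cinder-backup-gcs bucket in the Google Cloud Storage console.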
|
[
"{ \"type\": \"service_account\", \"project_id\": \"*cloud-backup-1370*\",",
"heat_template_version: rocky description: > Post-deployment for configuration cinder-backup to GCS parameters: servers: type: json DeployIdentifier: type: string resources: CinderBackupGcsExtraConfig: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/bash GCS_FILE=/var/lib/config-data/puppet-generated/cinder/etc/cinder/Cloud-Backup.json HOSTNAME=USD(hostname -s) for NODE in USD(hiera -c /etc/puppet/hiera.yaml cinder_backup_short_node_names | tr -d '[]\",'); do if [ USDNODE == USDHOSTNAME ]; then cat <<EOF > USDGCS_FILE GCS_JSON_DATA EOF chmod 0640 USDGCS_FILE chown root:42407 USDGCS_FILE fi done params: GCS_JSON_DATA: {get_file: Cloud-Backup.json} CinderBackupGcsDeployment: type: OS::Heat::SoftwareDeploymentGroup properties: servers: {get_param: servers} config: {get_resource: CinderBackupGcsExtraConfig} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier}",
"resource_registry: OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-backup-pacemaker-puppet.yaml # For non-pcmk managed implementation # OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-backup-container-puppet.yaml OS::TripleO::NodeExtraConfigPost: /home/stack/templates/cinder-backup-gcs.yaml parameter_defaults: CinderBackupBackend: swift ExtraConfig: cinder::backup::swift::backup_driver: cinder.backup.drivers.gcs.GoogleBackupDriver cinder::config::cinder_config: DEFAULT/backup_gcs_credential_file: value: /etc/cinder/Cloud-Backup.json DEFAULT/backup_gcs_project_id: value: cloud-backup-1370 DEFAULT/backup_gcs_bucket: value: cinder-backup-gcs DEFAULT/backup_gcs_bucket_location: value: us",
"openstack overcloud deploy --templates -e /home/stack/templates/cinder-backup-settings.yaml"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/block_storage_backup_guide/google-cloud-storage
|
Chapter 1. Preparing to install on OpenStack
|
Chapter 1. Preparing to install on OpenStack You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Choosing a method to install OpenShift Container Platform on OpenStack You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.2.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster on OpenStack with customizations : You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on OpenStack with Kuryr : You can install a customized OpenShift Container Platform cluster on RHOSP that uses Kuryr SDN. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Installing a cluster on OpenStack in a restricted network : You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 1.2.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on RHOSP infrastructure that you provision, by using one of the following methods: Installing a cluster on OpenStack on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure. By using this installation method, you can integrate your cluster with existing infrastructure and modifications. For installations on user-provisioned infrastructure, you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. You can use the provided Ansible playbooks to assist with the deployment process. Installing a cluster on OpenStack with Kuryr on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure that uses Kuryr SDN. 1.3. 
Scanning RHOSP endpoints for legacy HTTPS certificates Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. Run the following script to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the provided script to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Prerequisites On the machine where you run the script, have the following software: Bash version 4.0 or greater grep OpenStack client jq OpenSSL version 1.1.1l or greater Populate the machine with RHOSP credentials for the target cloud. Procedure Save the following script to your machine: #!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog="$(mktemp)" san="$(mktemp)" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints \ | jq -r '.[] | .Name as $name | .Endpoints[] | select(.interface=="public") | [$name, .interface, .url] | join(" ")' \ | sort \ > "$catalog" while read -r name interface url; do # Ignore HTTP if [[ ${url#"http://"} != "$url" ]]; then continue fi # Remove the schema from the URL noschema=${url#"https://"} # If the schema was not HTTPS, error if [[ "$noschema" == "$url" ]]; then echo "ERROR (unknown schema): $name $interface $url" exit 2 fi # Remove the path and only keep host and port noschema="${noschema%%/*}" host="${noschema%%:*}" port="${noschema##*:}" # Add the port if was implicit if [[ "$port" == "$host" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName \ > "$san" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ "$(grep -c "Subject Alternative Name" "$san" || true)" -gt 0 ]]; then echo "PASS: $name $interface $url" else invalid=$((invalid+1)) echo "INVALID: $name $interface $url" fi done < "$catalog" # clean up temporary files rm "$catalog" "$san" if [[ $invalid -gt 0 ]]; then echo "${invalid} legacy certificates were detected. Update your certificates to include a SAN field." exit 1 else echo "All HTTPS certificates for this cloud are valid." fi Run the script. Replace any certificates that the script reports as INVALID with certificates that contain SAN fields. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates will be rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead 1.3.1. Scanning RHOSP endpoints for legacy HTTPS certificates manually Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields.
If you do not have access to the prerequisite tools that are listed in "Scanning RHOSP endpoints for legacy HTTPS certificates", perform the following steps to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the following steps to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Procedure On a command line, run the following command to view the URL of RHOSP public endpoints: $ openstack catalog list Record the URL for each HTTPS endpoint that the command returns. For each public endpoint, note the host and the port. Tip Determine the host of an endpoint by removing the scheme, the port, and the path. For each endpoint, run the following commands to extract the SAN field of the certificate: Set a host variable: $ host=<host_name> Set a port variable: $ port=<port_number> If the URL of the endpoint does not have a port, use the value 443 . Retrieve the SAN field of the certificate: $ openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName Example output X509v3 Subject Alternative Name: DNS:your.host.example.net For each endpoint, look for output that resembles the example. If there is no output for an endpoint, the certificate of that endpoint is invalid and must be re-issued. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates are rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead
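The Tip above describes stripping the scheme, port, and path by hand; if you are comfortable with shell parameter expansion, the same extraction can be scripted. The following is a small illustrative sketch that mirrors the logic of the full script earlier in this chapter; the example URL is a placeholder.

# Illustrative sketch: derive host and port from a catalog endpoint URL.
url="https://overcloud.example.net:13000/v3"   # placeholder endpoint URL
noschema=${url#"https://"}             # drop the scheme
noschema=${noschema%%/*}               # drop the path
host=${noschema%%:*}                   # keep the host
port=${noschema##*:}                   # keep the port
[[ "$port" == "$host" ]] && port=443   # default to 443 when no port is present
echo "$host $port"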
|
[
"#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack catalog list",
"host=<host_name>",
"port=<port_number>",
"openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName",
"X509v3 Subject Alternative Name: DNS:your.host.example.net",
"x509: certificate relies on legacy Common Name field, use SANs instead"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_openstack/preparing-to-install-on-openstack
|
Chapter 12. ImageDigestMirrorSet [config.openshift.io/v1]
|
Chapter 12. ImageDigestMirrorSet [config.openshift.io/v1] Description ImageDigestMirrorSet holds cluster-wide information about how to handle registry mirror rules on using digest pull specification. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status contains the observed state of the resource. 12.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description imageDigestMirrors array imageDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using tag specification, users should configure a list of mirrors using "ImageTagMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagedigestmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a specific order of mirrors, should configure them into one list of mirrors using the expected order. imageDigestMirrors[] object ImageDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. 12.1.2. 
.spec.imageDigestMirrors Description imageDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using tag specification, users should configure a list of mirrors using "ImageTagMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagedigestmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a specific order of mirrors, should configure them into one list of mirrors using the expected order. Type array 12.1.3. .spec.imageDigestMirrors[] Description ImageDigestMirrors holds cluster-wide information about how to handle mirrors in the registries config. Type object Required source Property Type Description mirrorSourcePolicy string mirrorSourcePolicy defines the fallback policy if fails to pull image from the mirrors. If unset, the image will continue to be pulled from the the repository in the pull spec. sourcePolicy is valid configuration only when one or more mirrors are in the mirror list. mirrors array (string) mirrors is zero or more locations that may also contain the same images. No mirror will be configured if not specified. Images can be pulled from these mirrors only if they are referenced by their digests. The mirrored location is obtained by replacing the part of the input reference that matches source by the mirrors entry, e.g. for registry.redhat.io/product/repo reference, a (source, mirror) pair *.redhat.io, mirror.local/redhat causes a mirror.local/redhat/product/repo repository to be used. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. If no mirror is specified or all image pulls from the mirror list fail, the image will continue to be pulled from the repository in the pull spec unless explicitly prohibited by "mirrorSourcePolicy" Other cluster configuration, including (but not limited to) other imageDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. "mirrors" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] 
host[:port]/namespace[/namespace...]/repo for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table source string source matches the repository that users refer to, e.g. in image pull specifications. Setting source to a registry hostname, e.g. docker.io, quay.io, or registry.redhat.io, will match the image pull specification of the corresponding registry. "source" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] host[:port]/namespace[/namespace...]/repo [*.]host for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table 12.1.4. .status Description status contains the observed state of the resource. Type object 12.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/imagedigestmirrorsets DELETE : delete collection of ImageDigestMirrorSet GET : list objects of kind ImageDigestMirrorSet POST : create an ImageDigestMirrorSet /apis/config.openshift.io/v1/imagedigestmirrorsets/{name} DELETE : delete an ImageDigestMirrorSet GET : read the specified ImageDigestMirrorSet PATCH : partially update the specified ImageDigestMirrorSet PUT : replace the specified ImageDigestMirrorSet /apis/config.openshift.io/v1/imagedigestmirrorsets/{name}/status GET : read status of the specified ImageDigestMirrorSet PATCH : partially update status of the specified ImageDigestMirrorSet PUT : replace status of the specified ImageDigestMirrorSet 12.2.1. /apis/config.openshift.io/v1/imagedigestmirrorsets Table 12.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ImageDigestMirrorSet Table 12.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true.
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. 
Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageDigestMirrorSet Table 12.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.5. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSetList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageDigestMirrorSet Table 12.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.7. Body parameters Parameter Type Description body ImageDigestMirrorSet schema Table 12.8. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 201 - Created ImageDigestMirrorSet schema 202 - Accepted ImageDigestMirrorSet schema 401 - Unauthorized Empty 12.2.2. /apis/config.openshift.io/v1/imagedigestmirrorsets/{name} Table 12.9. Global path parameters Parameter Type Description name string name of the ImageDigestMirrorSet Table 12.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageDigestMirrorSet Table 12.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 12.12. Body parameters Parameter Type Description body DeleteOptions schema Table 12.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageDigestMirrorSet Table 12.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 12.15. 
HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageDigestMirrorSet Table 12.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 12.17. Body parameters Parameter Type Description body Patch schema Table 12.18. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageDigestMirrorSet Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body ImageDigestMirrorSet schema Table 12.21. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 201 - Created ImageDigestMirrorSet schema 401 - Unauthorized Empty 12.2.3. /apis/config.openshift.io/v1/imagedigestmirrorsets/{name}/status Table 12.22. Global path parameters Parameter Type Description name string name of the ImageDigestMirrorSet Table 12.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ImageDigestMirrorSet Table 12.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 12.25. HTTP responses HTTP code Reponse body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageDigestMirrorSet Table 12.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 12.27. Body parameters Parameter Type Description body Patch schema Table 12.28. 
HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageDigestMirrorSet Table 12.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.30. Body parameters Parameter Type Description body ImageDigestMirrorSet schema Table 12.31. HTTP responses HTTP code Response body 200 - OK ImageDigestMirrorSet schema 201 - Created ImageDigestMirrorSet schema 401 - Unauthorized Empty
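The following manifest is a minimal, illustrative example that ties the spec fields above together. The resource name, registry hostnames, and mirror locations are placeholder values, and the mirrorSourcePolicy value shown is the CRD option for disabling the fallback to the source registry; omit that field to keep the default fallback behavior.
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: example-digest-mirrors
spec:
  imageDigestMirrors:
    # Digest-based pulls for the source repository are attempted against
    # the mirrors first, in the order listed.
    - source: registry.example.com/team/app
      mirrors:
        - mirror.internal.example.com/team/app
        - backup-mirror.internal.example.com/team/app
      # Assumption for illustration: block the fallback to the source registry.
      mirrorSourcePolicy: NeverContactSource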
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/imagedigestmirrorset-config-openshift-io-v1
|
Chapter 3. The rustfmt formatting tool
|
Chapter 3. The rustfmt formatting tool With the rustfmt formatting tool, you can automatically format the source code of your Rust programs. You can use rustfmt either as a standalone tool or with Cargo. 3.1. Installing rustfmt Complete the following steps to install the rustfmt formatting tool. Prerequisites Rust Toolset is installed. For more information, see Installing Rust Toolset . Procedure Run the following command to install rustfmt : On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: 3.2. Using rustfmt as a standalone tool Use rustfmt as a standalone tool to format a Rust source file and all its dependencies. As an alternative, use rustfmt with the Cargo build tool. For more information, see Using rustfmt with Cargo . Prerequisites An existing Rust project. For information on how to create a Rust project, see Creating a Rust project . Procedure To format a Rust source file using rustfmt as a standalone tool, run the following command: On Red Hat Enterprise Linux 8: Replace < source_file > with the name of your source file. Alternatively, you can replace < source_file > with standard input. rustfmt then provides its output in standard output. On Red Hat Enterprise Linux 9: Replace < source_file > with the name of your source file. Alternatively, you can replace < source_file > with standard input. rustfmt then provides its output in standard output. Note By default, rustfmt modifies the affected files without displaying details or creating backups. To display details and create backups, run rustfmt with the --write-mode option. 3.3. Using rustfmt with the Cargo build tool Use the rustfmt tool with Cargo to format a Rust source file and all its dependencies. As an alternative, use rustfmt as a standalone tool. For more information, see Using rustfmt as a standalone tool . Prerequisites An existing Rust project. For information on how to create a Rust project, see Creating a Rust project . Procedure To format all source files in a Cargo code package, run the following command: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Note To change the rustfmt formatting options, create the configuration file rustfmt.toml in the project directory and add your configurations to the file. 3.4. Additional resources To display the help pages of rustfmt , run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To configure the rustfmt tool, create the rustfmt.toml configuration file in the project directory and add your configurations to the file. You can find the configuration options in the Configurations.md file. On Red Hat Enterprise Linux 8, you can find it under the following path: /usr/share/doc/rustfmt/Configurations.md On Red Hat Enterprise Linux 9, you can find it under the following path: /usr/share/doc/rustfmt/Configurations.md
|
[
"yum install rustfmt",
"dnf install rustfmt",
"rustfmt < source-file >",
"rustfmt < source-file >",
"cargo fmt",
"cargo fmt",
"rustfmt --help",
"rustfmt --help"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.71.1_toolset/assembly_the-rustfmt-formatting-tool
|
Chapter 6. Upgrading hosts to next major Red Hat Enterprise Linux release
|
Chapter 6. Upgrading hosts to the next major Red Hat Enterprise Linux release You can use a job template to upgrade your Red Hat Enterprise Linux hosts to the next major release. The following upgrade paths are possible: Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 Prerequisites Ensure that your Red Hat Enterprise Linux hosts meet the requirements for the upgrade. For Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 upgrade, see Planning an upgrade in Upgrading from RHEL 7 to RHEL 8 . For Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 upgrade, see Planning an upgrade to RHEL 9 in Upgrading from RHEL 8 to RHEL 9 . Prepare your Red Hat Enterprise Linux hosts for the upgrade. For Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 upgrade, see Preparing a RHEL 7 system for the upgrade in Upgrading from RHEL 7 to RHEL 8 . For Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 upgrade, see Preparing a RHEL 8 system for the upgrade in Upgrading from RHEL 8 to RHEL 9 . Distribute Satellite SSH keys to the hosts that you want to upgrade. For more information, see Section 12.14, "Distributing SSH keys for remote execution" . Procedure On Satellite, enable the Leapp plugin: If you are using a custom job template for the Leapp pre-upgrade check, configure the leapp_preupgrade remote execution feature to point to your template: In the Satellite web UI, navigate to Administer > Remote Execution Features . Click leapp_preupgrade . In the Job Template dropdown menu, select your template. Click Submit . In the Satellite web UI, navigate to Hosts > All Hosts . Select the hosts that you want to upgrade to the next major Red Hat Enterprise Linux version. From the Schedule Remote Job list, select Preupgrade check with Leapp . When the check is finished, click the Leapp preupgrade report tab to see if Leapp has found any issues on your hosts. Issues that have the Inhibitor flag are considered crucial and are likely to break the upgrade procedure. Issues that have the Has Remediation flag contain remediation that can help you fix the issue. Click an issue that is flagged as Has Remediation to expand it. If the issue contains a remediation Command , you can fix it directly from Satellite using remote execution. Select the issue. If the issue contains only a remediation Hint , use the hint to fix the issue on the host manually. Repeat this step for other issues. After you have selected any issues with remediation commands, click Fix Selected and submit the job. After the issues are fixed, click Rerun , and then click Submit to run the pre-upgrade check again to verify that the hosts you are upgrading do not have any issues and are ready to be upgraded. If the pre-upgrade check verifies that the hosts do not have any issues, click Run Upgrade and click Submit to start the upgrade. Alternatively, you can upgrade your host by selecting Upgrade with Leapp in the Schedule Remote Job drop-down menu.
|
[
"satellite-installer --enable-foreman-plugin-leapp"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/upgrading_hosts_to_next_major_release_managing-hosts
|
Chapter 1. Kernel
|
Chapter 1. Kernel The kernel shipped in Red Hat Enterprise Linux 6.5 includes several hundred bug fixes for, and enhancements to the Linux kernel. For details concerning important bugs fixed and enhancements added to the kernel for this release, refer to the kernel section of the Red Hat Enterprise Linux 6.5 Technical Notes . Support for PMC-Sierra Cards and Controllers The pm8001/pm80xx driver adds support for PMC-Sierra Adaptec Series 6H and 7H SAS/SATA HBA cards as well as PMC Sierra 8081, 8088, and 8089 chip based SAS/SATA controllers. Configurable Timeout for Unresponsive Devices In certain storage configurations (for example, configurations with many LUNs), the SCSI error handling code can spend a large amount of time issuing commands such as TEST UNIT READY to unresponsive storage devices. A new sysfs parameter, eh_timeout , has been added to the SCSI device object, which allows configuration of the timeout value for TEST UNIT READY and REQUEST SENSE commands used by the SCSI error handling code. This decreases the amount of time spent checking these unresponsive devices. The default value of eh_timeout is 10 seconds, which was the timeout value used prior to adding this functionality. Configuration of Maximum Time for Error Recovery A new sysfs parameter eh_deadline has been added to the SCSI host object, which enables configuring the maximum amount of time that the SCSI error handling will attempt to perform error recovery, before giving up and resetting the entire host bus adapter (HBA). The value of this parameter is specified in seconds, and the default is zero, which disables the time limit and allows all of the error recovery to take place. In addition to using sysfs , a default value can be set for all SCSI HBAs using the eh_deadline kernel parameter. Lenovo X220 Touchscreen Support Red Hat Enterprise Linux 6.5 now supports Lenovo X220 touchscreen. New Supported Compression Formats for makedumpfile In Red Hat Enterprise Linux 6.5, the makedumpfile utility supports the LZO and snappy compression formats. Using these compression formats instead of the zlib format is quicker, in particular when compressing data with randomized content.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_release_notes/kernel
|
3.10. Red Hat High Availability Add-On and SELinux
|
3.10. Red Hat High Availability Add-On and SELinux The High Availability Add-On for Red Hat Enterprise Linux 6 supports SELinux in the enforcing state with the SELinux policy type set to targeted . Note When using SELinux with the High Availability Add-On in a VM environment, you should ensure that the SELinux boolean fenced_can_network_connect is persistently set to on . This allows the fence_xvm fencing agent to work properly, enabling the system to fence virtual machines. For more information about SELinux, see Deployment Guide for Red Hat Enterprise Linux 6.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-selinux-ca
|
Chapter 25. level
|
Chapter 25. level The logging level from various sources, including rsyslog ( severitytext property), a Python logging module, and others. The following values come from syslog.h , and are preceded by their numeric equivalents : 0 = emerg , system is unusable. 1 = alert , action must be taken immediately. 2 = crit , critical conditions. 3 = err , error conditions. 4 = warn , warning conditions. 5 = notice , normal but significant condition. 6 = info , informational. 7 = debug , debug-level messages. The following two values are not part of syslog.h but are widely used: 8 = trace , trace-level messages, which are more verbose than debug messages. 9 = unknown , when the logging system gets a value it doesn't recognize. Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from Python logging , you can match CRITICAL with crit , ERROR with err , and so on. Data type keyword Example value info
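As a sketch only, the mapping below shows one way to translate Python logging levels to the level values described above, expressed as YAML; it is not part of the data model itself, and the NOTSET mapping is an assumption because that level has no direct syslog equivalent.
# Illustrative mapping from Python logging levels to the level keyword.
python_logging_to_level:
  CRITICAL: crit
  ERROR: err
  WARNING: warn
  INFO: info
  DEBUG: debug
  NOTSET: unknown  # assumption: nearest match when no level is set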
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/level
|
Chapter 2. Installing dynamic plugins using the Helm chart
|
Chapter 2. Installing dynamic plugins using the Helm chart You can deploy a Developer Hub instance using a Helm chart, which is a flexible installation method. With the Helm chart, you can sideload dynamic plugins into your Developer Hub instance without having to recompile your code or rebuild the container. To install dynamic plugins in Developer Hub using Helm, add the following global.dynamic parameters in your Helm chart: plugins : the dynamic plugins list intended for installation. By default, the list is empty. You can populate the plugins list with the following fields: package : a package specification for the dynamic plugin package that you want to install. You can use a package for either a local or an external dynamic plugin installation. For a local installation, use a path to the local folder containing the dynamic plugin. For an external installation, use a package specification from a public NPM repository. integrity (required for external packages): an integrity checksum in the form of <alg>-<digest> specific to the package. Supported algorithms include sha256 , sha384 and sha512 . pluginConfig : an optional plugin-specific app-config YAML fragment. See plugin configuration for more information. disabled : disables the dynamic plugin if set to true . Default: false . includes : a list of YAML files utilizing the same syntax. Note The plugins list in the includes file is merged with the plugins list in the main Helm values. If a plugin package is mentioned in both plugins lists, the plugins fields in the main Helm values override the plugins fields in the includes file. The default configuration includes the dynamic-plugins.default.yaml file, which contains all of the dynamic plugins preinstalled in Developer Hub, whether enabled or disabled by default. 2.1. Obtaining the integrity checksum To obtain the integrity checksum, enter the following command: 2.2. Example Helm chart configurations for dynamic plugin installations The following examples demonstrate how to configure the Helm chart for specific types of dynamic plugin installations. Configuring a local plugin and an external plugin when the external plugin requires a specific app-config global: dynamic: plugins: - package: <alocal package-spec used by npm pack> - package: <external package-spec used by npm pack> integrity: sha512-<some hash> pluginConfig: ... Disabling a plugin from an included file global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.default.yaml> disabled: true Enabling a plugin from an included file global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false Enabling a plugin that is disabled in an included file global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false 2.3. Installing external dynamic plugins using a Helm chart The NPM registry contains external dynamic plugins that you can use for demonstration purposes. 
For example, the following community plugins are available in the janus-idp organization in the NPMJS repository: Notifications (frontend and backend) Kubernetes actions (scaffolder actions) To install the Notifications and Kubernetes actions plugins, include them in the Helm chart values in the global.dynamic.plugins list as shown in the following example: global: dynamic: plugins: - package: '@janus-idp/[email protected]' # Integrity can be found at https://registry.npmjs.org/@janus-idp/plugin-notifications-backend-dynamic integrity: 'sha512-Qd8pniy1yRx+x7LnwjzQ6k9zP+C1yex24MaCcx7dGDPT/XbTokwoSZr4baSSn8jUA6P45NUUevu1d629mG4JGQ==' - package: '@janus-idp/[email protected]' # https://registry.npmjs.org/@janus-idp/plugin-notifications integrity: 'sha512-GCdEuHRQek3ay428C8C4wWgxjNpNwCXgIdFbUUFGCLLkBFSaOEw+XaBvWaBGtQ5BLgE3jQEUxa+422uzSYC5oQ==' pluginConfig: dynamicPlugins: frontend: janus-idp.backstage-plugin-notifications: appIcons: - name: notificationsIcon module: NotificationsPlugin importName: NotificationsActiveIcon dynamicRoutes: - path: /notifications importName: NotificationsPage module: NotificationsPlugin menuItem: icon: notificationsIcon text: Notifications config: pollingIntervalMs: 5000 - package: '@janus-idp/[email protected]' # https://registry.npmjs.org/@janus-idp/backstage-scaffolder-backend-module-kubernetes-dynamic integrity: 'sha512-19ie+FM3QHxWYPyYzE0uNdI5K8M4vGZ0SPeeTw85XPROY1DrIY7rMm2G0XT85L0ZmntHVwc9qW+SbHolPg/qRA==' proxy: endpoints: /explore-backend-completed: target: 'http://localhost:7017' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/search-backend-module-explore-wrapped-dynamic integrity: 'sha512-mv6LS8UOve+eumoMCVypGcd7b/L36lH2z11tGKVrt+m65VzQI4FgAJr9kNCrjUZPMyh36KVGIjYqsu9+kgzH5A==' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/plugin-catalog-backend-module-test-dynamic integrity: 'sha512-YsrZMThxJk7cYJU9FtAcsTCx9lCChpytK254TfGb3iMAYQyVcZnr5AA/AU+hezFnXLsr6gj8PP7z/mCZieuuDA=='
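For comparison with the external packages above, a local dynamic plugin is sideloaded by setting package to a folder path instead of an NPM package specification; the integrity field is omitted because it is only required for external packages. The folder path and plugin name below are hypothetical placeholders, not shipped plugins.
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      # Hypothetical local folder containing an exported dynamic plugin.
      - package: ./local-plugins/internal-backstage-plugin-example
        disabled: false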
|
[
"npm view <package name>@<version> dist.integrity",
"global: dynamic: plugins: - package: <alocal package-spec used by npm pack> - package: <external package-spec used by npm pack> integrity: sha512-<some hash> pluginConfig:",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.default.yaml> disabled: true",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false",
"global: dynamic: plugins: - package: '@janus-idp/[email protected]' # Integrity can be found at https://registry.npmjs.org/@janus-idp/plugin-notifications-backend-dynamic integrity: 'sha512-Qd8pniy1yRx+x7LnwjzQ6k9zP+C1yex24MaCcx7dGDPT/XbTokwoSZr4baSSn8jUA6P45NUUevu1d629mG4JGQ==' - package: '@janus-idp/[email protected]' # https://registry.npmjs.org/@janus-idp/plugin-notifications integrity: 'sha512-GCdEuHRQek3ay428C8C4wWgxjNpNwCXgIdFbUUFGCLLkBFSaOEw+XaBvWaBGtQ5BLgE3jQEUxa+422uzSYC5oQ==' pluginConfig: dynamicPlugins: frontend: janus-idp.backstage-plugin-notifications: appIcons: - name: notificationsIcon module: NotificationsPlugin importName: NotificationsActiveIcon dynamicRoutes: - path: /notifications importName: NotificationsPage module: NotificationsPlugin menuItem: icon: notificationsIcon text: Notifications config: pollingIntervalMs: 5000 - package: '@janus-idp/[email protected]' # https://registry.npmjs.org/@janus-idp/backstage-scaffolder-backend-module-kubernetes-dynamic integrity: 'sha512-19ie+FM3QHxWYPyYzE0uNdI5K8M4vGZ0SPeeTw85XPROY1DrIY7rMm2G0XT85L0ZmntHVwc9qW+SbHolPg/qRA==' proxy: endpoints: /explore-backend-completed: target: 'http://localhost:7017' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/search-backend-module-explore-wrapped-dynamic integrity: 'sha512-mv6LS8UOve+eumoMCVypGcd7b/L36lH2z11tGKVrt+m65VzQI4FgAJr9kNCrjUZPMyh36KVGIjYqsu9+kgzH5A==' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/plugin-catalog-backend-module-test-dynamic integrity: 'sha512-YsrZMThxJk7cYJU9FtAcsTCx9lCChpytK254TfGb3iMAYQyVcZnr5AA/AU+hezFnXLsr6gj8PP7z/mCZieuuDA=='"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_and_viewing_dynamic_plugins/con-install-dynamic-plugin-helm_title-plugins-rhdh-about
|
Network Observability
|
Network Observability OpenShift Container Platform 4.13 Configuring and using the Network Observability Operator in OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_observability/index
|
Appendix A. VDSM Service and Hooks
|
Appendix A. VDSM Service and Hooks The VDSM service is used by the Red Hat Virtualization Manager to manage Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts. VDSM manages and monitors the host's storage, memory, and network resources. It also coordinates virtual machine creation, statistics gathering, log collection, and other host administration tasks. VDSM runs as a daemon on each host managed by Red Hat Virtualization Manager. It answers XML-RPC calls from clients. The Red Hat Virtualization Manager functions as a VDSM client. VDSM is extensible via hooks. Hooks are scripts executed on the host when key events occur. When a supported event occurs, VDSM runs any executable hook scripts in /usr/libexec/vdsm/hooks/ nn_event-name / on the host in alphanumeric order. By convention, each hook script is assigned a two-digit number, included at the front of the file name, to ensure that the order in which the scripts run is clear. You can create hook scripts in any programming language; however, Python is used for the examples contained in this chapter. Note that all scripts defined on the host for the event are executed. If you require that a given hook is only executed for a subset of the virtual machines which run on the host, you must ensure that the hook script itself handles this requirement by evaluating the Custom Properties associated with the virtual machine. Warning VDSM hooks can interfere with the operation of Red Hat Virtualization. A bug in a VDSM hook has the potential to cause virtual machine crashes and loss of data. VDSM hooks should be implemented with caution and tested rigorously. The Hooks API is new and subject to significant change in the future. You can extend VDSM with event-driven hooks. Extending VDSM with hooks is an experimental technology, and this chapter is intended for experienced developers. By setting custom properties on virtual machines it is possible to pass additional parameters, specific to a given virtual machine, to the hook scripts. A.1. Installing a VDSM hook By default, VDSM hooks are not installed. If you need a specific hook, you must install it manually. Prerequisites The host repository must be enabled. You are logged into the host with root permissions. Procedure Get a list of available hooks: Put the host in maintenance mode. Install the desired VDSM hook package on the host: For example, to install the vdsm-hook-vhostmd package on the host, enter the following: Restart the host. Additional resources Enabling the Red Hat Virtualization Host Repository Enabling the Red Hat Enterprise Linux host Repositories
|
[
"dnf list vdsm\\*hook\\*",
"dnf install <vdsm-hook-name>",
"dnf install vdsm-hook-vhostmd"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/appe-vdsm_and_hooks
|
Chapter 4. Catalogs
|
Chapter 4. Catalogs 4.1. File-based catalogs Operator Lifecycle Manager (OLM) v1 in OpenShift Container Platform supports file-based catalogs for discovering and sourcing cluster extensions, including Operators, on a cluster. 4.1.1. Highlights File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility. Editing With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI. This editability enables the following features and user-defined extensions: Promoting an existing bundle to a new channel Changing the default channel of a package Custom algorithms for adding, updating, and removing upgrade paths Composability File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB . A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it. This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these. Note Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found. Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users. Extensibility The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations. For example, a tool could translate a high-level API, such as (mode=semver) , down to the low-level, file-based catalog format for upgrade paths. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet a certain criteria. While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Container Platform releases, the major benefit is that catalog maintainers have this capability as well. 4.1.2. Directory structure File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. 
The CLI attempts to load every file it finds and fails if any errors occur. Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files. Example .indexignore file # Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package's file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format. Basic recommended structure catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. The catalog can also be included in a parent catalog by copying it into the parent catalog's root directory. 4.1.3. Schemas File-based catalogs use a format, based on the CUE language specification , that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to: _Meta schema _Meta: { // schema is required and must be a non-empty string schema: string & !="" // package is optional, but if it's defined, it must be a non-empty string package?: string & !="" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } Note No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE. An Operator Lifecycle Manager (OLM) catalog currently uses three schemas ( olm.package , olm.channel , and olm.bundle ), which correspond to OLM's existing package and bundle concepts. Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs. Note All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own. 4.1.3.1. olm.package schema The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon. Example 4.1. olm.package schema #Package: { schema: "olm.package" // Package name name: string & !="" // A description of the package description?: string // The package's default channel defaultChannel: string & !="" // An optional icon icon?: { base64data: string mediatype: string } } 4.1.3.2. olm.channel schema The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade paths for those bundles. If a bundle entry represents an edge in multiple olm.channel blobs, it can only appear once per channel. It is valid for an entry's replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads. Example 4.2. 
olm.channel schema #Channel: { schema: "olm.channel" package: string & !="" name: string & !="" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !="" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !="" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=""] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !="" } Warning When using the skipRange field, the skipped Operator versions are pruned from the update graph and are no longer installable by users with the spec.startingCSV property of Subscription objects. You can update an Operator incrementally while keeping previously installed versions available to users for future installation by using both the skipRange and replaces fields. Ensure that the replaces field points to the immediately previous version of the Operator version in question. 4.1.3.3. olm.bundle schema Example 4.3. olm.bundle schema #Bundle: { schema: "olm.bundle" package: string & !="" name: string & !="" image: string & !="" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !="" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !="" } 4.1.3.4. olm.deprecations schema The optional olm.deprecations schema defines deprecation information for packages, bundles, and channels in a catalog. Operator authors can use this schema to provide relevant messages about their Operators, such as support status and recommended upgrade paths, to users running those Operators from a catalog. When this schema is defined, the OpenShift Container Platform web console displays warning badges for the affected elements of the Operator, including any custom deprecation messages, on both the pre- and post-installation pages of the OperatorHub. An olm.deprecations schema entry contains one or more of the following reference types, which indicates the deprecation scope. After the Operator is installed, any specified messages can be viewed as status conditions on the related Subscription object. Table 4.1. Deprecation reference types Type Scope Status condition olm.package Represents the entire package PackageDeprecated olm.channel Represents one channel ChannelDeprecated olm.bundle Represents one bundle version BundleDeprecated Each reference type has its own requirements, as detailed in the following example. Example 4.4. Example olm.deprecations schema with each reference type schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.
1 Each deprecation schema must have a package value, and that package reference must be unique across the catalog. There must not be an associated name field. 2 The olm.package schema must not include a name field, because it is determined by the package field defined earlier in the schema. 3 All message fields, for any reference type, must be a non-zero length and represented as an opaque text blob. 4 The name field for the olm.channel schema is required. 5 The name field for the olm.bundle schema is required. Note The deprecation feature does not consider overlapping deprecation, for example package versus channel versus bundle. Operator authors can save olm.deprecations schema entries as a deprecations.yaml file in the same directory as the package's index.yaml file: Example directory structure for a catalog with deprecations my-catalog └── my-operator ├── index.yaml └── deprecations.yaml Additional resources Updating or filtering a file-based catalog image 4.1.4. Properties Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML. OLM defines a handful of property types, again using the reserved olm.* prefix. 4.1.4.1. olm.package property The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle's first-class package field, and the version field must be a valid semantic version. Example 4.5. olm.package property #PropertyPackage: { type: "olm.package" value: { packageName: string & !="" version: string & !="" } } 4.1.4.2. olm.gvk property The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations. Example 4.6. olm.gvk property #PropertyGVK: { type: "olm.gvk" value: { group: string & !="" version: string & !="" kind: string & !="" } } 4.1.4.3. olm.package.required The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range. Example 4.7. olm.package.required property #PropertyPackageRequired: { type: "olm.package.required" value: { packageName: string & !="" versionRange: string & !="" } } 4.1.4.4. olm.gvk.required The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations. Example 4.8. olm.gvk.required property #PropertyGVKRequired: { type: "olm.gvk.required" value: { group: string & !="" version: string & !="" kind: string & !="" } } 4.1.5. Example catalog With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. 
Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog's root directory. There are many possible ways to build a file-based catalog; the following steps outline a simple approach: Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog: Example catalog configuration file name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317 Run a script that parses the configuration file and creates a new catalog from its references: Example script name=USD(yq eval '.name' catalog.yaml) mkdir "USDname" yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + "|" + USDcatalog + "/" + .name + "/index.yaml"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render "USDimage" > "USDfile" done opm generate dockerfile "USDname" indexImage=USD(yq eval '.repo + ":" + .tag' catalog.yaml) docker build -t "USDindexImage" -f "USDname.Dockerfile" . docker push "USDindexImage" 4.1.6. Guidelines Consider the following guidelines when maintaining file-based catalogs. 4.1.6.1. Immutable bundles The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable. If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade path from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog. However, there are some cases where a change in the catalog metadata is preferred: Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob. New upgrade paths: If you release a new 1.2.z bundle version, for example 1.2.4 , but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4 . 4.1.6.2. Source control Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps: Update the source-controlled catalog directory with a new commit. Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version> , so that users can receive updates to a catalog as they become available. 4.1.7. CLI usage For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs . For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools . 4.1.8. Automation Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. 
Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks: Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package's image reference. Check that the catalog updates pass the opm validate command. Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed. Automatically merge PRs that pass the checks. Automatically rebuild and republish the catalog image. 4.2. Red Hat-provided catalogs Red Hat provides several Operator catalogs that are included with OpenShift Container Platform by default. 4.2.1. About Red Hat-provided Operator catalogs The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces. The following Operator catalogs are distributed by Red Hat: Catalog Index image Description redhat-operators registry.redhat.io/redhat/redhat-operator-index:v4.18 Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. certified-operators registry.redhat.io/redhat/certified-operator-index:v4.18 Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. redhat-marketplace registry.redhat.io/redhat/redhat-marketplace-index:v4.18 Certified software that can be purchased from Red Hat Marketplace . community-operators registry.redhat.io/redhat/community-operator-index:v4.18 Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. During a cluster upgrade, the index image tags for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.8 to: registry.redhat.io/redhat/redhat-operator-index:v4.9 4.3. Managing catalogs Cluster administrators can add catalogs , or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog. You can manage catalogs and extensions declaratively from the CLI by using custom resources (CRs). File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). The format is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. Important Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API. If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.
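To make the spec.image field discussed in "About Red Hat-provided Operator catalogs" more concrete, the following sketch shows roughly what a CatalogSource object for the redhat-operators catalog looks like. Treat it as an illustration rather than the exact object on your cluster; the displayName, publisher, and polling interval values are assumptions. Example CatalogSource object (sketch)
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  sourceType: grpc                                               # catalog content is served over gRPC
  image: registry.redhat.io/redhat/redhat-operator-index:v4.18   # index image; the tag is what the CVO updates during a cluster upgrade
  displayName: Red Hat Operators                                 # assumed display name
  publisher: Red Hat                                             # assumed publisher
  updateStrategy:
    registryPoll:
      interval: 10m                                              # assumed interval for polling the registry for newer index digests
During a cluster upgrade, only the tag portion of the image value changes, for example from :v4.8 to :v4.9 as in the upgrade described above.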
4.3.1. About catalogs in OLM v1 You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) v1 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images. Additional resources File-based catalogs 4.3.2. Red Hat-provided Operator catalogs in OLM v1 Operator Lifecycle Manager (OLM) v1 includes the following Red Hat-provided Operator catalogs on the cluster by default. If you want to add an additional catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following custom resource (CR) examples show the default catalogs installed on the cluster. Red Hat Operators catalog apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: openshift-redhat-operators spec: priority: -100 source: image: pollIntervalMinutes: <poll_interval_duration> 1 ref: registry.redhat.io/redhat/redhat-operator-index:v4.18 type: Image 1 Specify the interval in minutes for polling the remote registry for newer image digests. To disable polling, do not set the field. Certified Operators catalog apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: openshift-certified-operators spec: priority: -200 source: type: image image: pollIntervalMinutes: 10 ref: registry.redhat.io/redhat/certified-operator-index:v4.18 type: Image Red Hat Marketplace catalog apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: openshift-redhat-marketplace spec: priority: -300 source: image: pollIntervalMinutes: 10 ref: registry.redhat.io/redhat/redhat-marketplace-index:v4.18 type: Image Community Operators catalog apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: openshift-community-operators spec: priority: -400 source: image: pollIntervalMinutes: 10 ref: registry.redhat.io/redhat/community-operator-index:v4.18 type: Image The following command adds a catalog to your cluster: Command syntax USD oc apply -f <catalog_name>.yaml 1 1 Specifies the catalog CR, such as my-catalog.yaml . 4.3.3. Adding a catalog to a cluster To add a catalog to a cluster for Operator Lifecycle Manager (OLM) v1 usage, create a ClusterCatalog custom resource (CR) and apply it to the cluster. Procedure Create a catalog custom resource (CR), similar to the following example: Example my-redhat-operators.yaml file apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: my-redhat-operators 1 spec: priority: 1000 2 source: image: pollIntervalMinutes: 10 3 ref: registry.redhat.io/redhat/community-operator-index:v4.18 4 type: Image 1 The catalog is automatically labeled with the value of the metadata.name field when it is applied to the cluster. For more information about labels and catalog selection, see "Catalog content resolution". 2 Optional: Specify the priority of the catalog in relation to the other catalogs on the cluster. For more information, see "Catalog selection by priority". 3 Specify the interval in minutes for polling the remote registry for newer image digests. To disable polling, do not set the field. 4 Specify the catalog image in the spec.source.image.ref field. 
Add the catalog to your cluster by running the following command: USD oc apply -f my-redhat-operators.yaml Example output clustercatalog.olm.operatorframework.io/my-redhat-operators created Verification Run the following commands to verify the status of your catalog: Check if your catalog is available by running the following command: USD oc get clustercatalog Example output NAME LASTUNPACKED SERVING AGE my-redhat-operators 55s True 64s openshift-certified-operators 83m True 84m openshift-community-operators 43m True 84m openshift-redhat-marketplace 83m True 84m openshift-redhat-operators 54m True 84m Check the status of your catalog by running the following command: USD oc describe clustercatalog my-redhat-operators Example output Name: my-redhat-operators Namespace: Labels: olm.operatorframework.io/metadata.name=my-redhat-operators Annotations: <none> API Version: olm.operatorframework.io/v1 Kind: ClusterCatalog Metadata: Creation Timestamp: 2025-02-18T20:28:50Z Finalizers: olm.operatorframework.io/delete-server-cache Generation: 1 Resource Version: 50248 UID: 86adf94f-d2a8-4e70-895b-31139f2eeab7 Spec: Availability Mode: Available Priority: 1000 Source: Image: Poll Interval Minutes: 10 Ref: registry.redhat.io/redhat/community-operator-index:v4.18 Type: Image Status: 1 Conditions: Last Transition Time: 2025-02-18T20:29:00Z Message: Successfully unpacked and stored content from resolved source Observed Generation: 1 Reason: Succeeded 2 Status: True Type: Progressing Last Transition Time: 2025-02-18T20:29:00Z Message: Serving desired content from resolved source Observed Generation: 1 Reason: Available Status: True Type: Serving Last Unpacked: 2025-02-18T20:28:59Z Resolved Source: Image: Ref: registry.redhat.io/redhat/community-operator-index@sha256:11627ea6fdd06b8092df815076e03cae9b7cede8b353c0b461328842d02896c5 3 Type: Image Urls: Base: https://catalogd-service.openshift-catalogd.svc/catalogs/my-redhat-operators Events: <none> 1 Describes the status of the catalog. 2 Displays the reason the catalog is in the current state. 3 Displays the image reference of the catalog. 4.3.4. Deleting a catalog You can delete a catalog by deleting its custom resource (CR). Prerequisites You have a catalog installed. Procedure Delete a catalog by running the following command: USD oc delete clustercatalog <catalog_name> Example output clustercatalog.olm.operatorframework.io "my-redhat-operators" deleted Verification Verify the catalog is deleted by running the following command: USD oc get clustercatalog 4.3.5. Disabling a default catalog You can disable the Red Hat-provided catalogs that are included with OpenShift Container Platform by default. Procedure Disable a default catalog by running the following command: USD oc patch clustercatalog openshift-certified-operators -p \ '{"spec": {"availabilityMode": "Unavailable"}}' --type=merge Example output clustercatalog.olm.operatorframework.io/openshift-certified-operators patched Verification Verify the catalog is disabled by running the following command: USD oc get clustercatalog openshift-certified-operators Example output NAME LASTUNPACKED SERVING AGE openshift-certified-operators False 6h54m 4.4. Catalog content resolution When you specify the cluster extension you want to install in a custom resource (CR), Operator Lifecycle Manager (OLM) v1 uses catalog selection to resolve what content is installed. You can perform the following actions to control the selection of catalog content: Specify labels to select the catalog.
Use match expressions to perform complex filtering across catalogs. Set catalog priority. If you do not specify any catalog selection criteria, Operator Lifecycle Manager (OLM) v1 selects an extension from any available catalog on the cluster that provides the requested package. During resolution, bundles that are not deprecated are preferred over deprecated bundles by default.
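The following is a minimal sketch of how label-based catalog selection can look in practice, assuming the OLM v1 ClusterExtension API. The selector restricts resolution to catalogs that carry the olm.operatorframework.io/metadata.name label shown in the ClusterCatalog output earlier. The extension name, namespace, service account, and package name are hypothetical, and the exact field layout should be verified against the ClusterExtension CRD on your cluster. Example ClusterExtension with a catalog selector (sketch)
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: my-extension                                  # hypothetical extension name
spec:
  namespace: my-extension-ns                          # hypothetical install namespace
  serviceAccount:
    name: my-extension-installer                      # hypothetical service account used for installation
  source:
    sourceType: Catalog
    catalog:
      packageName: my-operator                        # hypothetical package to install
      selector:
        matchLabels:
          olm.operatorframework.io/metadata.name: openshift-redhat-operators   # resolve content only from this catalog
A matchExpressions list can be used in place of matchLabels for more complex filtering, and catalog priority helps break ties when more than one selected catalog provides the requested package.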
|
[
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.",
"my-catalog └── my-operator ├── index.yaml └── deprecations.yaml",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: openshift-redhat-operators spec: priority: -100 source: image: pollIntervalMinutes: <poll_interval_duration> 1 ref: registry.redhat.io/redhat/redhat-operator-index:v4.18 type: Image",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: openshift-certified-operators spec: priority: -200 source: type: image image: pollIntervalMinutes: 10 ref: registry.redhat.io/redhat/certified-operator-index:v4.18 type: Image",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: openshift-redhat-marketplace spec: priority: -300 source: image: pollIntervalMinutes: 10 ref: registry.redhat.io/redhat/redhat-marketplace-index:v4.18 type: Image",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: openshift-community-operators spec: priority: -400 source: image: pollIntervalMinutes: 10 ref: registry.redhat.io/redhat/community-operator-index:v4.18 type: Image",
"oc apply -f <catalog_name>.yaml 1",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: my-redhat-operators 1 spec: priority: 1000 2 source: image: pollIntervalMinutes: 10 3 ref: registry.redhat.io/redhat/community-operator-index:v4.18 4 type: Image",
"oc apply -f my-redhat-operators.yaml",
"clustercatalog.olm.operatorframework.io/my-redhat-operators created",
"oc get clustercatalog",
"NAME LASTUNPACKED SERVING AGE my-redhat-operators 55s True 64s openshift-certified-operators 83m True 84m openshift-community-operators 43m True 84m openshift-redhat-marketplace 83m True 84m openshift-redhat-operators 54m True 84m",
"oc describe clustercatalog my-redhat-operators",
"Name: my-redhat-operators Namespace: Labels: olm.operatorframework.io/metadata.name=my-redhat-operators Annotations: <none> API Version: olm.operatorframework.io/v1 Kind: ClusterCatalog Metadata: Creation Timestamp: 2025-02-18T20:28:50Z Finalizers: olm.operatorframework.io/delete-server-cache Generation: 1 Resource Version: 50248 UID: 86adf94f-d2a8-4e70-895b-31139f2eeab7 Spec: Availability Mode: Available Priority: 1000 Source: Image: Poll Interval Minutes: 10 Ref: registry.redhat.io/redhat/community-operator-index:v4.18 Type: Image Status: 1 Conditions: Last Transition Time: 2025-02-18T20:29:00Z Message: Successfully unpacked and stored content from resolved source Observed Generation: 1 Reason: Succeeded 2 Status: True Type: Progressing Last Transition Time: 2025-02-18T20:29:00Z Message: Serving desired content from resolved source Observed Generation: 1 Reason: Available Status: True Type: Serving Last Unpacked: 2025-02-18T20:28:59Z Resolved Source: Image: Ref: registry.redhat.io/redhat/community-operator-index@sha256:11627ea6fdd06b8092df815076e03cae9b7cede8b353c0b461328842d02896c5 3 Type: Image Urls: Base: https://catalogd-service.openshift-catalogd.svc/catalogs/my-redhat-operators Events: <none>",
"oc delete clustercatalog <catalog_name>",
"clustercatalog.olm.operatorframework.io \"my-redhat-operators\" deleted",
"oc get clustercatalog",
"oc patch clustercatalog openshift-certified-operators -p '{\"spec\": {\"availabilityMode\": \"Unavailable\"}}' --type=merge",
"clustercatalog.olm.operatorframework.io/openshift-certified-operators patched",
"oc get clustercatalog openshift-certified-operators",
"NAME LASTUNPACKED SERVING AGE openshift-certified-operators False 6h54m"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extensions/catalogs
|
Preface
|
Preface Providing feedback on Red Hat documentation Red Hat appreciates your feedback on product documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to help the documentation team address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in other fields at their default values. In the Reporter field, enter your Jira user name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
| null |
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/release_notes_for_connectivity_link_1.0/pr01
|
Chapter 8. DNS [config.openshift.io/v1]
|
Chapter 8. DNS [config.openshift.io/v1] Description DNS holds cluster-wide information about DNS. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 8.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description baseDomain string baseDomain is the base domain of the cluster. All managed DNS records will be sub-domains of this base. For example, given the base domain openshift.example.com , an API server DNS record may be created for cluster-api.openshift.example.com . Once set, this field cannot be changed. platform object platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. privateZone object privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. publicZone object publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. 8.1.2. .spec.platform Description platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. Type object Required type Property Type Description aws object aws contains DNS configuration specific to the Amazon Web Services cloud provider. type string type is the underlying infrastructure provider for the cluster. Allowed values: "", "AWS". Individual components may not support all platforms, and must handle unrecognized platforms with best-effort defaults. 8.1.3. .spec.platform.aws Description aws contains DNS configuration specific to the Amazon Web Services cloud provider. Type object Property Type Description privateZoneIAMRole string privateZoneIAMRole contains the ARN of an IAM role that should be assumed when performing operations on the cluster's private hosted zone specified in the cluster DNS config. When left empty, no role should be assumed. 8.1.4. 
.spec.privateZone Description privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.5. .spec.publicZone Description publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object 8.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/dnses DELETE : delete collection of DNS GET : list objects of kind DNS POST : create a DNS /apis/config.openshift.io/v1/dnses/{name} DELETE : delete a DNS GET : read the specified DNS PATCH : partially update the specified DNS PUT : replace the specified DNS /apis/config.openshift.io/v1/dnses/{name}/status GET : read status of the specified DNS PATCH : partially update status of the specified DNS PUT : replace status of the specified DNS 8.2.1. /apis/config.openshift.io/v1/dnses HTTP method DELETE Description delete collection of DNS Table 8.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNS Table 8.2. HTTP responses HTTP code Reponse body 200 - OK DNSList schema 401 - Unauthorized Empty HTTP method POST Description create a DNS Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.4. Body parameters Parameter Type Description body DNS schema Table 8.5. HTTP responses HTTP code Reponse body 200 - OK DNS schema 201 - Created DNS schema 202 - Accepted DNS schema 401 - Unauthorized Empty 8.2.2. /apis/config.openshift.io/v1/dnses/{name} Table 8.6. Global path parameters Parameter Type Description name string name of the DNS HTTP method DELETE Description delete a DNS Table 8.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNS Table 8.9. HTTP responses HTTP code Reponse body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNS Table 8.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNS Table 8.12. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body DNS schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty 8.2.3. /apis/config.openshift.io/v1/dnses/{name}/status Table 8.15. Global path parameters Parameter Type Description name string name of the DNS HTTP method GET Description read status of the specified DNS Table 8.16. HTTP responses HTTP code Reponse body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNS Table 8.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.18. HTTP responses HTTP code Reponse body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNS Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body DNS schema Table 8.21. HTTP responses HTTP code Reponse body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty
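For orientation, the following sketch shows what a cluster DNS configuration object using the fields described above might look like. The base domain, zone identifier, tags, and IAM role ARN are placeholder values, not real resources. Example DNS object (sketch)
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster                        # the canonical name is cluster
spec:
  baseDomain: openshift.example.com    # all managed DNS records are sub-domains of this base
  platform:
    type: AWS
    aws:
      privateZoneIAMRole: arn:aws:iam::123456789012:role/example-private-zone-role   # placeholder ARN to assume for private zone operations
  publicZone:
    id: Z3URY6TWQ91KVV                 # placeholder public hosted zone ID
  privateZone:
    tags:
      Name: example-cluster-int        # placeholder tags used to query the private hosted zone
      kubernetes.io/cluster/example-cluster: owned
You can inspect the live object on a cluster with oc get dns cluster -o yaml.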
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/config_apis/dns-config-openshift-io-v1
|
Chapter 8. ConsoleSample [console.openshift.io/v1]
|
Chapter 8. ConsoleSample [console.openshift.io/v1] Description ConsoleSample is an extension to customizing OpenShift web console by adding samples. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec contains configuration for a console sample. 8.1.1. .spec Description spec contains configuration for a console sample. Type object Required abstract description source title Property Type Description abstract string abstract is a short introduction to the sample. It is required and must be no more than 100 characters in length. The abstract is shown on the sample card tile below the title and provider and is limited to three lines of content. description string description is a long form explanation of the sample. It is required and can have a maximum length of 4096 characters. It is a README.md-like content for additional information, links, pre-conditions, and other instructions. It will be rendered as Markdown so that it can contain line breaks, links, and other simple formatting. icon string icon is an optional base64 encoded image and shown beside the sample title. The format must follow the data: URL format and can have a maximum size of 10 KB . data:[<mediatype>][;base64],<base64 encoded image> For example: data:image;base64, plus the base64 encoded image. Vector images can also be used. SVG icons must start with: data:image/svg+xml;base64, plus the base64 encoded SVG image. All sample catalog icons will be shown on a white background (also when the dark theme is used). The web console ensures that different aspect ratios work correctly. Currently, the surface of the icon is at most 40x100px. For more information on the data URL format, please visit https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs . provider string provider is an optional label to honor who provides the sample. It is optional and must be no more than 50 characters in length. A provider can be a company like "Red Hat" or an organization like "CNCF" or "Knative". Currently, the provider is only shown on the sample card tile below the title with the prefix "Provided by " source object source defines where to deploy the sample service from. The sample may be sourced from an external git repository or container image. tags array (string) tags are optional string values that can be used to find samples in the samples catalog. Examples of common tags may be "Java", "Quarkus", etc. They will be displayed on the samples details page. title string title is the display name of the sample. It is required and must be no more than 50 characters in length. 
type string type is an optional label to group multiple samples. It is optional and must be no more than 20 characters in length. Recommendation is a singular term like "Builder Image", "Devfile" or "Serverless Function". Currently, the type is shown a badge on the sample card tile in the top right corner. 8.1.2. .spec.source Description source defines where to deploy the sample service from. The sample may be sourced from an external git repository or container image. Type object Required type Property Type Description containerImport object containerImport allows the user import a container image. gitImport object gitImport allows the user to import code from a git repository. type string type of the sample, currently supported: "GitImport";"ContainerImport" 8.1.3. .spec.source.containerImport Description containerImport allows the user import a container image. Type object Required image Property Type Description image string reference to a container image that provides a HTTP service. The service must be exposed on the default port (8080) unless otherwise configured with the port field. Supported formats: - <repository-name>/<image-name> - docker.io/<repository-name>/<image-name> - quay.io/<repository-name>/<image-name> - quay.io/<repository-name>/<image-name>@sha256:<image hash> - quay.io/<repository-name>/<image-name>:<tag> service object service contains configuration for the Service resource created for this sample. 8.1.4. .spec.source.containerImport.service Description service contains configuration for the Service resource created for this sample. Type object Property Type Description targetPort integer targetPort is the port that the service listens on for HTTP requests. This port will be used for Service and Route created for this sample. Port must be in the range 1 to 65535. Default port is 8080. 8.1.5. .spec.source.gitImport Description gitImport allows the user to import code from a git repository. Type object Required repository Property Type Description repository object repository contains the reference to the actual Git repository. service object service contains configuration for the Service resource created for this sample. 8.1.6. .spec.source.gitImport.repository Description repository contains the reference to the actual Git repository. Type object Required url Property Type Description contextDir string contextDir is used to specify a directory within the repository to build the component. Must start with / and have a maximum length of 256 characters. When omitted, the default value is to build from the root of the repository. revision string revision is the git revision at which to clone the git repository Can be used to clone a specific branch, tag or commit SHA. Must be at most 256 characters in length. When omitted the repository's default branch is used. url string url of the Git repository that contains a HTTP service. The HTTP service must be exposed on the default port (8080) unless otherwise configured with the port field. Only public repositories on GitHub, GitLab and Bitbucket are currently supported: - https://github.com/<org>/<repository> - https://gitlab.com/<org>/<repository> - https://bitbucket.org/<org>/<repository> The url must have a maximum length of 256 characters. 8.1.7. .spec.source.gitImport.service Description service contains configuration for the Service resource created for this sample. Type object Property Type Description targetPort integer targetPort is the port that the service listens on for HTTP requests. 
This port will be used for Service created for this sample. Port must be in the range 1 to 65535. Default port is 8080. 8.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolesamples DELETE : delete collection of ConsoleSample GET : list objects of kind ConsoleSample POST : create a ConsoleSample /apis/console.openshift.io/v1/consolesamples/{name} DELETE : delete a ConsoleSample GET : read the specified ConsoleSample PATCH : partially update the specified ConsoleSample PUT : replace the specified ConsoleSample 8.2.1. /apis/console.openshift.io/v1/consolesamples HTTP method DELETE Description delete collection of ConsoleSample Table 8.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleSample Table 8.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleSampleList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleSample Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.4. Body parameters Parameter Type Description body ConsoleSample schema Table 8.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleSample schema 201 - Created ConsoleSample schema 202 - Accepted ConsoleSample schema 401 - Unauthorized Empty 8.2.2. /apis/console.openshift.io/v1/consolesamples/{name} Table 8.6. Global path parameters Parameter Type Description name string name of the ConsoleSample HTTP method DELETE Description delete a ConsoleSample Table 8.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleSample Table 8.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleSample schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleSample Table 8.10. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleSample schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleSample Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body ConsoleSample schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleSample schema 201 - Created ConsoleSample schema 401 - Unauthorized Empty
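As a rough illustration of the spec described above, the following ConsoleSample uses the GitImport source type. The repository URL, names, and tag values are hypothetical; they only show how the required and optional fields fit together. Example ConsoleSample object (sketch)
apiVersion: console.openshift.io/v1
kind: ConsoleSample
metadata:
  name: example-nodejs-sample              # hypothetical sample name
spec:
  title: Node.js Sample                    # required, at most 50 characters
  abstract: A minimal Node.js HTTP service imported from a public Git repository.   # required, at most 100 characters
  description: |                           # required, rendered as Markdown
    # Node.js Sample
    Deploys a small HTTP service from a public Git repository.
  provider: Example Org                    # optional, shown as "Provided by Example Org"
  type: Builder Image                      # optional grouping badge, at most 20 characters
  tags:
  - JavaScript
  - Node.js
  source:
    type: GitImport
    gitImport:
      repository:
        url: https://github.com/example-org/nodejs-sample   # hypothetical public GitHub repository
      service:
        targetPort: 8080                   # default port, shown here for clarity
After you apply an object like this, the sample is expected to appear in the web console samples catalog.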
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/console_apis/consolesample-console-openshift-io-v1
|
Deploying OpenShift Data Foundation using Microsoft Azure
|
Deploying OpenShift Data Foundation using Microsoft Azure Red Hat OpenShift Data Foundation 4.9 Instructions on deploying OpenShift Data Foundation using Microsoft Azure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Microsoft Azure.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_microsoft_azure/index
|
function::assert
|
function::assert Name function::assert - evaluate assertion Synopsis Arguments expression The expression to evaluate msg The formatted message string Description This function checks the expression and aborts the currently running probe if the expression evaluates to zero. It uses error and may be caught by try{} catch{}.
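A short, hypothetical example of calling the function from the command line; the probe, values, and message are illustrative only:
stap -e 'probe begin { x = 2 + 2; assert(x == 4, sprintf("unexpected value: %d", x)); printf("assertion held\n"); exit() }'
If the expression evaluates to zero, the raised error aborts the probe unless the call is wrapped in a try { } catch { } block.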
|
[
"assert(expression:,msg:)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-assert
|
Chapter 11. Monitoring project and application metrics using the Developer perspective
|
Chapter 11. Monitoring project and application metrics using the Developer perspective The Observe view in the Developer perspective provides options to monitor your project or application metrics, such as CPU, memory, and bandwidth usage, and network related information. 11.1. Prerequisites You have created and deployed applications on OpenShift Container Platform . You have logged in to the web console and have switched to the Developer perspective . 11.2. Monitoring your project metrics After you create applications in your project and deploy them, you can use the Developer perspective in the web console to see the metrics for your project. Procedure Go to Observe to see the Dashboard , Metrics , Alerts , and Events for your project. Optional: Use the Dashboard tab to see graphs depicting the following application metrics: CPU usage Memory usage Bandwidth consumption Network-related information such as the rate of transmitted and received packets and the rate of dropped packets. In the Dashboard tab, you can access the Kubernetes compute resources dashboards. Note In the Dashboard list, the Kubernetes / Compute Resources / Namespace (Pods) dashboard is selected by default. Use the following options to see further details: Select a dashboard from the Dashboard list to see the filtered metrics. All dashboards produce additional sub-menus when selected, except Kubernetes / Compute Resources / Namespace (Pods) . Select an option from the Time Range list to determine the time frame for the data being captured. Set a custom time range by selecting Custom time range from the Time Range list. You can input or select the From and To dates and times. Click Save to save the custom time range. Select an option from the Refresh Interval list to determine the time period after which the data is refreshed. Hover your cursor over the graphs to see specific details for your pod. Click Inspect located in the upper-right corner of every graph to see any particular graph details. The graph details appear in the Metrics tab. Optional: Use the Metrics tab to query for the required project metric. Figure 11.1. Monitoring metrics In the Select Query list, select an option to filter the required details for your project. The filtered metrics for all the application pods in your project are displayed in the graph. The pods in your project are also listed below. From the list of pods, clear the colored square boxes to remove the metrics for specific pods to further filter your query result. Click Show PromQL to see the Prometheus query. You can further modify this query with the help of prompts to customize the query and filter the metrics you want to see for that namespace. Use the drop-down list to set a time range for the data being displayed. You can click Reset Zoom to reset it to the default time range. Optional: In the Select Query list, select Custom Query to create a custom Prometheus query and filter relevant metrics. Optional: Use the Alerts tab to do the following tasks: See the rules that trigger alerts for the applications in your project. Identify the alerts firing in the project. Silence such alerts if required. Figure 11.2. Monitoring alerts Use the following options to see further details: Use the Filter list to filter the alerts by their Alert State and Severity . Click on an alert to go to the details page for that alert. In the Alerts Details page, you can click View Metrics to see the metrics for the alert. 
Use the Notifications toggle adjoining an alert rule to silence all the alerts for that rule, and then select the duration for which the alerts will be silenced from the Silence for list. You must have the permissions to edit alerts to see the Notifications toggle. Use the Options menu adjoining an alert rule to see the details of the alerting rule. Optional: Use the Events tab to see the events for your project. Figure 11.3. Monitoring events You can filter the displayed events using the following options: In the Resources list, select a resource to see events for that resource. In the All Types list, select a type of event to see events relevant to that type. Search for specific events using the Filter events by names or messages field. 11.3. Monitoring your application metrics After you create applications in your project and deploy them, you can use the Topology view in the Developer perspective to see the alerts and metrics for your application. Critical and warning alerts for your application are indicated on the workload node in the Topology view. Procedure To see the alerts for your workload: In the Topology view, click the workload to see the workload details in the right panel. Click the Observe tab to see the critical and warning alerts for the application; graphs for metrics, such as CPU, memory, and bandwidth usage; and all the events for the application. Note Only critical and warning alerts in the Firing state are displayed in the Topology view. Alerts in the Silenced , Pending and Not Firing states are not displayed. Figure 11.4. Monitoring application metrics Click the alert listed in the right panel to see the alert details in the Alert Details page. Click any of the charts to go to the Metrics tab to see the detailed metrics for the application. Click View monitoring dashboard to see the monitoring dashboard for that application. 11.4. Image vulnerabilities breakdown In the Developer perspective, the project dashboard shows the Image Vulnerabilities link in the Status section. Using this link, you can view the Image Vulnerabilities breakdown window, which includes details regarding vulnerable container images and fixable container images. The icon color indicates severity: Red: High priority. Fix immediately. Orange: Medium priority. Can be fixed after high-priority vulnerabilities. Yellow: Low priority. Can be fixed after high and medium-priority vulnerabilities. Based on the severity level, you can prioritize vulnerabilities and fix them in an organized manner. Figure 11.5. Viewing image vulnerabilities 11.5. Monitoring your application and image vulnerabilities metrics After you create applications in your project and deploy them, use the Developer perspective in the web console to see the metrics for your application dependency vulnerabilities across your cluster. The metrics help you to analyze the following image vulnerabilities in detail: Total count of vulnerable images in a selected project Severity-based counts of all vulnerable images in a selected project Drilldown into severity to obtain the details, such as count of vulnerabilities, count of fixable vulnerabilities, and number of affected pods for each vulnerable image Prerequisites You have installed the Red Hat Quay Container Security operator from the Operator Hub. Note The Red Hat Quay Container Security operator detects vulnerabilities by scanning the images that are in the quay registry. 
Procedure For a general overview of the image vulnerabilities, on the navigation panel of the Developer perspective, click Project to see the project dashboard. Click Image Vulnerabilities in the Status section. The window that opens displays details such as Vulnerable Container Images and Fixable Container Images . For a detailed vulnerabilities overview, click the Vulnerabilities tab on the project dashboard. To get more detail about an image, click its name. View the default graph with all types of vulnerabilities in the Details tab. Optional: Click the toggle button to view a specific type of vulnerability. For example, click App dependency to see vulnerabilities specific to application dependency. Optional: You can filter the list of vulnerabilities based on their Severity and Type or sort them by Severity , Package , Type , Source , Current Version , and Fixed in Version . Click a Vulnerability to get its associated details: Base image vulnerabilities display information from a Red Hat Security Advisory (RHSA). App dependency vulnerabilities display information from the Snyk security application. 11.6. Additional resources Monitoring overview
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/building_applications/odc-monitoring-project-and-application-metrics-using-developer-perspective
|
Chapter 5. Running Red Hat build of Keycloak in a container
|
Chapter 5. Running Red Hat build of Keycloak in a container This chapter describes how to optimize and run the Red Hat build of Keycloak container image to provide the best experience running a container. Warning This chapter applies only to building an image that you run in an OpenShift environment. Only an OpenShift environment is supported for this image. It is not supported if you run it in other Kubernetes distributions. 5.1. Creating a customized and optimized container image The default Red Hat build of Keycloak container image ships ready to be configured and optimized. For the best startup of your Red Hat build of Keycloak container, build an image by running the build step during the container build. This step will save time in every subsequent start phase of the container image. 5.1.1. Writing your optimized Red Hat build of Keycloak Containerfile The following Containerfile creates a pre-configured Red Hat build of Keycloak image that enables the health and metrics endpoints, enables the token exchange feature, and uses a PostgreSQL database. Containerfile: FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder # Enable health and metrics support ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true # Configure a database vendor ENV KC_DB=postgres WORKDIR /opt/keycloak # for demonstration purposes only, please make sure to use proper certificates in production instead RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore RUN /opt/keycloak/bin/kc.sh build FROM registry.redhat.io/rhbk/keycloak-rhel9:26 COPY --from=builder /opt/keycloak/ /opt/keycloak/ # change these values to point to a running postgres instance ENV KC_DB=postgres ENV KC_DB_URL=<DBURL> ENV KC_DB_USERNAME=<DBUSERNAME> ENV KC_DB_PASSWORD=<DBPASSWORD> ENV KC_HOSTNAME=localhost ENTRYPOINT ["/opt/keycloak/bin/kc.sh"] The build process includes multiple stages: Run the build command to set server build options to create an optimized image. The files generated by the build stage are copied into a new image. In the final image, additional configuration options for the hostname and database are set so that you don't need to set them again when running the container. In the entrypoint, the kc.sh script enables access to all the distribution sub-commands. To install custom providers, you just need to define a step to include the JAR file(s) into the /opt/keycloak/providers directory. This step must be placed before the line that RUNs the build command, as shown below: # An example build step that downloads a JAR file from a URL and adds it to the providers directory FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder ... # Add the provider JAR file to the providers directory ADD --chown=keycloak:keycloak --chmod=644 <MY_PROVIDER_JAR_URL> /opt/keycloak/providers/myprovider.jar ... # Context: RUN the build command RUN /opt/keycloak/bin/kc.sh build 5.1.2. Installing additional RPM packages If you try to install new software in a stage FROM registry.redhat.io/rhbk/keycloak-rhel9 , you will notice that microdnf , dnf , and even rpm are not installed. Also, very few packages are available, only enough for a bash shell, and to run Red Hat build of Keycloak itself. This is due to security hardening measures, which reduce the attack surface of the Red Hat build of Keycloak container.
First, consider if your use case can be implemented in a different way, and so avoid installing new RPMs into the final container: A RUN curl instruction in your Containerfile can be replaced with ADD , since that instruction natively supports remote URLs. Some common CLI tools can be replaced by creative use of the Linux filesystem. For example, ip addr show tap0 becomes cat /sys/class/net/tap0/address Tasks that need RPMs can be moved to a former stage of an image build, and the results copied across instead. Here is an example. Running update-ca-trust in a former build stage, then copying the result forward: FROM registry.access.redhat.com/ubi9 AS ubi-micro-build COPY mycertificate.crt /etc/pki/ca-trust/source/anchors/mycertificate.crt RUN update-ca-trust FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /etc/pki /etc/pki It is possible to install new RPMs if absolutely required, following this two-stage pattern established by ubi-micro: FROM registry.access.redhat.com/ubi9 AS ubi-micro-build RUN mkdir -p /mnt/rootfs RUN dnf install --installroot /mnt/rootfs <package names go here> --releasever 9 --setopt install_weak_deps=false --nodocs -y && \ dnf --installroot /mnt/rootfs clean all && \ rpm --root /mnt/rootfs -e --nodeps setup FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /mnt/rootfs / This approach uses a chroot, /mnt/rootfs , so that only the packages you specify and their dependencies are installed, and so can be easily copied into the second stage without guesswork. Warning Some packages have a large tree of dependencies. By installing new RPMs you may unintentionally increase the container's attack surface. Check the list of installed packages carefully. 5.1.3. Building the container image To build the actual container image, run the following command from the directory containing your Containerfile: podman build . -t mykeycloak 5.1.4. Starting the optimized Red Hat build of Keycloak container image To start the image, run: podman run --name mykeycloak -p 8443:8443 -p 9000:9000 \ -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \ mykeycloak \ start --optimized --hostname=localhost Red Hat build of Keycloak starts in production mode, using only secured HTTPS communication, and is available on https://localhost:8443 . Health check endpoints are available at https://localhost:9000/health , https://localhost:9000/health/ready and https://localhost:9000/health/live . Opening up https://localhost:9000/metrics leads to a page containing operational metrics that could be used by your monitoring solution. 5.2. Exposing the container to a different port By default, the server is listening for http and https requests using the ports 8080 and 8443 , respectively. If you want to expose the container using a different port, you need to set the hostname accordingly: Exposing the container using a port other than the default ports By setting the hostname option to a full url you can now access the server at https://localhost:3000 . 5.3. Trying Red Hat build of Keycloak in development mode The easiest way to try Red Hat build of Keycloak from a container for development or testing purposes is to use the Development mode. You use the start-dev command: podman run --name mykeycloak -p 8080:8080 \ -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \ registry.redhat.io/rhbk/keycloak-rhel9:26 \ start-dev Invoking this command starts the Red Hat build of Keycloak server in development mode. 
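To quickly confirm that the development server is up, you can query the OpenID Connect discovery endpoint of the built-in master realm from the host. This is an informal check rather than part of the documented procedure, and it assumes the -p 8080:8080 port mapping shown above:

curl http://localhost:8080/realms/master/.well-known/openid-configuration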
This mode should be strictly avoided in production environments because it has insecure defaults. For more information about running Red Hat build of Keycloak in production, see Configuring Red Hat build of Keycloak for production . 5.4. Running a standard Red Hat build of Keycloak container In keeping with concepts such as immutable infrastructure, containers need to be re-provisioned routinely. In these environments, you need containers that start fast; therefore, you need to create an optimized image as described in the preceding section. However, if your environment has different requirements, you can run a standard Red Hat build of Keycloak image by just running the start command. For example: podman run --name mykeycloak -p 8080:8080 \ -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \ registry.redhat.io/rhbk/keycloak-rhel9:26 \ start \ --db=postgres --features=token-exchange \ --db-url=<JDBC-URL> --db-username=<DB-USER> --db-password=<DB-PASSWORD> \ --https-key-store-file=<file> --https-key-store-password=<password> Running this command starts a Red Hat build of Keycloak server that detects and applies the build options first. In the example, the line --db=postgres --features=token-exchange sets the database vendor to PostgreSQL and enables the token exchange feature. Red Hat build of Keycloak then starts up and applies the configuration for the specific environment. This approach significantly increases startup time and creates an image that is mutable, which is not the best practice. 5.5. Provide initial admin credentials when running in a container Red Hat build of Keycloak only allows creating the initial admin user from a local network connection. This is not the case when running in a container, so you have to provide the following environment variables when you run the image: # setting the admin username -e KC_BOOTSTRAP_ADMIN_USERNAME=<admin-user-name> # setting the initial password -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me 5.6. Importing A Realm On Startup The Red Hat build of Keycloak containers have a directory /opt/keycloak/data/import . If you put one or more import files in that directory via a volume mount or other means and add the startup argument --import-realm , the Red Hat build of Keycloak container will import that data on startup! This may only make sense to do in Dev mode. podman run --name keycloak_unoptimized -p 8080:8080 \ -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \ -v /path/to/realm/data:/opt/keycloak/data/import \ registry.redhat.io/rhbk/keycloak-rhel9:26 \ start-dev --import-realm Feel free to join the open GitHub Discussion around enhancements of the admin bootstrapping process. 5.7. Specifying different memory settings The Red Hat build of Keycloak container, instead of specifying hardcoded values for the initial and maximum heap size, uses values relative to the total memory of the container. This behavior is achieved by the JVM options -XX:MaxRAMPercentage=70 and -XX:InitialRAMPercentage=50 . The -XX:MaxRAMPercentage option represents the maximum heap size as 70% of the total container memory. The -XX:InitialRAMPercentage option represents the initial heap size as 50% of the total container memory. These values were chosen based on a deeper analysis of Red Hat build of Keycloak memory management. As the heap size is dynamically calculated based on the total container memory, you should always set the memory limit for the container.
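As a rough worked example, assuming the default percentages above: a container started with a 2 GB memory limit ( -m 2g ) lets the heap grow to about 70% of 2048 MB, roughly 1434 MB, and starts with an initial heap of about 1024 MB; with a 1 GB limit, the maximum heap is roughly 717 MB. The exact figures depend on how the JVM rounds these calculations.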
Previously, the maximum heap size was set to 512 MB, and in order to approach similar values, you should set the memory limit to at least 750 MB. For smaller production-ready deployments, the recommended memory limit is 2 GB. The JVM options related to the heap might be overridden by setting the environment variable JAVA_OPTS_KC_HEAP . You can find the default values of the JAVA_OPTS_KC_HEAP in the source code of the kc.sh , or kc.bat script. For example, you can specify the environment variable and memory limit as follows: podman run --name mykeycloak -p 8080:8080 -m 1g \ -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \ -e JAVA_OPTS_KC_HEAP="-XX:MaxHeapFreeRatio=30 -XX:MaxRAMPercentage=65" \ registry.redhat.io/rhbk/keycloak-rhel9:26 \ start-dev Warning If the memory limit is not set, the memory consumption rapidly increases as the heap size can grow up to 70% of the total container memory. Once the JVM allocates the memory, it is returned to the OS reluctantly with the current Red Hat build of Keycloak GC settings. 5.8. Relevant options Value db 🛠 The database vendor. CLI: --db Env: KC_DB dev-file (default), dev-mem , mariadb , mssql , mysql , oracle , postgres db-password The password of the database user. CLI: --db-password Env: KC_DB_PASSWORD db-url The full database JDBC URL. If not provided, a default URL is set based on the selected database vendor. For instance, if using postgres , the default JDBC URL would be jdbc:postgresql://localhost/keycloak . CLI: --db-url Env: KC_DB_URL db-username The username of the database user. CLI: --db-username Env: KC_DB_USERNAME features 🛠 Enables a set of one or more features. CLI: --features Env: KC_FEATURES account-api[:v1] , account[:v3] , admin-api[:v1] , admin-fine-grained-authz[:v1] , admin[:v2] , authorization[:v1] , cache-embedded-remote-store[:v1] , ciba[:v1] , client-policies[:v1] , client-secret-rotation[:v1] , client-types[:v1] , clusterless[:v1] , declarative-ui[:v1] , device-flow[:v1] , docker[:v1] , dpop[:v1] , dynamic-scopes[:v1] , fips[:v1] , hostname[:v2] , impersonation[:v1] , kerberos[:v1] , login[:v2,v1] , multi-site[:v1] , oid4vc-vci[:v1] , opentelemetry[:v1] , organization[:v1] , par[:v1] , passkeys[:v1] , persistent-user-sessions[:v1] , preview , recovery-codes[:v1] , scripts[:v1] , step-up-authentication[:v1] , token-exchange[:v1] , transient-users[:v1] , update-email[:v1] , web-authn[:v1] hostname Address at which is the server exposed. Can be a full URL, or just a hostname. When only hostname is provided, scheme, port and context path are resolved from the request. CLI: --hostname Env: KC_HOSTNAME Available only when hostname:v2 feature is enabled https-key-store-file The key store which holds the certificate information instead of specifying separate files. CLI: --https-key-store-file Env: KC_HTTPS_KEY_STORE_FILE https-key-store-password The password of the key store file. CLI: --https-key-store-password Env: KC_HTTPS_KEY_STORE_PASSWORD password (default) health-enabled 🛠 If the server should expose health check endpoints. If enabled, health checks are available at the /health , /health/ready and /health/live endpoints. CLI: --health-enabled Env: KC_HEALTH_ENABLED true , false (default) metrics-enabled 🛠 If the server should expose metrics. If enabled, metrics are available at the /metrics endpoint. CLI: --metrics-enabled Env: KC_METRICS_ENABLED true , false (default)
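Each of the options above can be supplied either as a CLI argument or as the listed environment variable. As an illustrative sketch only, the standard container from Section 5.4 could be configured through environment variables instead of CLI arguments; the placeholder values are assumptions that you must replace with your own settings:

podman run --name mykeycloak -p 8443:8443 \
  -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me \
  -e KC_DB=postgres -e KC_DB_URL=<JDBC-URL> \
  -e KC_DB_USERNAME=<DB-USER> -e KC_DB_PASSWORD=<DB-PASSWORD> \
  -e KC_FEATURES=token-exchange \
  -e KC_HTTPS_KEY_STORE_FILE=<file> -e KC_HTTPS_KEY_STORE_PASSWORD=<password> \
  registry.redhat.io/rhbk/keycloak-rhel9:26 \
  start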
|
[
"FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder Enable health and metrics support ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true Configure a database vendor ENV KC_DB=postgres WORKDIR /opt/keycloak for demonstration purposes only, please make sure to use proper certificates in production instead RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname \"CN=server\" -alias server -ext \"SAN:c=DNS:localhost,IP:127.0.0.1\" -keystore conf/server.keystore RUN /opt/keycloak/bin/kc.sh build FROM registry.redhat.io/rhbk/keycloak-rhel9:26 COPY --from=builder /opt/keycloak/ /opt/keycloak/ change these values to point to a running postgres instance ENV KC_DB=postgres ENV KC_DB_URL=<DBURL> ENV KC_DB_USERNAME=<DBUSERNAME> ENV KC_DB_PASSWORD=<DBPASSWORD> ENV KC_HOSTNAME=localhost ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]",
"A example build step that downloads a JAR file from a URL and adds it to the providers directory FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder Add the provider JAR file to the providers directory ADD --chown=keycloak:keycloak --chmod=644 <MY_PROVIDER_JAR_URL> /opt/keycloak/providers/myprovider.jar Context: RUN the build command RUN /opt/keycloak/bin/kc.sh build",
"FROM registry.access.redhat.com/ubi9 AS ubi-micro-build COPY mycertificate.crt /etc/pki/ca-trust/source/anchors/mycertificate.crt RUN update-ca-trust FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /etc/pki /etc/pki",
"FROM registry.access.redhat.com/ubi9 AS ubi-micro-build RUN mkdir -p /mnt/rootfs RUN dnf install --installroot /mnt/rootfs <package names go here> --releasever 9 --setopt install_weak_deps=false --nodocs -y && dnf --installroot /mnt/rootfs clean all && rpm --root /mnt/rootfs -e --nodeps setup FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /mnt/rootfs /",
"build . -t mykeycloak",
"run --name mykeycloak -p 8443:8443 -p 9000:9000 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me mykeycloak start --optimized --hostname=localhost",
"run --name mykeycloak -p 3000:8443 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me mykeycloak start --optimized --hostname=https://localhost:3000",
"run --name mykeycloak -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:26 start-dev",
"run --name mykeycloak -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:26 start --db=postgres --features=token-exchange --db-url=<JDBC-URL> --db-username=<DB-USER> --db-password=<DB-PASSWORD> --https-key-store-file=<file> --https-key-store-password=<password>",
"setting the admin username -e KC_BOOTSTRAP_ADMIN_USERNAME=<admin-user-name> setting the initial password -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me",
"run --name keycloak_unoptimized -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me -v /path/to/realm/data:/opt/keycloak/data/import registry.redhat.io/rhbk/keycloak-rhel9:26 start-dev --import-realm",
"run --name mykeycloak -p 8080:8080 -m 1g -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me -e JAVA_OPTS_KC_HEAP=\"-XX:MaxHeapFreeRatio=30 -XX:MaxRAMPercentage=65\" registry.redhat.io/rhbk/keycloak-rhel9:26 start-dev"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_configuration_guide/containers-
|
Chapter 18. Managing RAID
|
Chapter 18. Managing RAID You can use a Redundant Array of Independent Disks (RAID) to store data across multiple drives. It can help you avoid data loss if a drive fails. 18.1. Overview of RAID In a RAID, multiple devices, such as HDDs, SSDs, or NVMe drives, are combined into an array to accomplish performance or redundancy goals not achievable with one large and expensive drive. This array of devices appears to the computer as a single logical storage unit or drive. RAID supports various configurations, including levels 0, 1, 4, 5, 6, 10, and linear. RAID uses techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Levels 4, 5 and 6) to achieve redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk crashes. RAID distributes data across each device in the array by breaking it down into consistently-sized chunks, commonly 256 KB or 512 KB, although other values are acceptable. It writes these chunks to a hard drive in the RAID array according to the RAID level employed. When reading the data, the process is reversed, giving the illusion that the multiple devices in the array are actually one large drive. RAID technology is beneficial for those who manage large amounts of data. The following are the primary reasons to deploy RAID: It enhances speed It increases storage capacity using a single virtual disk It minimizes data loss from disk failure It supports online conversion of the RAID layout and level 18.2. RAID types The following are the possible types of RAID: Firmware RAID Firmware RAID, also known as ATARAID, is a type of software RAID where the RAID sets can be configured using a firmware-based menu. The firmware used by this type of RAID also hooks into the BIOS, allowing you to boot from its RAID sets. Different vendors use different on-disk metadata formats to mark the RAID set members. The Intel Matrix RAID is an example of a firmware RAID system. Hardware RAID A hardware-based array manages the RAID subsystem independently from the host. It might present multiple devices per RAID array to the host. Hardware RAID devices might be internal or external to the system. Internal devices commonly consist of a specialized controller card that handles the RAID tasks transparently to the operating system. External devices commonly connect to the system via SCSI, Fibre Channel, iSCSI, InfiniBand, or other high speed network interconnect and present volumes such as logical units to the system. RAID controller cards function like a SCSI controller to the operating system and handle all the actual drive communications. You can plug the drives into the RAID controller similar to a normal SCSI controller and then add them to the RAID controller's configuration. The operating system will not be able to tell the difference. Software RAID A software RAID implements the various RAID levels in the kernel block device code. It offers the cheapest possible solution because expensive disk controller cards or hot-swap chassis are not required. With hot-swap chassis, you can remove a hard drive without powering off your system. Software RAID also works with any block storage that is supported by the Linux kernel, such as SATA, SCSI, and NVMe. With today's faster CPUs, software RAID also generally outperforms hardware RAID, unless you use high-end storage devices. Since the Linux kernel contains a multiple device (MD) driver, the RAID solution becomes completely hardware independent.
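On a running system, a quick way to see which software RAID arrays the MD driver currently manages is to read /proc/mdstat ; the output varies with your configuration:

cat /proc/mdstat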
The performance of a software-based array depends on the server CPU performance and load. The following are the key features of the Linux software RAID stack: Multithreaded design Portability of arrays between Linux machines without reconstruction Backgrounded array reconstruction using idle system resources Hot-swap drive support Automatic CPU detection to take advantage of certain CPU features such as streaming Single Instruction Multiple Data (SIMD) support. Automatic correction of bad sectors on disks in an array. Regular consistency checks of RAID data to ensure the health of the array. Proactive monitoring of arrays with email alerts sent to a designated email address on important events. Write-intent bitmaps, which drastically increase the speed of resync events by allowing the kernel to know precisely which portions of a disk need to be resynced instead of having to resync the entire array after a system crash. Note The resync is a process to synchronize the data over the devices in the existing RAID to achieve redundancy. Resync checkpointing so that if you reboot your computer during a resync, at startup the resync resumes where it left off and not starts all over again. The ability to change parameters of the array after installation, which is called reshaping. For example, you can grow a 4-disk RAID5 array to a 5-disk RAID5 array when you have a new device to add. This grow operation is done live and does not require you to reinstall on the new array. Reshaping supports changing the number of devices, the RAID algorithm or size of the RAID array type, such as RAID4, RAID5, RAID6, or RAID10. Takeover supports RAID level conversion, such as RAID0 to RAID6. Cluster MD, which is a storage solution for a cluster, provides the redundancy of RAID1 mirroring to the cluster. Currently, only RAID1 is supported. 18.3. RAID levels and linear support The following are the supported configurations by RAID, including levels 0, 1, 4, 5, 6, 10, and linear: Level 0 RAID level 0, often called striping, is a performance-oriented striped data mapping technique. This means the data being written to the array is broken down into stripes and written across the member disks of the array, allowing high I/O performance at low inherent cost but provides no redundancy. RAID level 0 implementations only stripe the data across the member devices up to the size of the smallest device in the array. This means that if you have multiple devices with slightly different sizes, each device gets treated as though it was the same size as the smallest drive. Therefore, the common storage capacity of a level 0 array is the total capacity of all disks. If the member disks have a different size, then the RAID0 uses all the space of those disks using the available zones. Level 1 RAID level 1, or mirroring, provides redundancy by writing identical data to each member disk of the array, leaving a mirrored copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks, and provides very good data reliability and improves performance for read-intensive applications but at relatively high costs. RAID level 1 is costly because you write the same information to all of the disks in the array, which provides data reliability, but in a much less space-efficient manner than parity based RAID levels such as level 5. 
However, this space inefficiency comes with a performance benefit: parity-based RAID levels consume considerably more CPU power in order to generate the parity, while RAID level 1 simply writes the same data more than once to the multiple RAID members with very little CPU overhead. As such, RAID level 1 can outperform the parity-based RAID levels on machines where software RAID is employed and CPU resources on the machine are consistently taxed with operations other than RAID activities. The storage capacity of the level 1 array is equal to the capacity of the smallest mirrored hard disk in a hardware RAID or the smallest mirrored partition in a software RAID. Level 1 redundancy is the highest possible among all RAID types, with the array being able to operate with only a single disk present. Level 4 Level 4 uses parity concentrated on a single disk drive to protect data. Parity information is calculated based on the content of the rest of the member disks in the array. This information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has been replaced. Since the dedicated parity disk represents an inherent bottleneck on all write transactions to the RAID array, level 4 is seldom used without accompanying technologies such as write-back caching, or in specific circumstances where the system administrator is intentionally designing the software RAID device with this bottleneck in mind, such as an array that has little to no write transactions once the array is populated with data. RAID level 4 is so rarely used that it is not available as an option in Anaconda. However, it could be created manually by the user if needed. The storage capacity of hardware RAID level 4 is equal to the capacity of the smallest member partition multiplied by the number of partitions minus one. The performance of a RAID level 4 array is always asymmetrical, which means reads outperform writes. This is because write operations consume extra CPU resources and main memory bandwidth when generating parity, and then also consume extra bus bandwidth when writing the actual data to disks because you are not only writing the data, but also the parity. Read operations need only read the data and not the parity unless the array is in a degraded state. As a result, read operations generate less traffic to the drives and across the buses of the computer for the same amount of data transfer under normal operating conditions. Level 5 This is the most common type of RAID. By distributing parity across all the member disk drives of an array, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process itself. Modern CPUs can calculate parity very fast. However, if you have a large number of disks in a RAID 5 array such that the combined aggregate data transfer speed across all devices is high enough, parity calculation can be a bottleneck. Level 5 has asymmetrical performance, with reads substantially outperforming writes. The storage capacity of RAID level 5 is calculated the same way as with level 4. For example, five 1 TB member partitions in a RAID level 5 array yield 4 TB of usable capacity. Level 6 This is a common level of RAID when data redundancy and preservation, and not performance, are the paramount concerns, but where the space inefficiency of level 1 is not acceptable.
Level 6 uses a complex parity scheme to be able to recover from the loss of any two drives in the array. This complex parity scheme creates a significantly higher CPU burden on software RAID devices and also imposes an increased burden during write transactions. As such, level 6 is considerably more asymmetrical in performance than levels 4 and 5. The total capacity of a RAID level 6 array is calculated similarly to RAID levels 5 and 4, except that you must subtract two devices instead of one from the device count for the extra parity storage space. Level 10 This RAID level attempts to combine the performance advantages of level 0 with the redundancy of level 1. It also reduces some of the space wasted in level 1 arrays with more than two devices. With level 10, it is possible, for example, to create a 3-drive array configured to store only two copies of each piece of data, which then allows the overall array size to be 1.5 times the size of the smallest devices instead of only equal to the smallest device, similar to a 3-device, level 1 array. This avoids the CPU overhead of calculating parity that RAID level 6 requires, but it is less space efficient. The creation of RAID level 10 is not supported during installation. It is possible to create one manually after installation. Linear RAID Linear RAID is a grouping of drives to create a larger virtual drive. In linear RAID, the chunks are allocated sequentially from one member drive, going to the next drive only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split between member drives. Linear RAID also offers no redundancy and decreases reliability. If any one member drive fails, the entire array cannot be used and data can be lost. The capacity is the total of all member disks. 18.4. Supported RAID conversions It is possible to convert from one RAID level to another. For example, you can convert from RAID5 to RAID10, but not from RAID10 to RAID5. The following table describes the supported RAID conversions: RAID conversion levels Conversion steps Notes RAID level 0 to RAID level 4 You need to add a disk to the MD array because it requires at least 3 disks. RAID level 0 to RAID level 5 You need to add a disk to the MD array because it requires at least 3 disks. RAID level 0 to RAID level 10 You need to add two extra disks to the MD array. RAID level 1 to RAID level 0 RAID level 1 to RAID level 5 RAID level 4 to RAID level 0 RAID level 4 to RAID level 5 RAID level 5 to RAID level 0 RAID level 5 to RAID level 1 RAID level 5 to RAID level 4 RAID level 5 to RAID level 6 RAID level 5 to RAID level 10 Converting RAID level 5 to RAID level 10 is a two-step conversion: Convert to RAID level 0 Add two additional disks while converting to RAID10. RAID level 6 to RAID level 5 RAID level 10 to RAID level 0 Note Converting RAID 5 to RAID0 and RAID4 is only possible with the ALGORITHM_PARITY_N layout. After converting a RAID level, verify the conversion by using either the mdadm --detail /dev/md0 or cat /proc/mdstat command. Additional resources mdadm(8) man page on your system 18.5. RAID subsystems The following subsystems compose RAID: Hardware RAID Controller Drivers Hardware RAID controllers have no specific RAID subsystem. Since they use special RAID chipsets, hardware RAID controllers come with their own drivers. With these drivers, the system detects the RAID sets as regular disks. mdraid The mdraid subsystem was designed as a software RAID solution.
It is also the preferred solution for software RAID in Red Hat Enterprise Linux. This subsystem uses its own metadata format, which is referred to as native MD metadata. It also supports other metadata formats, known as external metadata. Red Hat Enterprise Linux 9 uses mdraid with external metadata to access Intel Rapid Storage (ISW) or Intel Matrix Storage Manager (IMSM) sets and Storage Networking Industry Association (SNIA) Disk Drive Format (DDF). The mdraid subsystem sets are configured and controlled through the mdadm utility. 18.6. Creating a software RAID during the installation Redundant Arrays of Independent Disks (RAID) devices are constructed from multiple storage devices that are arranged to provide increased performance and, in some configurations, greater fault tolerance. A RAID device is created in one step and disks are added or removed as necessary. You can configure one RAID partition for each physical disk in your system, so that the number of disks available to the installation program determines the levels of RAID device available. For example, if your system has two disks, you cannot create a RAID 10 device, as it requires a minimum of three separate disks. To optimize your system's storage performance and reliability, RHEL supports software RAID 0 , RAID 1 , RAID 4 , RAID 5 , RAID 6 , and RAID 10 types with LVM and LVM Thin Provisioning to set up storage on the installed system. Note On 64-bit IBM Z, the storage subsystem uses RAID transparently. You do not have to configure software RAID manually. Prerequisites You have selected two or more disks for installation before RAID configuration options are visible. Depending on the RAID type you want to create, at least two disks are required. You have created a mount point. By configuring a mount point, you can configure the RAID device. You have selected the Custom radio button on the Installation Destination window. Procedure From the left pane of the Manual Partitioning window, select the required partition. Under the Device(s) section, click Modify . The Configure Mount Point dialog box opens. Select the disks that you want to include in the RAID device and click Select . Click the Device Type drop-down menu and select RAID . Click the File System drop-down menu and select your preferred file system type. Click the RAID Level drop-down menu and select your preferred level of RAID. Click Update Settings to save your changes. Click Done to apply the settings to return to the Installation Summary window. Additional resources Creating a RAID LV with DM integrity Managing RAID 18.7. Creating a software RAID on an installed system You can create a software Redundant Array of Independent Disks (RAID) on an existing system using the mdadm utility. Prerequisites The mdadm package is installed. You have created two or more partitions on your system. For detailed instructions, see Creating a partition with parted . Procedure Create a RAID of two block devices, for example /dev/sda1 and /dev/sdc1 : The level_value option defines the RAID level. Optional: Check the status of the RAID: Optional: Observe the detailed information about each device in the RAID: Create a file system on the RAID drive: Replace xfs with the file system that you chose to format the drive with. Create a mount point for the RAID drive and mount it: Replace /mnt/raid1 with the mount point. If you want RHEL to mount the md0 RAID device automatically when the system boots, add an entry for your device to the /etc/fstab file :
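A condensed sketch of this procedure, using the example devices /dev/sda1 and /dev/sdc1 , a RAID level of 0, and the /mnt/raid1 mount point; replace these values with your own:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdc1
mdadm --detail /dev/md0                                  # optional: check the status of the new array
mkfs -t xfs /dev/md0                                     # create a file system on the RAID drive
mkdir /mnt/raid1
mount /dev/md0 /mnt/raid1
echo '/dev/md0 /mnt/raid1 xfs defaults 0 0' >> /etc/fstab   # one way to mount the array automatically at boot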
18.8. Creating RAID in the web console Configure RAID in the RHEL 9 web console. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . You have installed the cockpit-storaged package on your system. You have connected physical disks and they are visible to the system. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the panel, click Storage . In the Storage table, click the menu button and select Create MDRAID device . In the Create RAID Device field, enter a name for the new RAID. In the RAID Level drop-down list, select the level of RAID you want to use. From the Chunk Size drop-down list, select the size from the list of available options. The Chunk Size value specifies how large each block is for data writing. For example, if the chunk size is 512 KiB, the system writes the first 512 KiB to the first disk, the second 512 KiB is written to the second disk, and the third chunk is written to the third disk. If you have three disks in your RAID, the fourth 512 KiB is written to the first disk again. Select the disks you want to use for RAID. Click Create . Verification Go to the Storage section and check that you can see the new RAID in the RAID devices box. 18.9. Formatting RAID in the web console You can format and mount software RAID devices in the RHEL 9 web console. Formatting can take several minutes depending on the volume size and which formatting options are selected. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . You have installed the cockpit-storaged package. You have connected physical disks and they are visible to the system. You have created RAID. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the panel, click Storage . In the Storage table, click the menu button ... for the RAID device that you want to format. From the drop-down menu, select Format . In the Format field, enter a name. In the Mount Point field, add the mount path. From the Type drop-down list, select the type of file system. Optional: Check the Overwrite existing data with zeros option if the disk includes any sensitive data and you want to overwrite them. Otherwise the RHEL web console rewrites only the disk header. In the Encryption drop-down menu, select the type of encryption. If you do not want to encrypt the volume, select No encryption . In the At boot drop-down menu, select when you want to mount the volume. In the Mount options section: If you want to mount the volume as a read-only logical volume, select the Mount read only checkbox. If you want to change the default mount option, select the Custom mount options checkbox and add the mount options. Format the RAID partition: If you want to format and mount the partition, click the Format and mount button. If you want to only format the partition, click the Format only button. Verification After the formatting has completed successfully, you can see the details of the formatted logical volume in the Storage table on the Storage page. 18.10. Creating a partition table on RAID by using the web console Format RAID with the partition table on the new software RAID device created in the RHEL 9 interface.
Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . You have installed the cockpit-storaged package. You have connected physical disks and they are visible by the system. You have created RAID. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the panel, click Storage . In the Storage table, click the RAID device on which you want to create a partition table. Click the menu button ... in the MDRAID device section. From the drop-down menu, select Create partition table . In the Initialize disk dialog box, select the following: Partitioning : If the partition should be compatible with all systems and devices, select MBR . If the partition should be compatible with modern system and hard disks must be greater than 2 TB, select GPT . If you do not need partitioning, select No partitioning . Overwrite : Check the Overwrite existing data with zeros option if the disk includes any sensitive data and you want to overwrite them. Otherwise the RHEL web console rewrites only the disk header. Click Initialize . 18.11. Creating partitions on RAID by using the web console Create a partition in the existing partition table. You can create more partitions after the partition is created. Prerequisites The RHEL 9 web console is installed and accessible. For details, see Installing the web console . The cockpit-storaged package is installed on your system. A partition table on RAID is created. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the panel, click Storage . Click the RAID device on which you want to create a partition. On the RAID device page, scroll to the GPT partitions section and click the menu button [...]. Click Create partition and enter a name for the file system in the Create partition field. Do not use spaces in the name. In the Mount Point field, enter the mount path. In the Type drop-down list, select the type of file system. In the Size slider, set the size of the partition. Optional: Select Overwrite existing data with zeros , if the disk includes any sensitive data and you want to overwrite them. Otherwise the RHEL web console rewrites only the disk header. In the Encryption drop-down menu, select the type of encryption. If you do not want to encrypt the volume, select No encryption . In the At boot drop-down menu, select when you want to mount the volume. In the Mount options section: If you want to mount the volume as a read-only logical volume, select the Mount read only checkbox. If you want to change the default mount option, select the Custom mount options checkbox and add the mount options. Create the partition: If you want to create and mount the partition, click the Create and mount button. If you want to only create the partition, click the Create only button. Formatting can take several minutes depending on the volume size and which formatting options are selected. Verification You can see the details of the formatted logical volume in the Storage table on the main storage page. 18.12. Creating a volume group on top of RAID by using the web console Build a volume group from software RAID. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . 
You have installed the cockpit-storaged package. You have a RAID device that is not formatted and not mounted. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the panel, click Storage . In the Storage table, click the menu button [...] and select Create LVM2 volume group . In the Create LVM2 volume group field, enter a name for the new volume group. From the Disks list, select a RAID device. If you do not see the RAID in the list, unmount the RAID from the system. The RAID device must not be in use by the RHEL 9 system. Click Create . 18.13. Configuring a RAID volume by using the storage RHEL system role With the storage system role, you can configure a RAID volume on RHEL by using Red Hat Ansible Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a RAID volume to suit your requirements. Warning Device names might change in certain circumstances, for example, when you add a new disk to a system. Therefore, to prevent data loss, use persistent naming attributes in the playbook. For more information about persistent naming attributes, see Persistent naming attributes . Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify that the array was correctly created: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory 18.14. Extending RAID You can extend a RAID using the --grow option of the mdadm utility. Prerequisites Enough disk space. The parted package is installed. Procedure Extend RAID partitions. For more information, see Resizing a partition with parted . Extend RAID to the maximum of the partition capacity: To set a specific size, write the value of the --size parameter in kB, for example --size= 524228 . Increase the size of file system. For example, if the volume uses XFS and is mounted to /mnt/ , enter: Additional resources mdadm(8) man page on your system Managing file systems 18.15. Shrinking RAID You can shrink RAID using the --grow option of the mdadm utility. Important The XFS file system does not support shrinking. Prerequisites The parted package is installed. Procedure Shrink the file system. For more information, see Managing file systems . Decrease the RAID to the size, for example to 512 MB : Write the --size parameter in kB. Shrink the partition to the size you need. Additional resources mdadm(8) man page on your system Resizing a partition with parted 18.16. 
Converting a root disk to RAID1 after installation You can convert a non-RAID root disk to a RAID1 mirror after installing Red Hat Enterprise Linux 9. On the PowerPC (PPC) architecture, take the following additional steps: Prerequisites Completed the steps in the Red Hat Knowledgebase solution How do I convert my root disk to RAID1 after installation of Red Hat Enterprise Linux 7? . Note Executing the grub2-install /dev/sda command does not work on a PowerPC machine and returns an error, but the system boots as expected. Procedure Copy the contents of the PowerPC Reference Platform (PReP) boot partition from /dev/sda1 to /dev/sdb1 : Update the prep and boot flag on the first partition on both disks: 18.17. Creating advanced RAID devices In some cases, you might want to install the operating system on an array that is created before the installation completes. Usually, this means setting up the /boot or root file system arrays on a complex RAID device. In such cases, you might need to use array options that are not supported by the Anaconda installer. To work around this, perform the following steps. Note The limited Rescue Mode of the installer does not include man pages. Both the mdadm and md man pages contain useful information for creating custom RAID arrays, and might be needed throughout the workaround. Procedure Insert the install disk. During the initial boot up, select Rescue Mode instead of Install or Upgrade . When the system fully boots into Rescue mode , you can see the command line terminal. From this terminal, execute the following commands: Create RAID partitions on the target hard drives by using the parted command. Manually create raid arrays by using the mdadm command from those partitions using any and all settings and options available. Optional: After creating arrays, create file systems on the arrays as well. Reboot the computer and select Install or Upgrade to install. As the Anaconda installer searches the disks in the system, it finds the pre-existing RAID devices. When asked about how to use the disks in the system, select Custom Layout and click . In the device listing, the pre-existing MD RAID devices are listed. Select a RAID device and click Edit . Configure its mount point and optionally the type of file system it should use if you did not create one earlier, and then click Done . Anaconda installs to this pre-existing RAID device, preserving the custom options you selected when you created it in Rescue Mode. 18.18. Setting up email notifications to monitor a RAID You can set up email alerts to monitor RAID with the mdadm tool. Once the MAILADDR variable is set to the required email address, the monitoring system sends the alerts to the added email address. Prerequisites The mdadm package is installed. The mail service is set up. Procedure Create the /etc/mdadm.conf configuration file for monitoring array by scanning the RAID details: Note, that ARRAY and MAILADDR are mandatory variables. Open the /etc/mdadm.conf configuration file with a text editor of your choice and add the MAILADDR variable with the mail address for the notification. For example, add new line: Here, [email protected] is an email address to which you want to receive the alerts from the array monitoring. Save changes in the /etc/mdadm.conf file and close it. Additional resources mdadm.conf(5) man page on your system 18.19. Replacing a failed disk in RAID You can reconstruct the data from the failed disks using the remaining disks. 
The RAID level and the total number of disks determine the minimum number of remaining disks needed for a successful data reconstruction. In this procedure, the /dev/md0 RAID contains four disks. The /dev/sdd disk has failed and you need to replace it with the /dev/sdf disk. Prerequisites A spare disk for replacement. The mdadm package is installed. Procedure Check the failed disk: View the kernel logs: Search for a message similar to the following: Press Ctrl + C on your keyboard to exit the journalctl program. Mark the failed disk as faulty: Optional: Check if the failed disk was marked correctly: At the end of the output is a list of disks in the /dev/md0 RAID where the disk /dev/sdd has the faulty status: Remove the failed disk from the RAID: Warning If your RAID cannot withstand another disk failure, do not remove any disk until the new disk has the active sync status. You can monitor the progress using the watch cat /proc/mdstat command. Add the new disk to the RAID: The /dev/md0 RAID now includes the new disk /dev/sdf and the mdadm service automatically starts copying data to it from other disks. Verification Check the details of the array: If this command shows a list of disks in the /dev/md0 RAID where the new disk has spare rebuilding status at the end of the output, data is still being copied to it from other disks: After data copying is finished, the new disk has an active sync status. Additional resources Setting up email notifications to monitor a RAID 18.20. Repairing RAID disks You can repair disks in a RAID array by using the repair option. Prerequisites The mdadm package is installed. Procedure Check the array for failed disk behavior: This checks the array and the /sys/block/md0/md/sync_action file shows the sync action. Open the /sys/block/md0/md/sync_action file with the text editor of your choice and see if there is any message about disk synchronization failures. View the /sys/block/md0/md/mismatch_cnt file. If the mismatch_cnt parameter is not 0 , it means that the RAID disks need repair. Repair the disks in the array: This repairs the disks in the array and writes the result into the /sys/block/md0/md/sync_action file. View the synchronization progress:
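A condensed sketch of the commands behind the two procedures above, using the example devices named in this section ( /dev/md0 , /dev/sdd , /dev/sdf ); adjust the device names to your system:

# Replacing a failed disk
journalctl -k -f                              # watch the kernel log for failure messages, exit with Ctrl+C
mdadm --manage /dev/md0 --fail /dev/sdd       # mark the failed disk as faulty
mdadm --manage /dev/md0 --remove /dev/sdd     # remove it from the array
mdadm --manage /dev/md0 --add /dev/sdf        # add the replacement disk
mdadm --detail /dev/md0                       # verify the rebuild status

# Repairing RAID disks
echo check > /sys/block/md0/md/sync_action    # check the array
cat /sys/block/md0/md/mismatch_cnt            # a non-zero value means the disks need repair
echo repair > /sys/block/md0/md/sync_action   # repair the disks
cat /sys/block/md0/md/sync_action             # view the synchronization progress
cat /proc/mdstat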
|
[
"mdadm --grow /dev/md0 --level=4 -n3 --add /dev/vdd",
"mdadm --grow /dev/md0 --level=5 -n3 --add /dev/vdd",
"mdadm --grow /dev/md0 --level 10 -n 4 --add /dev/vd[ef]",
"mdadm --grow /dev/md0 -l0",
"mdadm --grow /dev/md0 --level=5",
"mdadm --grow /dev/md0 --level=0",
"mdadm --grow /dev/md0 --level=5",
"mdadm --grow /dev/md0 --level=0",
"mdadm -CR /dev/md0 -l5 -n3 /dev/sd[abc] --assume-clean --size 1G mdadm -D /dev/md0 | grep Level mdadm --grow /dev/md0 --array-size 1048576 mdadm --grow -n 2 /dev/md0 --backup=internal mdadm --grow -l1 /dev/md0 mdadm -D /dev/md0 | grep Level",
"mdadm --grow /dev/md0 --level=4",
"mdadm --grow /dev/md0 --level=6 --add /dev/vde",
"mdadm --grow /dev/md0 --level=0 # mdadm --grow /dev/md0 --level=10 --add /dev/vde /dev/vdf",
"mdadm --grow /dev/md0 --level=5",
"mdadm --grow /dev/md0 --level=0",
"mdadm --create /dev/md0 --level= 0 --raid-devices=2 /dev/sda1 /dev/sdc1 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md0 started.",
"mdadm --detail /dev/md0 /dev/md0: Version : 1.2 Creation Time : Thu Oct 13 15:17:39 2022 Raid Level : raid0 Array Size : 18649600 (17.79 GiB 19.10 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Thu Oct 13 15:17:39 2022 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 [...]",
"mdadm --examine /dev/sda1 /dev/sdc1 /dev/sda1: Magic : a92b4efc Version : 1.2 Feature Map : 0x1000 Array UUID : 77ddfb0a:41529b0e:f2c5cde1:1d72ce2c Name : 0 Creation Time : Thu Oct 13 15:17:39 2022 Raid Level : raid0 Raid Devices : 2 [...]",
"mkfs -t xfs /dev/md0",
"mkdir /mnt/raid1 mount /dev/md0 /mnt/raid1",
"/dev/md0 /mnt/raid1 xfs defaults 0 0",
"--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'mdadm --detail /dev/md/data'",
"mdadm --grow --size=max /dev/md0",
"xfs_growfs /mnt/",
"mdadm --grow --size= 524228 /dev/md0",
"dd if= /dev/sda1 of= /dev/sdb1",
"parted /dev/sda set 1 prep on parted /dev/sda set 1 boot on parted /dev/sdb set 1 prep on parted /dev/sdb set 1 boot on",
"mdadm --detail --scan >> /etc/mdadm.conf",
"MAILADDR [email protected]",
"journalctl -k -f",
"md/raid:md0: Disk failure on sdd, disabling device. md/raid:md0: Operation continuing on 3 devices.",
"mdadm --manage /dev/md0 --fail /dev/sdd",
"mdadm --detail /dev/md0",
"Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc - 0 0 2 removed 3 8 64 3 active sync /dev/sde 2 8 48 - faulty /dev/sdd",
"mdadm --manage /dev/md0 --remove /dev/sdd",
"mdadm --manage /dev/md0 --add /dev/sdf",
"mdadm --detail /dev/md0",
"Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 4 8 80 2 spare rebuilding /dev/sdf 3 8 64 3 active sync /dev/sde",
"echo check > /sys/block/ md0 /md/sync_action",
"echo repair > /sys/block/ md0 /md/sync_action",
"cat /sys/block/ md0 /md/sync_action repair cat /proc/mdstat Personalities : [raid0] [raid6] [raid5] [raid4] [raid1] md0 : active raid1 sdg[1] dm-3[0] 511040 blocks super 1.2 [2/2] [UU] unused devices: <none>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices
|