title | content | commands | url
---|---|---|---|
15.12. Managing Deleted Entries with Replication | 15.12. Managing Deleted Entries with Replication When an entry is deleted, it is not immediately removed from the database. Rather, it is converted into a tombstone entry, a kind of backup entry that is used by servers in replication to resolve specific conflicts (orphaned entries). The tombstone entry is the original entry with a modified DN and an added nsTombstone object class, but its attributes are removed from the index. Tombstones are not preserved indefinitely. A purge job runs periodically, at a specified interval (set in the nsDS5ReplicaTombstonePurgeInterval attribute); the purge removes old tombstone entries. Tombstone entries are kept for a given amount of time (set in the nsDS5ReplicaPurgeDelay attribute); once a tombstone entry is older than the delay period, it is reaped by the next purge job. Both the purge delay and the purge interval are set on the replica entry in the cn=replica,cn= replicated suffix ,cn=mapping tree,cn=config configuration entry. There are two considerations when defining the purge settings for replication: The purge operation is time-consuming, especially if the server handles a lot of delete operations. Do not set the purge interval too low, or it could consume too many server resources and affect performance. Suppliers use change information, including tombstone entries, to prime replication after initialization. There should be enough of a backlog of changes to effectively re-initialize consumers and to resolve replication conflicts. Do not set the purge delay (the age of tombstone entries) too low, or you could lose information required to resolve replication conflicts. Set the purge delay so that it is slightly longer than the longest replication schedule in the replication topology. For example, if the longest replication interval is 24 hours, keep tombstone entries around for 25 hours. This ensures that there is enough change history to initialize consumers and prevent the data stored in different suppliers from diverging. When you use the dsconf replication set command, the --repl-tombstone-purge-interval= seconds option sets the nsDS5ReplicaTombstonePurgeInterval attribute and the --repl-purge-delay= seconds option sets the nsDS5ReplicaPurgeDelay attribute. For example, to set the tombstone purge interval to 43200 seconds (12 hours) and the replica purge delay to 90000 seconds (25 hours): Note To clean up the tombstone entries and the state information immediately, set the nsDS5ReplicaTombstonePurgeInterval and nsDS5ReplicaPurgeDelay attributes to very small values. Both attributes take values in seconds, so the purge operations can be initiated almost immediately. Warning Always use the purge intervals to clean out tombstone entries from the database. Never delete tombstone entries manually. | [
"dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com replication set --repl-tombstone-purge-interval=43200 --repl-purge-delay=90000"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/tombstones |
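A quick way to review the current purge settings before changing them is to read them from the replica configuration entry. The following ldapsearch invocation is a minimal sketch that assumes the supplier host and bind DN used in the example above; adjust the connection details for your deployment.
Example command to display the current purge settings
ldapsearch -D "cn=Directory Manager" -W -H ldap://supplier.example.com -b "cn=mapping tree,cn=config" "(cn=replica)" nsDS5ReplicaPurgeDelay nsDS5ReplicaTombstonePurgeInterval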
Chapter 2. Third-party plugins in Red Hat Developer Hub | Chapter 2. Third-party plugins in Red Hat Developer Hub You can integrate third-party dynamic plugins into Red Hat Developer Hub to enhance its functionality without modifying its source code or rebuilding it. To add these plugins, export them as derived packages. While exporting the plugin package, you must ensure that dependencies are correctly bundled or marked as shared, depending on their relationship to the Developer Hub environment. To integrate a third-party plugin into Developer Hub: First, obtain the plugin's source code. Export the plugin as a dynamic plugin package. See Section 2.1, "Exporting third-party plugins in Red Hat Developer Hub" . Package and publish the dynamic plugin. See Section 2.2, "Packaging and publishing third-party plugins as dynamic plugins" . Install the plugin in the Developer Hub environment. See Section 2.3, "Installing third-party plugins in Red Hat Developer Hub" . 2.1. Exporting third-party plugins in Red Hat Developer Hub To use plugins in Red Hat Developer Hub, you can export plugins as derived dynamic plugin packages. These packages contain the plugin code and dependencies, ready for dynamic plugin integration into Developer Hub. Prerequisites The @janus-idp/cli package is installed. Use the latest version ( @latest tag) for compatibility with the most recent features and fixes. Node.js and NPM is installed and configured. The third-party plugin is compatible with your Red Hat Developer Hub version. For more information, see Version compatibility matrix . The third-party plugin must have a valid package.json file in its root directory, containing all required metadata and dependencies. Backend plugins To ensure compatibility with the dynamic plugin support and enable their use as dynamic plugins, existing backend plugins must be compatible with the new Backstage backend system. Additionally, these plugins must be rebuilt using a dedicated CLI command. The new Backstage backend system entry point (created using createBackendPlugin() or createBackendModule() ) must be exported as the default export from either the main package or an alpha package (if the plugin instance support is still provided using alpha APIs). This doesn't add any additional requirement on top of the standard plugin development guidelines of the plugin instance. The dynamic export mechanism identifies private dependencies and sets the bundleDependencies field in the package.json file. This export mechanism ensures that the dynamic plugin package is published as a self-contained package, with its private dependencies bundled in a private node_modules folder. Certain plugin dependencies require specific handling in the derived packages, such as: Shared dependencies are provided by the RHDH application and listed as peerDependencies in package.json file, not bundled in the dynamic plugin package. For example, by default, all @backstage scoped packages are shared. You can use the --shared-package flag to specify shared dependencies, that are expected to be provided by Red Hat Developer Hub application and not bundled in the dynamic plugin package. To treat a @backstage package as private, use the negation prefix ( ! ). For example, when a plugin depends on the package in @backstage that is not provided by the Red Hat Developer Hub application. Embedded dependencies are bundled into the dynamic plugin package with their dependencies hoisted to the top level. By default, packages with -node or -common suffixes are embedded. 
You can use the --embed-package flag to specify additional embedded packages. For example, packages from the same workspace that do not follow the default naming convention. The following is an example of exporting a dynamic plugin with shared and embedded packages: Example dynamic plugin export with shared and embedded packages npx @janus-idp/cli@latest export-dynamic-plugin --shared-package '!/@backstage/plugin-notifications/' --embed-package @backstage/plugin-notifications-backend In the example: @backstage/plugin-notifications package is treated as a private dependency and is bundled in the dynamic plugin package, despite being in the @backstage scope. @backstage/plugin-notifications-backend package is marked as an embedded dependency and is bundled in the dynamic plugin package. Front-end plugins Front-end plugins can use scalprum for configuration, which the CLI can generate automatically during the export process. The generated default configuration is logged when running the following command: Example command to log the default configuration npx @janus-idp/cli@latest export-dynamic The following is an example of default scalprum configuration: Default scalprum configuration "scalprum": { "name": "<package_name>", // The Webpack container name matches the NPM package name, with "@" replaced by "." and "/" removed. "exposedModules": { "PluginRoot": "./src/index.ts" // The default module name is "PluginRoot" and doesn't need explicit specification in the app-config.yaml file. } } You can add a scalprum section to the package.json file. For example: Example scalprum customization "scalprum": { "name": "custom-package-name", "exposedModules": { "FooModuleName": "./src/foo.ts", "BarModuleName": "./src/bar.ts" // Define multiple modules here, with each exposed as a separate entry point in the Webpack container. } } Dynamic plugins might need adjustments for Developer Hub needs, such as static JSX for mountpoints or dynamic routes. These changes are optional but might be incompatible with static plugins. To include static JSX, define an additional export and use it as the dynamic plugin's importName . For example: Example static and dynamic plugin export // For a static plugin export const EntityTechdocsContent = () => {...} // For a dynamic plugin export const DynamicEntityTechdocsContent = { element: EntityTechdocsContent, staticJSXContent: ( <TechDocsAddons> <ReportIssue /> </TechDocsAddons> ), }; Procedure Use the package export-dynamic-plugin command from the @janus-idp/cli package to export the plugin: Example command to export a third-party plugin npx @janus-idp/cli@latest package export-dynamic-plugin Ensure that you execute the command in the root directory of the plugin's JavaScript package (containing package.json file). The resulting derived package will be located in the dist-dynamic subfolder. The exported package name consists of the original plugin name with -dynamic appended. Warning The derived dynamic plugin JavaScript packages must not be published to the public NPM registry. For more appropriate packaging options, see Section 2.2, "Packaging and publishing third-party plugins as dynamic plugins" . If you must publish to the NPM registry, use a private registry. 2.2. 
Packaging and publishing third-party plugins as dynamic plugins After exporting a third-party plugin , you can package the derived package into one of the following supported formats: Open Container Initiative (OCI) image (recommended) TGZ file JavaScript package Important Exported dynamic plugin packages must only be published to private NPM registries. 2.2.1. Creating an OCI image with dynamic packages Prerequisites You have installed podman or docker . You have exported a third-party dynamic plugin package. For more information, see Section 2.1, "Exporting third-party plugins in Red Hat Developer Hub" . Procedure Navigate to the plugin's root directory (not the dist-dynamic directory). Run the following command to package the plugin into an OCI image: Example command to package an exported third-party plugin npx @janus-idp/cli@latest package package-dynamic-plugins --tag quay.io/example/image:v0.0.1 In the command, the --tag argument specifies the image name and tag. Run one of the following commands to push the image to a registry: Example command to push an image to a registry using podman podman push quay.io/example/image:v0.0.1 Example command to push an image to a registry using docker docker push quay.io/example/image:v0.0.1 The output of the package-dynamic-plugins command provides the plugin's path for use in the dynamic-plugin-config.yaml file. 2.2.2. Creating a TGZ file with dynamic packages Prerequisites You have exported a third-party dynamic plugin package. For more information, see Section 2.1, "Exporting third-party plugins in Red Hat Developer Hub" . Procedure Navigate to the dist-dynamic directory. Run the following command to create a tgz archive: Example command to create a tgz archive npm pack You can obtain the integrity hash from the output of the npm pack command by using the --json flag as follows: Example command to obtain the integrity hash of a tgz archive npm pack --json | head -n 10 Host the archive on a web server accessible to your RHDH instance, and reference its URL in the dynamic-plugin-config.yaml file as follows: Example dynamic-plugin-config.yaml file plugins: - package: https://example.com/backstage-plugin-myplugin-1.0.0.tgz integrity: sha512-<hash> Run the following command to package the plugins: Example command to package a dynamic plugin npm pack --pack-destination ~/test/dynamic-plugins-root/ Tip To create a plugin registry using HTTP server on OpenShift Container Platform, run the following commands: Example commands to build and deploy an HTTP server in OpenShift Container Platform oc project my-rhdh-project oc new-build httpd --name=plugin-registry --binary oc start-build plugin-registry --from-dir=dynamic-plugins-root --wait oc new-app --image-stream=plugin-registry Configure your RHDH to use plugins from the HTTP server by editing the dynamic-plugin-config.yaml file: Example configuration to use packaged plugins in RHDH plugins: - package: http://plugin-registry:8080/backstage-plugin-myplugin-1.9.6.tgz 2.2.3. Creating a JavaScript package with dynamic packages Warning The derived dynamic plugin JavaScript packages must not be published to the public NPM registry. If you must publish to the NPM registry, use a private registry. Prerequisites You have exported a third-party dynamic plugin package. For more information, see Section 2.1, "Exporting third-party plugins in Red Hat Developer Hub" . Procedure Navigate to the dist-dynamic directory. 
Run the following command to publish the package to your private NPM registry: Example command to publish a plugin package to an NPM registry npm publish --registry <npm_registry_url> Tip You can add the following to your package.json file before running the export command: Example package.json file { "publishConfig": { "registry": "<npm_registry_url>" } } If you modify publishConfig after exporting the dynamic plugin, re-run the export-dynamic-plugin command to ensure the correct configuration is included. 2.3. Installing third-party plugins in Red Hat Developer Hub You can install a third-party plugins in Red Hat Developer Hub without rebuilding the RHDH application. The location of the dynamic-plugin-config.yaml file depends on the deployment method. For more details, refer to Installing dynamic plugins with the Red Hat Developer Hub Operator and Installing dynamic plugins using the Helm chart . Plugins are defined in the plugins array within the dynamic-plugin-config.yaml file. Each plugin is represented as an object with the following properties: package : The plugin's package definition, which can be an OCI image, a TGZ file, a JavaScript package, or a directory path. disabled : A boolean value indicating whether the plugin is enabled or disabled. integrity : The integrity hash of the package, required for TGZ file and JavaScript packages. pluginConfig : The plugin's configuration. For backend plugins, this is optional; for frontend plugins, it is required. The pluginConfig is a fragment of the app-config.yaml file, and any added properties are merged with the RHDH app-config.yaml file. Note You can also load dynamic plugins from another directory, though this is intended for development or testing purposes and is not recommended for production, except for plugins included in the RHDH container image. For more information, see Chapter 3, Enabling plugins added in the RHDH container image . 2.3.1. Loading a plugin packaged as an OCI image Prerequisites The third-party plugin is packaged as a dynamic plugin in an OCI image. For more information about packaging a third-party plugin, see Section 2.2, "Packaging and publishing third-party plugins as dynamic plugins" . Procedure Define the plugin with the oci:// prefix in the following format in dynamic-plugins.yaml file: oci://<image-name>:<tag>!<plugin-name> Example configuration in dynamic-plugins.yaml file plugins: - disabled: false package: oci://quay.io/example/image:v0.0.1!backstage-plugin-myplugin Configure authentication for private registries by setting the REGISTRY_AUTH_FILE environment variable to the path of the registry configuration file. For example, ~/.config/containers/auth.json or ~/.docker/config.json . To perform an integrity check, use the image digest in place of the tag in the dynamic-plugins.yaml file as follows: Example configuration in dynamic-plugins.yaml file plugins: - disabled: false package: oci://quay.io/example/image@sha256:28036abec4dffc714394e4ee433f16a59493db8017795049c831be41c02eb5dc!backstage-plugin-myplugin To apply the changes, restart the RHDH application. 2.3.2. Loading a plugin packaged as a TGZ file Prerequisites The third-party plugin is packaged as a dynamic plugin in a TGZ file. For more information about packaging a third-party plugin, see Section 2.2, "Packaging and publishing third-party plugins as dynamic plugins" . 
Procedure Specify the archive URL and its integrity hash in the dynamic-plugins.yaml file using the following example: Example configuration in dynamic-plugins.yaml file plugins: - disabled: false package: https://example.com/backstage-plugin-myplugin-1.0.0.tgz integrity: sha512-9WlbgEdadJNeQxdn1973r5E4kNFvnT9GjLD627GWgrhCaxjCmxqdNW08cj+Bf47mwAtZMt1Ttyo+ZhDRDj9PoA== To apply the changes, restart the RHDH application. 2.3.3. Loading a plugin packaged as a JavaScript package Prerequisites The third-party plugin is packaged as a dynamic plugin in a JavaScript package. For more information about packaging a third-party plugin, see Section 2.2, "Packaging and publishing third-party plugins as dynamic plugins" . Procedure Run the following command to obtain the integrity hash from the NPM registry: npm view --registry <registry-url> <npm package>@<version> dist.integrity Specify the package name, version, and its integrity hash in the dynamic-plugins.yaml file as follows: Example configuration in dynamic-plugins.yaml file plugins: - disabled: false package: @example/[email protected] integrity: sha512-9WlbgEdadJNeQxdn1973r5E4kNFvnT9GjLD627GWgrhCaxjCmxqdNW08cj+Bf47mwAtZMt1Ttyo+ZhDRDj9PoA== If you are using a custom NPM registry, create a .npmrc file with the registry URL and authentication details: Example code for .npmrc file registry=<registry-url> //<registry-url>:_authToken=<auth-token> When using OpenShift Container Platform or Kubernetes: Use the Helm chart to add the .npmrc file by creating a secret. For example: Example secret configuration apiVersion: v1 kind: Secret metadata: name: <release_name> -dynamic-plugins-npmrc 1 type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token> 1 Replace <release_name> with your Helm release name. This name is a unique identifier for each chart installation in the Kubernetes cluster. For RHDH Helm chart, name the secret using the following format for automatic mounting: <release_name> -dynamic-plugins-npmrc To apply the changes, restart the RHDH application. 2.3.4. Example of installing a third-party plugin in Red Hat Developer Hub This section describes the process for integrating the Todo plugin into your Developer Hub. 
Obtain the third-party plugin source code : Clone the plugins repository and navigate to the Todo plugin directory: Obtain the third-party plugin source code $ git clone https://github.com/backstage/community-plugins $ cd community-plugins/workspaces/todo $ yarn install Export backend and front-end plugins : Run the following commands to build the backend plugin, adjust package dependencies for dynamic loading, and generate self-contained configuration schema: Export the backend plugin $ cd todo-backend $ npx @janus-idp/cli@latest package export-dynamic-plugin Output of exporting the backend plugin commands Building main package executing yarn build Packing main package to dist-dynamic/package.json Customizing main package in dist-dynamic/package.json for dynamic loading moving @backstage/backend-common to peerDependencies moving @backstage/backend-openapi-utils to peerDependencies moving @backstage/backend-plugin-api to peerDependencies moving @backstage/catalog-client to peerDependencies moving @backstage/catalog-model to peerDependencies moving @backstage/config to peerDependencies moving @backstage/errors to peerDependencies moving @backstage/integration to peerDependencies moving @backstage/plugin-catalog-node to peerDependencies Installing private dependencies of the main package executing yarn install --no-immutable Validating private dependencies Validating plugin entry points Saving self-contained config schema in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo-backend/dist-dynamic/dist/configSchema.json You can run the following commands to set default dynamic UI configurations, create front-end plugin assets, and to generate a configuration schema for a front-end plugin: Export the front-end plugin $ cd ../todo $ npx @janus-idp/cli@latest package export-dynamic-plugin Output of exporting the front-end plugin commands No scalprum config. Using default dynamic UI configuration: { "name": "backstage-community.plugin-todo", "exposedModules": { "PluginRoot": "./src/index.ts" } } If you wish to change the defaults, add "scalprum" configuration to plugin "package.json" file, or use the '--scalprum-config' option to specify an external config. Packing main package to dist-dynamic/package.json Customizing main package in dist-dynamic/package.json for dynamic loading Generating dynamic frontend plugin assets in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo/dist-dynamic/dist-scalprum 263.46 kB dist-scalprum/static/1417.d5271413.chunk.js ... ... ... 250 B dist-scalprum/static/react-syntax-highlighter_languages_highlight_plaintext.0b7d6592.chunk.js Saving self-contained config schema in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo/dist-dynamic/dist-scalprum/configSchema.json Package and publish a third-party plugin : Run the following commands to navigate to the workspace directory and package the dynamic plugin to build the OCI image: Build an OCI image $ cd ../..
$ npx @janus-idp/cli@latest package package-dynamic-plugins --tag quay.io/user/backstage-community-plugin-todo:v0.1.1 Output of building an OCI image commands executing podman --version Using existing 'dist-dynamic' directory at plugins/todo Using existing 'dist-dynamic' directory at plugins/todo-backend Copying 'plugins/todo/dist-dynamic' to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/backstage-community-plugin-todo No plugin configuration found at undefined create this file as needed if this plugin requires configuration Copying 'plugins/todo-backend/dist-dynamic' to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/backstage-community-plugin-todo-backend-dynamic No plugin configuration found at undefined create this file as needed if this plugin requires configuration Writing plugin registry metadata to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/index.json' Creating image using podman executing echo "from scratch COPY . . " | podman build --annotation com.redhat.rhdh.plugins='[{"backstage-community-plugin-todo":{"name":"@backstage-community/plugin-todo","version":"0.2.40","description":"A Backstage plugin that lets you browse TODO comments in your source code","backstage":{"role":"frontend-plugin","pluginId":"todo","pluginPackages":["@backstage-community/plugin-todo","@backstage-community/plugin-todo-backend"]},"homepage":"https://backstage.io","repository":{"type":"git","url":"https://github.com/backstage/community-plugins","directory":"workspaces/todo/plugins/todo"},"license":"Apache-2.0"}},{"backstage-community-plugin-todo-backend-dynamic":{"name":"@backstage-community/plugin-todo-backend","version":"0.3.19","description":"A Backstage backend plugin that lets you browse TODO comments in your source code","backstage":{"role":"backend-plugin","pluginId":"todo","pluginPackages":["@backstage-community/plugin-todo","@backstage-community/plugin-todo-backend"]},"homepage":"https://backstage.io","repository":{"type":"git","url":"https://github.com/backstage/community-plugins","directory":"workspaces/todo/plugins/todo-backend"},"license":"Apache-2.0"}}]' -t 'quay.io/user/backstage-community-plugin-todo:v0.1.1' -f - .
Successfully built image quay.io/user/backstage-community-plugin-todo:v0.1.1 with following plugins: backstage-community-plugin-todo backstage-community-plugin-todo-backend-dynamic Here is an example dynamic-plugins.yaml for these plugins: plugins: - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo disabled: false - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo-backend-dynamic disabled: false Push the OCI image to a container registry: $ podman push quay.io/user/backstage-community-plugin-todo:v0.1.1 Output of pushing the OCI image command Getting image source signatures Copying blob sha256:86a372c456ae6a7a305cd464d194aaf03660932efd53691998ab3403f87cacb5 Copying config sha256:3b7f074856ecfbba95a77fa87cfad341e8a30c7069447de8144aea0edfcb603e Writing manifest to image destination Install and configure the third-party plugin : Add the following plugin definitions to your dynamic-plugins.yaml file: Plugin definitions in dynamic-plugins.yaml file packages: - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo pluginConfig: dynamicPlugins: frontend: backstage-community.plugin-todo: mountPoints: - mountPoint: entity.page.todo/cards importName: EntityTodoContent entityTabs: - path: /todo title: Todo mountPoint: entity.page.todo - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo-backend-dynamic disabled: false | [
"npx @janus-idp/cli@latest export-dynamic-plugin --shared-package '!/@backstage/plugin-notifications/' --embed-package @backstage/plugin-notifications-backend",
"npx @janus-idp/cli@latest export-dynamic",
"\"scalprum\": { \"name\": \"<package_name>\", // The Webpack container name matches the NPM package name, with \"@\" replaced by \".\" and \"/\" removed. \"exposedModules\": { \"PluginRoot\": \"./src/index.ts\" // The default module name is \"PluginRoot\" and doesn't need explicit specification in the app-config.yaml file. } }",
"\"scalprum\": { \"name\": \"custom-package-name\", \"exposedModules\": { \"FooModuleName\": \"./src/foo.ts\", \"BarModuleName\": \"./src/bar.ts\" // Define multiple modules here, with each exposed as a separate entry point in the Webpack container. } }",
"// For a static plugin export const EntityTechdocsContent = () => {...} // For a dynamic plugin export const DynamicEntityTechdocsContent = { element: EntityTechdocsContent, staticJSXContent: ( <TechDocsAddons> <ReportIssue /> </TechDocsAddons> ), };",
"npx @janus-idp/cli@latest package export-dynamic-plugin",
"npx @janus-idp/cli@latest package package-dynamic-plugins --tag quay.io/example/image:v0.0.1",
"push quay.io/example/image:v0.0.1",
"docker push quay.io/example/image:v0.0.1",
"npm pack",
"npm pack --json | head -n 10",
"plugins: - package: https://example.com/backstage-plugin-myplugin-1.0.0.tgz integrity: sha512-<hash>",
"npm pack --pack-destination ~/test/dynamic-plugins-root/",
"project my-rhdh-project new-build httpd --name=plugin-registry --binary start-build plugin-registry --from-dir=dynamic-plugins-root --wait new-app --image-stream=plugin-registry",
"plugins: - package: http://plugin-registry:8080/backstage-plugin-myplugin-1.9.6.tgz",
"npm publish --registry <npm_registry_url>",
"{ \"publishConfig\": { \"registry\": \"<npm_registry_url>\" } }",
"plugins: - disabled: false package: oci://quay.io/example/image:v0.0.1!backstage-plugin-myplugin",
"plugins: - disabled: false package: oci://quay.io/example/image@sha256:28036abec4dffc714394e4ee433f16a59493db8017795049c831be41c02eb5dc!backstage-plugin-myplugin",
"plugins: - disabled: false package: https://example.com/backstage-plugin-myplugin-1.0.0.tgz integrity: sha512-9WlbgEdadJNeQxdn1973r5E4kNFvnT9GjLD627GWgrhCaxjCmxqdNW08cj+Bf47mwAtZMt1Ttyo+ZhDRDj9PoA==",
"npm view --registry <registry-url> <npm package>@<version> dist.integrity",
"plugins: - disabled: false package: @example/[email protected] integrity: sha512-9WlbgEdadJNeQxdn1973r5E4kNFvnT9GjLD627GWgrhCaxjCmxqdNW08cj+Bf47mwAtZMt1Ttyo+ZhDRDj9PoA==",
"registry=<registry-url> //<registry-url>:_authToken=<auth-token>",
"apiVersion: v1 kind: Secret metadata: name: <release_name> -dynamic-plugins-npmrc 1 type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token>",
"git clone https://github.com/backstage/community-plugins cd community-plugins/workspaces/todo yarn install",
"cd todo-backend npx @janus-idp/cli@latest package export-dynamic-plugin",
"Building main package executing yarn build β Packing main package to dist-dynamic/package.json Customizing main package in dist-dynamic/package.json for dynamic loading moving @backstage/backend-common to peerDependencies moving @backstage/backend-openapi-utils to peerDependencies moving @backstage/backend-plugin-api to peerDependencies moving @backstage/catalog-client to peerDependencies moving @backstage/catalog-model to peerDependencies moving @backstage/config to peerDependencies moving @backstage/errors to peerDependencies moving @backstage/integration to peerDependencies moving @backstage/plugin-catalog-node to peerDependencies Installing private dependencies of the main package executing yarn install --no-immutable β Validating private dependencies Validating plugin entry points Saving self-contained config schema in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo-backend/dist-dynamic/dist/configSchema.json",
"cd ../todo npx @janus-idp/cli@latest package export-dynamic-plugin",
"No scalprum config. Using default dynamic UI configuration: { \"name\": \"backstage-community.plugin-todo\", \"exposedModules\": { \"PluginRoot\": \"./src/index.ts\" } } If you wish to change the defaults, add \"scalprum\" configuration to plugin \"package.json\" file, or use the '--scalprum-config' option to specify an external config. Packing main package to dist-dynamic/package.json Customizing main package in dist-dynamic/package.json for dynamic loading Generating dynamic frontend plugin assets in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo/dist-dynamic/dist-scalprum 263.46 kB dist-scalprum/static/1417.d5271413.chunk.js 250 B dist-scalprum/static/react-syntax-highlighter_languages_highlight_plaintext.0b7d6592.chunk.js Saving self-contained config schema in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo/dist-dynamic/dist-scalprum/configSchema.json",
"cd ../.. npx @janus-idp/cli@latest package package-dynamic-plugins --tag quay.io/user/backstage-community-plugin-todo:v0.1.1",
"executing podman --version β Using existing 'dist-dynamic' directory at plugins/todo Using existing 'dist-dynamic' directory at plugins/todo-backend Copying 'plugins/todo/dist-dynamic' to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/backstage-community-plugin-todo No plugin configuration found at undefined create this file as needed if this plugin requires configuration Copying 'plugins/todo-backend/dist-dynamic' to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/backstage-community-plugin-todo-backend-dynamic No plugin configuration found at undefined create this file as needed if this plugin requires configuration Writing plugin registry metadata to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/index.json' Creating image using podman executing echo \"from scratch COPY . . \" | podman build --annotation com.redhat.rhdh.plugins='[{\"backstage-community-plugin-todo\":{\"name\":\"@backstage-community/plugin-todo\",\"version\":\"0.2.40\",\"description\":\"A Backstage plugin that lets you browse TODO comments in your source code\",\"backstage\":{\"role\":\"frontend-plugin\",\"pluginId\":\"todo\",\"pluginPackages\":[\"@backstage-community/plugin-todo\",\"@backstage-community/plugin-todo-backend\"]},\"homepage\":\"https://backstage.io\",\"repository\":{\"type\":\"git\",\"url\":\"https://github.com/backstage/community-plugins\",\"directory\":\"workspaces/todo/plugins/todo\"},\"license\":\"Apache-2.0\"}},{\"backstage-community-plugin-todo-backend-dynamic\":{\"name\":\"@backstage-community/plugin-todo-backend\",\"version\":\"0.3.19\",\"description\":\"A Backstage backend plugin that lets you browse TODO comments in your source code\",\"backstage\":{\"role\":\"backend-plugin\",\"pluginId\":\"todo\",\"pluginPackages\":[\"@backstage-community/plugin-todo\",\"@backstage-community/plugin-todo-backend\"]},\"homepage\":\"https://backstage.io\",\"repository\":{\"type\":\"git\",\"url\":\"https://github.com/backstage/community-plugins\",\"directory\":\"workspaces/todo/plugins/todo-backend\"},\"license\":\"Apache-2.0\"}}]' -t 'quay.io/user/backstage-community-plugin-todo:v0.1.1' -f - . β Successfully built image quay.io/user/backstage-community-plugin-todo:v0.1.1 with following plugins: backstage-community-plugin-todo backstage-community-plugin-todo-backend-dynamic Here is an example dynamic-plugins.yaml for these plugins: plugins: - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo disabled: false - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo-backend-dynamic disabled: false",
"podman push quay.io/user/backstage-community-plugin-todo:v0.1.1",
"Getting image source signatures Copying blob sha256:86a372c456ae6a7a305cd464d194aaf03660932efd53691998ab3403f87cacb5 Copying config sha256:3b7f074856ecfbba95a77fa87cfad341e8a30c7069447de8144aea0edfcb603e Writing manifest to image destination",
"packages: - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo pluginConfig: dynamicPlugins: frontend: backstage-community.plugin-todo: mountPoints: - mountPoint: entity.page.todo/cards importName: EntityTodoContent entityTabs: - path: /todo title: Todo mountPoint: entity.page.todo - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo-backend-dynamic disabled: false"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_and_viewing_plugins_in_red_hat_developer_hub/assembly-third-party-plugins |
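If you host the TGZ archive yourself, you can also compute the sha512 integrity value for dynamic-plugin-config.yaml directly from the archive instead of reading it from npm pack --json. The following sketch assumes a Linux host with openssl and GNU base64 available and reuses the archive name from the examples above; it is an illustration rather than part of the product tooling.
Example command to compute the integrity value of a plugin archive
printf "sha512-%s\n" "$(openssl dgst -sha512 -binary backstage-plugin-myplugin-1.0.0.tgz | base64 -w0)"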
Chapter 3. Registering and connecting systems to Red Hat Insights to execute tasks | Chapter 3. Registering and connecting systems to Red Hat Insights to execute tasks To work with Red Hat Insights, you need to register systems to Insights and enable system communication with Insights. In addition to communicating with Insights, you need to enable and install dependencies on Satellite 6.11+, Remote Host Configuration (rhc), rhc-worker-playbook, and ansible, so that you can use the tasks service and other services in the Automation Toolkit. Additional resources: Red Hat Insights data and application security | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks_with_fedramp/register-connect-for-tasks_overview-tasks |
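On a Red Hat Enterprise Linux system, installing the dependencies and connecting to Insights typically involves the rhc client. The following commands are a hedged sketch: the package names (rhc, rhc-worker-playbook, ansible-core) and the rhc connect step reflect the common RHEL 8 and 9 pattern and can differ for Satellite-managed hosts, so follow the linked product documentation for your environment.
Example commands to install dependencies and connect a system
dnf install rhc rhc-worker-playbook ansible-core
rhc connect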
Chapter 8. KafkaListenerAuthenticationScramSha512 schema reference | Chapter 8. KafkaListenerAuthenticationScramSha512 schema reference Used in: GenericKafkaListener The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationScramSha512 type from KafkaListenerAuthenticationTls, KafkaListenerAuthenticationOAuth, and KafkaListenerAuthenticationCustom. It must have the value scram-sha-512 for the type KafkaListenerAuthenticationScramSha512. Property: type (string). Description: Must be scram-sha-512. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaListenerAuthenticationScramSha512-reference |
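For context, the following Kafka custom resource fragment shows where this authentication type appears inside a GenericKafkaListener definition. The listener name, port, and type are illustrative values, not requirements of the schema.
Example listener configuration that uses scram-sha-512 authentication
listeners:
  - name: scramtls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: scram-sha-512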
Chapter 19. Creating cluster resources that are active on multiple nodes (cloned resources) | Chapter 19. Creating cluster resources that are active on multiple nodes (cloned resources) You can clone a cluster resource so that the resource can be active on multiple nodes. For example, you can use cloned resources to configure multiple instances of an IP resource to distribute throughout a cluster for node balancing. You can clone any resource provided the resource agent supports it. A clone consists of one resource or one resource group. Note Only resources that can be active on multiple nodes at the same time are suitable for cloning. For example, a Filesystem resource mounting a non-clustered file system such as ext4 from a shared memory device should not be cloned. Since the ext4 partition is not cluster aware, this file system is not suitable for read/write operations occurring from multiple nodes at the same time. 19.1. Creating and removing a cloned resource You can create a resource and a clone of that resource at the same time. To create a resource and clone of the resource with the following single command. RHEL 8.4 and later: RHEL 8.3 and earlier: By default, the name of the clone will be resource_id -clone . As of RHEL 8.4, you can set a custom name for the clone by specifying a value for the clone_id option. You cannot create a resource group and a clone of that resource group in a single command. Alternately, you can create a clone of a previously-created resource or resource group with the following command. RHEL 8.4 and later: RHEL 8.3 and earlier: By default, the name of the clone will be resource_id -clone or group_name -clone . As of RHEL 8.4, you can set a custom name for the clone by specifying a value for the clone_id option. Note You need to configure resource configuration changes on one node only. Note When configuring constraints, always use the name of the group or clone. When you create a clone of a resource, by default the clone takes on the name of the resource with -clone appended to the name. The following command creates a resource of type apache named webfarm and a clone of that resource named webfarm-clone . Note When you create a resource or resource group clone that will be ordered after another clone, you should almost always set the interleave=true option. This ensures that copies of the dependent clone can stop or start when the clone it depends on has stopped or started on the same node. If you do not set this option, if a cloned resource B depends on a cloned resource A and a node leaves the cluster, when the node returns to the cluster and resource A starts on that node, then all of the copies of resource B on all of the nodes will restart. This is because when a dependent cloned resource does not have the interleave option set, all instances of that resource depend on any running instance of the resource it depends on. Use the following command to remove a clone of a resource or a resource group. This does not remove the resource or resource group itself. The following table describes the options you can specify for a cloned resource. Table 19.1. Resource Clone Options Field Description priority, target-role, is-managed Options inherited from resource that is being cloned, as described in the "Resource Meta Options" table in Configuring resource meta options . clone-max How many copies of the resource to start. Defaults to the number of nodes in the cluster. 
clone-node-max How many copies of the resource can be started on a single node; the default value is 1 . notify When stopping or starting a copy of the clone, tell all the other copies beforehand and when the action was successful. Allowed values: false , true . The default value is false . globally-unique Does each copy of the clone perform a different function? Allowed values: false , true If the value of this option is false , these resources behave identically everywhere they are running and thus there can be only one copy of the clone active per machine. If the value of this option is true , a copy of the clone running on one machine is not equivalent to another instance, whether that instance is running on another node or on the same node. The default value is true if the value of clone-node-max is greater than one; otherwise the default value is false . ordered Should the copies be started in series (instead of in parallel). Allowed values: false , true . The default value is false . interleave Changes the behavior of ordering constraints (between clones) so that copies of the first clone can start or stop as soon as the copy on the same node of the second clone has started or stopped (rather than waiting until every instance of the second clone has started or stopped). Allowed values: false , true . The default value is false . clone-min If a value is specified, any clones which are ordered after this clone will not be able to start until the specified number of instances of the original clone are running, even if the interleave option is set to true . To achieve a stable allocation pattern, clones are slightly sticky by default, which indicates that they have a slight preference for staying on the node where they are running. If no value for resource-stickiness is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving copies around the cluster. For information about setting the resource-stickiness resource meta-option, see Configuring resource meta options . 19.2. Configuring clone resource constraints In most cases, a clone will have a single copy on each active cluster node. You can, however, set clone-max for the resource clone to a value that is less than the total number of nodes in the cluster. If this is the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently to those for regular resources except that the clone's id must be used. The following command creates a location constraint for the cluster to preferentially assign resource clone webfarm-clone to node1 . Ordering constraints behave slightly differently for clones. In the example below, because the interleave clone option is left to default as false , no instance of webfarm-stats will start until all instances of webfarm-clone that need to be started have done so. Only if no copies of webfarm-clone can be started then webfarm-stats will be prevented from being active. Additionally, webfarm-clone will wait for webfarm-stats to be stopped before stopping itself. Colocation of a regular (or group) resource with a clone means that the resource can run on any machine with an active copy of the clone. The cluster will choose a copy based on where the clone is running and the resource's own location preferences. Colocation between clones is also possible. 
In such cases, the set of allowed locations for the clone is limited to nodes on which the clone is (or will be) active. Allocation is then performed as normally. The following command creates a colocation constraint to ensure that the resource webfarm-stats runs on the same node as an active copy of webfarm-clone . 19.3. Promotable clone resources Promotable clone resources are clone resources with the promotable meta attribute set to true . They allow the instances to be in one of two operating modes; these are called master and slave . The names of the modes do not have specific meanings, except for the limitation that when an instance is started, it must come up in the Slave state. 19.3.1. Creating a promotable clone resource You can create a resource as a promotable clone with the following single command. RHEL 8.4 and later: RHEL 8.3 and earlier: By default, the name of the promotable clone will be resource_id -clone . As of RHEL 8.4, you can set a custom name for the clone by specifying a value for the clone_id option. Alternately, you can create a promotable resource from a previously-created resource or resource group with the following command. RHEL 8.4 and later: RHEL 8.3 and earlier: By default, the name of the promotable clone will be resource_id -clone or group_name -clone . As of RHEL 8.4, you can set a custom name for the clone by specifying a value for the clone_id option. The following table describes the extra clone options you can specify for a promotable resource. Table 19.2. Extra Clone Options Available for Promotable Clones Field Description promoted-max How many copies of the resource can be promoted; default 1. promoted-node-max How many copies of the resource can be promoted on a single node; default 1. 19.3.2. Configuring promotable resource constraints In most cases, a promotable resource will have a single copy on each active cluster node. If this is not the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently than those for regular resources. You can create a colocation constraint which specifies whether the resources are operating in a master or slave role. The following command creates a resource colocation constraint. For information about colocation constraints, see Colocating cluster resources . When configuring an ordering constraint that includes promotable resources, one of the actions that you can specify for the resources is promote , indicating that the resource be promoted from slave role to master role. Additionally, you can specify an action of demote , indicated that the resource be demoted from master role to slave role. The command for configuring an order constraint is as follows. For information about resource order constraints, see Determining the order in which cluster resources are run . 19.4. Demoting a promoted resource on failure As of RHEL 8.3, you can configure a promotable resource so that when a promote or monitor action fails for that resource, or the partition in which the resource is running loses quorum, the resource will be demoted but will not be fully stopped. This can prevent the need for manual intervention in situations where fully stopping the resource would require it. To configure a promotable resource to be demoted when a promote action fails, set the on-fail operation meta option to demote , as in the following example. 
To configure a promotable resource to be demoted when a monitor action fails, set interval to a nonzero value, set the on-fail operation meta option to demote , and set role to Master , as in the following example. To configure a cluster so that when a cluster partition loses quorum any promoted resources will be demoted but left running and all other resources will be stopped, set the no-quorum-policy cluster property to demote Setting the on-fail meta-attribute to demote for an operation does not affect how promotion of a resource is determined. If the affected node still has the highest promotion score, it will be selected to be promoted again. | [
"pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] [meta resource meta options ] clone [ clone_id ] [ clone options ]",
"pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] [meta resource meta options ] clone [ clone options ]",
"pcs resource clone resource_id | group_id [ clone_id ][ clone options ]",
"pcs resource clone resource_id | group_id [ clone options ]",
"pcs resource create webfarm apache clone",
"pcs resource unclone resource_id | clone_id | group_name",
"pcs constraint location webfarm-clone prefers node1",
"pcs constraint order start webfarm-clone then webfarm-stats",
"pcs constraint colocation add webfarm-stats with webfarm-clone",
"pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] promotable [ clone_id ] [ clone options ]",
"pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] promotable [ clone options ]",
"pcs resource promotable resource_id [ clone_id ] [ clone options ]",
"pcs resource promotable resource_id [ clone options ]",
"pcs constraint colocation add [master|slave] source_resource with [master|slave] target_resource [ score ] [ options ]",
"pcs constraint order [ action ] resource_id then [ action ] resource_id [ options ]",
"pcs resource op add my-rsc promote on-fail=\"demote\"",
"pcs resource op add my-rsc monitor interval=\"10s\" on-fail=\"demote\" role=\"Master\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_creating-multinode-resources-configuring-and-managing-high-availability-clusters |
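The interleave=true option recommended above is an ordinary clone meta option. The following commands are a sketch that assumes you also clone the webfarm-stats resource from the ordering example so that it is ordered after webfarm-clone; either form sets the same option, at creation time or afterwards.
Example commands to set the interleave option on a clone
pcs resource clone webfarm-stats interleave=true
pcs resource meta webfarm-stats-clone interleave=true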
Chapter 2. Working with ML2/OVN | Chapter 2. Working with ML2/OVN Red Hat OpenStack Services on OpenShift (RHOSO) networks are managed by the Networking service (neutron). The core of the Networking service is the Modular Layer 2 (ML2) plug-in, and the default mechanism driver for RHOSO ML2 plug-in is the Open Virtual Networking (OVN) mechanism driver. 2.1. Open Virtual Network (OVN) Open Virtual Network (OVN), is a system to support logical network abstraction in virtual machine and container environments. OVN is used as the mechanism driver for the Red Hat OpenStack Services on OpenShift (RHOSO) Networking service (neutron). Sometimes called open source virtual networking for Open vSwitch (OVS), OVN complements the existing capabilities of OVS to add native support for logical network abstractions, such as logical L2 and L3 overlays, security groups and services such as DHCP. A physical network comprises physical wires, switches, and routers. A virtual network extends a physical network into a hypervisor or container platform, bridging VMs or containers into the physical network. An OVN logical network is a network implemented in software that is insulated from physical networks by tunnels or other encapsulations. This allows IP and other address spaces used in logical networks to overlap with those used on physical networks without causing conflicts. Logical network topologies can be arranged without regard for the topologies of the physical networks on which they run. Thus, VMs that are part of a logical network can migrate from one physical machine to another without network disruption. The encapsulation layer prevents VMs and containers connected to a logical network from communicating with nodes on physical networks. For clustering VMs and containers, this can be acceptable or even desirable, but in many cases VMs and containers do need connectivity to physical networks. OVN provides multiple forms of gateways for this purpose. An OVN deployment consists of several components: Cloud Management System (CMS) integrates OVN into a physical network by managing the OVN logical network elements and connecting the OVN logical network infrastructure to physical network elements. Some examples include OpenStack and OpenShift. OVN databases stores data representing the OVN logical and physical networks. Hypervisors run Open vSwitch and translate the OVN logical network into OpenFlow on a physical or virtual machine. Gateways extends a tunnel-based OVN logical network into a physical network by forwarding packets between tunnels and the physical network infrastructure. 2.2. List of components in the RHOSO OVN architecture Open Virtual Network (OVN) provides networking services for Red Hat OpenStack Services on OpenShift (RHOSO) environments. As illustrated in Figure 2.1, the OVN architecture consists of the following components and services: Networking service This service runs the OpenStack Networking API server, which provides the API for end-users and services to interact with OpenStack Networking. This server also integrates with the underlying database to store and retrieve project network, router, and load balancer details, among others. Compute node This node hosts the hypervisor that runs the virtual machines, also known as instances. A Compute node must be wired directly to the network in order to provide external connectivity for instances. 
ML2 plug-in with OVN mechanism driver The ML2 plug-in translates the OpenStack-specific networking configuration into the platform-neutral OVN logical networking configuration. It typically runs on the RHOSO control plane on OpenShift worker nodes. OVN northbound (NB) database ( ovn-nb ) This database stores the logical OVN networking configuration from the OVN ML2 plugin. It typically runs on the RHOSO control plane and listens on TCP port 6641 . The northbound database ( OVN_Northbound ) serves as the interface between OVN and a cloud management system such as RHOSO. RHOSO produces the contents of the northbound database. The northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. Every RHOSO Networking service (neutron) object is represented in a table in the northbound database. OVN northbound service ( ovn-northd ) This service converts the logical networking configuration from the OVN NB database to the logical data path flows and populates these on the OVN Southbound database. It typically runs on the RHOSO control plane. OVN southbound (SB) database ( ovn-sb ) This database stores the converted logical data path flows. It typically runs on the RHOSO control plane and listens on TCP port 6642 . The southbound database ( OVN_Southbound ) holds the logical and physical configuration state for OVN system to support virtual network abstraction. The ovn-controller uses the information in this database to configure OVS to satisfy Networking service (neutron) requirements. Note The schema file for the NB database is located in /usr/share/ovn/ovn-nb.ovsschema , and the SB database schema file is in /usr/share/ovn/ovn-sb.ovsschema . OVS database server (OVSDB) Hosts the OVN Northbound and Southbound databases. Also interacts with ovs-vswitchd to host the OVS database conf.db . OVN controller ( ovn-controller ) This controller connects to the OVN SB database and acts as the open vSwitch controller to control and monitor network traffic. It runs on all Compute and gateway nodes. OVN metadata agent ( ovn-metadata-agent ) This agent creates the haproxy instances for managing the OVS interfaces, network namespaces and HAProxy processes used to proxy metadata API requests. The agent runs on all Compute and gateway nodes. The OVN Networking service creates a unique network namespace for each virtual network that enables the metadata service. Each network accessed by the instances on the Compute node has a corresponding metadata namespace (ovnmeta-<network_uuid>). OpenStack guest instances access the Networking metadata service available at the link-local IP address: 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the Compute metadata API exists. Each HAProxy is in a network namespace that is not able to reach the appropriate host network. HaProxy adds the necessary headers to the metadata API request and then forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket. Figure 2.1. OVN architecture in a RHOSO environment 2.3. Layer 3 high availability with OVN OVN supports Layer 3 high availability (L3 HA) without any special configuration in Red Hat OpenStack Services on OpenShift (RHOSO) environments, Note When you create a router, do not use --ha option because OVN routers are highly available by default. Openstack router create commands that include the --ha option fail. 
OVN automatically schedules the router port to all available gateway nodes that can act as an L3 gateway on the specified external network. OVN L3 HA uses the gateway_chassis column in the OVN Logical_Router_Port table. Most functionality is managed by OpenFlow rules with bundled active_passive outputs. The ovn-controller handles the Address Resolution Protocol (ARP) responder and router enablement and disablement. Gratuitous ARPs for FIPs and router external addresses are also periodically sent by the ovn-controller . Note L3HA uses OVN to balance the routers back to the original gateway nodes to avoid any nodes becoming a bottleneck. BFD monitoring OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the gateway nodes. This protocol is encapsulated on top of the GENEVE tunnels established from node to node. Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway nodes also monitor the compute nodes to let the gateways enable and disable routing of packets and ARP responses and announcements. Each compute node uses BFD to monitor each gateway node and automatically steers external traffic, such as source and destination Network Address Translation (SNAT and DNAT), through the active gateway node for a given router. Compute nodes do not need to monitor other compute nodes. Note External network failures are not detected as would happen with an ML2-OVS configuration. L3 HA for OVN supports the following failure modes: The gateway node becomes disconnected from the network (tunneling interface). ovs-vswitchd stops ( ovs-switchd is responsible for BFD signaling) ovn-controller stops ( ovn-controller removes itself as a registered node). Note This BFD monitoring mechanism only works for link failures, not for routing failures. 2.4. Active-active clustered database service model On Red Hat OpenStack Services on OpenShift (RHOSO) environments, OVN uses a clustered database service model that applies the Raft consensus algorithm to enhance performance of OVS database protocol traffic and provide faster, more reliable failover handling. A clustered database operates on a cluster of at least three database servers on different hosts. Servers use the Raft consensus algorithm to synchronize writes and share network traffic continuously across the cluster. The cluster elects one server as the leader. All servers in the cluster can handle database read operations, which mitigates potential bottlenecks on the control plane. Write operations are handled by the cluster leader. If a server fails, a new cluster leader is elected and the traffic is redistributed among the remaining operational servers. The clustered database service model handles failovers more efficiently than the pacemaker-based model did. This mitigates related downtime and complications that can occur with longer failover times. The leader election process requires a majority, so the fault tolerance capacity is limited by the highest odd number in the cluster. For example, a three-server cluster continues to operate if one server fails. A five-server cluster tolerates up to two failures. Increasing the number of servers to an even number does not increase fault tolerance. For example, a four-server cluster cannot tolerate more failures than a three-server cluster. Most RHOSO deployments use three servers. 
Clusters larger than five servers also work, with every two added servers allowing the cluster to tolerate an additional failure, but write performance decreases. 2.5. SR-IOV with ML2/OVN and native OVN DHCP You can deploy a custom node set to use SR-IOV in an ML2/OVN deployment with native OVN DHCP in Red Hat OpenStack Services on OpenShift (RHOSO) environments. Limitations The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this release. All external ports are scheduled on a single gateway node because there is only one HA Chassis Group for all of the ports. North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router's gateway ports. See https://bugs.launchpad.net/neutron/+bug/1875852 . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_networking_services/assembly_work-with-ovn_rhoso-cfgnet |
Chapter 3. Getting support | Chapter 3. Getting support Windows Container Support for Red Hat OpenShift is provided and available as an optional, installable component. Windows Container Support for Red Hat OpenShift is not part of the OpenShift Container Platform subscription. It requires an additional Red Hat subscription and is supported according to the Scope of coverage and Service level agreements . You must have this separate subscription to receive support for Windows Container Support for Red Hat OpenShift. Without this additional Red Hat subscription, deploying Windows container workloads in production clusters is not supported. You can request support through the Red Hat Customer Portal . For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy document for Red Hat OpenShift support for Windows Containers . If you do not have this additional Red Hat subscription, you can use the Community Windows Machine Config Operator, a distribution that lacks official support. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/windows_container_support_for_openshift/windows-containers-support |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/deploying_jboss_eap_on_amazon_web_services/making-open-source-more-inclusive |
Using Two-Factor Authentication | Using Two-Factor Authentication Red Hat Customer Portal 1 Using two-factor authentication to access your Red Hat user account Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/using_two-factor_authentication/index |
Chapter 11. Changing the MTU for the cluster network | Chapter 11. Changing the MTU for the cluster network As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change. You can change the MTU only for clusters using the OVN-Kubernetes or OpenShift SDN network plugins. 11.1. About the cluster MTU During installation the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You do not normally need to override the detected MTU. You might want to change the MTU of the cluster network for several reasons: The MTU detected during cluster installation is not correct for your infrastructure Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance You can change the cluster MTU for only the OVN-Kubernetes and OpenShift SDN cluster network plugins. 11.1.1. Service interruption considerations When you initiate an MTU change on your cluster the following effects might impact service availability: At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart. Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change. 11.1.2. MTU value selection When planning your MTU migration there are two related but distinct MTU values to consider. Hardware MTU : This MTU value is set based on the specifics of your network infrastructure. Cluster network MTU : This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin: OVN-Kubernetes : 100 bytes OpenShift SDN : 50 bytes If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . Important To avoid selecting an MTU value that is not acceptable by a node, verify the maximum MTU value ( maxmtu ) that is accepted by the network interface by using the ip -d link command. 11.1.3. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 11.1. Live migration of the cluster MTU User-initiated steps OpenShift Container Platform activity Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.machine.to spec.migration.mtu.network.from spec.migration.mtu.network.to Cluster Network Operator (CNO) : Confirms that each field is set to a valid value. The mtu.machine.to must be set to either the new hardware MTU or to the current hardware MTU if the MTU for the hardware is not changing. This value is transient and is used as part of the migration process. Separately, if you specify a hardware MTU that is different from your existing hardware MTU value, you must manually configure the MTU to persist by other means, such as with a machine config, DHCP setting, or a Linux kernel command line. 
The mtu.network.from field must equal the network.status.clusterNetworkMTU field, which is the current MTU of the cluster network. The mtu.network.to field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the network plugin. For OVN-Kubernetes, the overhead is 100 bytes and for OpenShift SDN the overhead is 50 bytes. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster. Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use a variety of methods to accomplish this, including: Deploying a new NetworkManager connection profile with the MTU change Changing the MTU through a DHCP server setting Changing the MTU through boot parameters N/A Set the mtu value in the CNO configuration for the network plugin and set spec.migration to null . Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster with the new MTU configuration. 11.2. Changing the cluster MTU As a cluster administrator, you can change the maximum transmission unit (MTU) for your cluster. The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update rolls out. The following procedure describes how to change the cluster MTU by using either machine configs, DHCP, or an ISO. If you use the DHCP or ISO approach, you must refer to configuration artifacts that you kept after installing your cluster to complete the procedure. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You identified the target MTU for your cluster. The correct MTU varies depending on the network plugin that your cluster uses: OVN-Kubernetes : The cluster MTU must be set to 100 less than the lowest hardware MTU value in your cluster. OpenShift SDN : The cluster MTU must be set to 50 less than the lowest hardware MTU value in your cluster. Procedure To increase or decrease the MTU for the cluster network complete the following procedure. To obtain the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Example output ... Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OpenShiftSDN Service Network: 10.217.4.0/23 ... Prepare your configuration for the hardware MTU: If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration: dhcp-option-force=26,<mtu> where: <mtu> Specifies the hardware MTU for the DHCP server to advertise. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for OpenShift Container Platform if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified. 
Find the primary network interface: If you are using the OpenShift SDN network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }' where: <node_name> Specifies the name of a node in your cluster. If you are using the OVN-Kubernetes network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 where: <node_name> Specifies the name of a node in your cluster. Create the following NetworkManager configuration in the <interface>-mtu.conf file: Example NetworkManager connection configuration [connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu> where: <mtu> Specifies the new hardware MTU value. <interface> Specifies the primary network interface name. Create two MachineConfig objects, one for the control plane nodes and another for the worker nodes in your cluster: Create the following Butane config in the control-plane-interface.bu file: variant: openshift version: 4.12.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create the following Butane config in the worker-interface.bu file: variant: openshift version: 4.12.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create MachineConfig objects from the Butane configs by running the following command: USD for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change. USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' where: <overlay_from> Specifies the current cluster network MTU value. <overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value for <machine_to> and for OVN-Kubernetes must be 100 less and for OpenShift SDN must be 50 less. <machine_to> Specifies the MTU for the primary network interface on the underlying host network. Example that increases the cluster MTU USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . 
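If you prefer to block until the rollout finishes instead of polling oc get mcp, a hedged alternative (assuming a reasonably recent oc client and the default machine config pool names; the timeout value is arbitrary) is:
oc wait mcp --all --for=condition=Updated=True --timeout=45m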
Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/mtu-migration.sh Update the underlying network interface MTU value: If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster. USD for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure. As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep path: where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. 
If the machine config is successfully deployed, the output contains the /etc/NetworkManager/conf.d/99-<interface>-mtu.conf file path and the ExecStart=/usr/local/bin/mtu-migration.sh line. To finalize the MTU migration, enter one of the following commands: If you are using the OVN-Kubernetes network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}' where: <mtu> Specifies the new cluster network MTU that you specified with <overlay_to> . If you are using the OpenShift SDN network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}' where: <mtu> Specifies the new cluster network MTU that you specified with <overlay_to> . After finalizing the MTU migration, each MCP node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Verification You can verify that a node in your cluster uses an MTU that you specified in the procedure. To get the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Get the current MTU for the primary network interface of a node. To list the nodes in your cluster, enter the following command: USD oc get nodes To obtain the current MTU setting for the primary network interface on a node, enter the following command: USD oc debug node/<node> -- chroot /host ip address show <interface> where: <node> Specifies a node from the output from the step. <interface> Specifies the primary network interface name for the node. Example output ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051 11.3. Additional resources Using advanced networking options for PXE and ISO installations Manually creating NetworkManager profiles in key file format Configuring a dynamic Ethernet connection using nmcli | [
"oc describe network.config cluster",
"Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OpenShiftSDN Service Network: 10.217.4.0/23",
"dhcp-option-force=26,<mtu>",
"oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }'",
"oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0",
"[connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu>",
"variant: openshift version: 4.12.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"variant: openshift version: 4.12.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/mtu-migration.sh",
"for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep path:",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'",
"oc get mcp",
"oc describe network.config cluster",
"oc get nodes",
"oc debug node/<node> -- chroot /host ip address show <interface>",
"ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/changing-cluster-network-mtu |
Chapter 16. Replacing storage devices 16.1. Replacing operational or failed storage devices on Red Hat OpenStack Platform installer-provisioned infrastructure Use this procedure to replace a storage device in OpenShift Data Foundation that is deployed on Red Hat OpenStack Platform. This procedure helps to create a new Persistent Volume Claim (PVC) on a new volume and remove the old object storage device (OSD). Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container Platform node on which the OSD is scheduled. Note If the OSD to be replaced is healthy, the status of the pod will be Running . Scale down the OSD deployment for the OSD to be replaced. where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Note If the rook-ceph-osd pod is in terminating state, use the force option to delete the pod. Example output: In case the persistent volume associated with the failed OSD fails, get the details of the failed persistent volumes and delete them using the following commands: Remove the old OSD from the cluster so that a new OSD can be added. Delete any old ocs-osd-removal jobs. Example output: Change to the openshift-storage project. Remove the old OSD from the cluster. You can add comma-separated OSD IDs in the command to remove more than one OSD. (For example, FAILED_OSD_IDS=0,1,2). The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of the ocs-osd-removal-job pod: For example: For each of the nodes identified in step #1, do the following: Create a debug pod and chroot to the host on the storage node. Find the relevant device name based on the PVC names identified in the previous step. Remove the mapped device. Note If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process that was stuck. Terminate the process using the kill command. Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Verification steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created which is in Bound state. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s). Log in to the OpenShift Web Console and view the storage dashboard. Figure 16.1. OSD status in OpenShift Container Platform storage dashboard after device replacement | [
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>",
"osd_id_to_remove=0 oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0",
"deployment.extensions/rook-ceph-osd-0 scaled",
"oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}",
"No resources found.",
"oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0",
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted",
"oc get pv oc delete pv <failed-pv-name>",
"oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}",
"job.batch \"ocs-osd-removal-0\" deleted",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'",
"2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"",
"oc debug node/<node name> chroot /host",
"sh-4.4# dmsetup ls| grep <pvc name> ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)",
"cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt",
"ps -ef | grep crypt",
"kill -9 <PID>",
"dmsetup ls",
"oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}",
"job.batch \"ocs-osd-removal-0\" deleted",
"oc get -n openshift-storage pods -l app=rook-ceph-osd",
"rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h",
"oc get -n openshift-storage pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-b44ebb5e-3c67-4000-998e-304752deb5a7 50Gi RWO ocs-storagecluster-ceph-rbd 6d ocs-deviceset-0-data-0-gwb5l Bound pvc-bea680cd-7278-463d-a4f6-3eb5d3d0defe 512Gi RWO standard 94s ocs-deviceset-1-data-0-w9pjm Bound pvc-01aded83-6ef1-42d1-a32e-6ca0964b96d4 512Gi RWO standard 6d ocs-deviceset-2-data-0-7bxcq Bound pvc-5d07cd6c-23cb-468c-89c1-72d07040e308 512Gi RWO standard 6d",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/_<OSD-pod-name>_",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/<node name> chroot /host",
"lsblk"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_devices |
6.2. Entitlement | 6.2. Entitlement subscription-manager component If multiple repositories are enabled, subscription-manager installs product certificates from all repositories instead of installing the product certificate only from the repository from which the RPM package was installed. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/entitlement |
Chapter 2. Architectures | Chapter 2. Architectures Red Hat Enterprise Linux 7.5 is distributed with the kernel version 3.10.0-862, which provides support for the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ and POWER8 (big endian) [2] IBM POWER8 (little endian) [3] IBM Z [4] Support for Architectures in the kernel-alt Packages Red Hat Enterprise Linux 7.5 is distributed with the kernel-alt packages, which include kernel version 4.14. This kernel version provides support for the following architectures: 64-bit ARM IBM POWER9 (little endian) [5] IBM Z The following table provides an overview of architectures supported by the two kernel versions available in Red Hat Enterprise Linux 7.5: Table 2.1. Architectures Supported in Red Hat Enterprise Linux 7.5 Architecture Kernel version 3.10 Kernel version 4.14 64-bit AMD and Intel yes no 64-bit ARM no yes IBM POWER7 (big endian) yes no IBM POWER8 (big endian) yes no IBM POWER8 (little endian) yes no IBM POWER9 (little endian) no yes IBM z System yes [a] yes (Structure A) [a] The 3.10 kernel version does not support KVM virtualization and containers on IBM Z. Both of these features are supported on the 4.14 kernel on IBM Z - this offerring is also referred to as Structure A. For more information, see Chapter 19, Red Hat Enterprise Linux 7.5 for ARM and Chapter 20, Red Hat Enterprise Linux 7.5 for IBM Power LE (POWER9) . [1] Note that the Red Hat Enterprise Linux 7.5 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.5 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7.5 POWER8 (big endian) are currently supported as KVM guests on Red Hat Enterprise Linux 7.5 POWER8 systems that run the KVM hypervisor, and on PowerVM. [3] Red Hat Enterprise Linux 7.5 POWER8 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.5 POWER8 systems that run the KVM hypervisor, and on PowerVM. In addition, Red Hat Enterprise Linux 7.5 POWER8 (little endian) guests are supported on Red Hat Enterprise Linux 7.5 POWER9 systems that run the KVM hypervisor in POWER8-compatibility mode on version 4.14 kernel using the kernel-alt package. [4] Red Hat Enterprise Linux 7.5 for IBM Z (both the 3.10 kernel version and the 4.14 kernel version) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.5 for IBM Z hosts that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package. [5] Red Hat Enterprise Linux 7.5 POWER9 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.5 POWER9 systems that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package, and on PowerVM. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/chap-Red_Hat_Enterprise_Linux-7.5_Release_Notes-Architectures |
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_from_camel_k_to_red_hat_build_of_apache_camel_for_quarkus/pr01 |
1.4. DM Multipath Components | 1.4. DM Multipath Components Table 1.1, "DM Multipath Components" . describes the components of DM Multipath. Table 1.1. DM Multipath Components Component Description dm_multipath kernel module Reroutes I/O and supports failover for paths and path groups. mpathconf utility Configures and enables device mapper multipathing. multipath command Lists and configures multipath devices. Normally started with /etc/rc.sysinit , it can also be started by a udev program whenever a block device is added. multipathd daemon Monitors paths; as paths fail and come back, it may initiate path group switches. Allows interactive changes to multipath devices. The daemon must be restarted following any changes to the /etc/multipath.conf file. kpartx command Creates device mapper devices for the partitions on a device. It is necessary to use this command for DOS-based partitions with DM Multipath. The kpartx command is provided in its own package, but the device-mapper-multipath package depends on it. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/mpio_components |
Chapter 3. Binding [v1] | Chapter 3. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead. Type object Required target 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata target object ObjectReference contains enough information to let you inspect or modify the referred object. 3.1.1. .target Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/bindings POST : create a Binding /api/v1/namespaces/{namespace}/pods/{name}/binding POST : create binding of a Pod 3.2.1. /api/v1/namespaces/{namespace}/bindings Table 3.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a Binding Table 3.2. Body parameters Parameter Type Description body Binding schema Table 3.3. HTTP responses HTTP code Reponse body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty 3.2.2. /api/v1/namespaces/{namespace}/pods/{name}/binding Table 3.4. Global path parameters Parameter Type Description name string name of the Binding Table 3.5. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create binding of a Pod Table 3.6. Body parameters Parameter Type Description body Binding schema Table 3.7. HTTP responses HTTP code Reponse body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/metadata_apis/binding-v1 |
Chapter 26. Configuring a static route Routing ensures that you can send and receive traffic between mutually-connected networks. In larger environments, administrators typically configure services so that routers can dynamically learn about other routers. In smaller environments, administrators often configure static routes to ensure that traffic can reach from one network to the next. You need static routes to achieve functioning communication among multiple networks if all of these conditions apply: The traffic has to pass multiple networks. The exclusive traffic flow through the default gateways is not sufficient. The Example of a network that requires static routes section describes scenarios and how the traffic flows between different networks when you do not configure static routes. 26.1. Example of a network that requires static routes You require static routes in this example because not all IP networks are directly connected through one router. Without the static routes, some networks cannot communicate with each other. Additionally, traffic from some networks flows only in one direction. Note The network topology in this example is artificial and only used to explain the concept of static routing. It is not a recommended topology in production environments. For functioning communication among all networks in this example, configure a static route to Raleigh ( 198.51.100.0/24 ) with the next hop Router 2 ( 203.0.113.10 ). The IP address of the next hop is that of Router 2 in the data center network ( 203.0.113.0/24 ). You can configure the static route as follows: For a simplified configuration, set this static route only on Router 1. However, this increases the traffic on Router 1 because hosts from the data center ( 203.0.113.0/24 ) always send traffic to Raleigh ( 198.51.100.0/24 ) through Router 1 to Router 2. For a more complex configuration, configure this static route on all hosts in the data center ( 203.0.113.0/24 ). All hosts in this subnet then send traffic directly to Router 2 ( 203.0.113.10 ), which is closer to Raleigh ( 198.51.100.0/24 ). For more details about which networks traffic can and cannot flow between, see the explanations below the diagram. If the required static routes are not configured, the following describes the situations in which the communication works and when it does not: Hosts in the Berlin network ( 192.0.2.0/24 ): Can communicate with other hosts in the same subnet because they are directly connected. Can communicate with the internet because Router 1 is in the Berlin network ( 192.0.2.0/24 ) and has a default gateway, which leads to the internet. Can communicate with the data center network ( 203.0.113.0/24 ) because Router 1 has interfaces in both the Berlin ( 192.0.2.0/24 ) and the data center ( 203.0.113.0/24 ) networks. Cannot communicate with the Raleigh network ( 198.51.100.0/24 ) because Router 1 has no interface in this network. Therefore, Router 1 sends the traffic to its own default gateway (internet). Hosts in the data center network ( 203.0.113.0/24 ): Can communicate with other hosts in the same subnet because they are directly connected. Can communicate with the internet because they have their default gateway set to Router 1, and Router 1 has interfaces in both networks, the data center ( 203.0.113.0/24 ) and the internet.
Can communicate with the Berlin network ( 192.0.2.0/24 ) because they have their default gateway set to Router 1, and Router 1 has interfaces in both the data center ( 203.0.113.0/24 ) and the Berlin ( 192.0.2.0/24 ) networks. Cannot communicate with the Raleigh network ( 198.51.100.0/24 ) because the data center network has no interface in this network. Therefore, hosts in the data center ( 203.0.113.0/24 ) send traffic to their default gateway (Router 1). Router 1 also has no interface in the Raleigh network ( 198.51.100.0/24 ) and, as a result, Router 1 sends this traffic to its own default gateway (internet). Hosts in the Raleigh network ( 198.51.100.0/24 ): Can communicate with other hosts in the same subnet because they are directly connected. Cannot communicate with hosts on the internet. Router 2 sends the traffic to Router 1 because of the default gateway settings. The actual behavior of Router 1 depends on the reverse path filter ( rp_filter ) system control ( sysctl ) setting. By default on RHEL, Router 1 drops the outgoing traffic instead of routing it to the internet. However, regardless of the configured behavior, communication is not possible without the static route. Cannot communicate with the data center network ( 203.0.113.0/24 ). The outgoing traffic reaches the destination through Router 2 because of the default gateway setting. However, replies to packets do not reach the sender because hosts in the data center network ( 203.0.113.0/24 ) send replies to their default gateway (Router 1). Router 1 then sends the traffic to the internet. Cannot communicate with the Berlin network ( 192.0.2.0/24 ). Router 2 sends the traffic to Router 1 because of the default gateway settings. The actual behavior of Router 1 depends on the rp_filter sysctl setting. By default on RHEL, Router 1 drops the outgoing traffic instead of sending it to the Berlin network ( 192.0.2.0/24 ). However, regardless of the configured behavior, communication is not possible without the static route. Note In addition to configuring the static routes, you must enable IP forwarding on both routers. Additional resources Why cannot a server be pinged if net.ipv4.conf.all.rp_filter is set on the server? (Red Hat Knowledgebase) Enabling IP forwarding (Red Hat Knowledgebase) 26.2. How to use the nmcli utility to configure a static route To configure a static route, use the nmcli utility with the following syntax: The command supports the following route attributes: cwnd= n : Sets the congestion window (CWND) size, defined in the number of packets. lock-cwnd=true|false : Defines whether or not the kernel can update the CWND value. lock-mtu=true|false : Defines whether or not the kernel can update the MTU due to path MTU discovery. lock-window=true|false : Defines whether or not the kernel can update the maximum window size for TCP packets. mtu= <mtu_value> : Sets the maximum transfer unit (MTU) to use along the path to the destination. onlink=true|false : Defines whether the next hop is directly attached to this link even if it does not match any interface prefix. scope= <scope> : For an IPv4 route, this attribute sets the scope of the destinations covered by the route prefix. Set the value as an integer (0-255). src= <source_address> : Sets the source address to prefer when sending traffic to the destinations covered by the route prefix. table= <table_id> : Sets the ID of the table the route should be added to. If you omit this parameter, NetworkManager uses the main table.
tos= <type_of_service_key> : Sets the type of service (TOS) key. Set the value as an integer (0-255). type= <route_type> : Sets the route type. NetworkManager supports the unicast , local , blackhole , unreachable , prohibit , and throw route types. The default is unicast . window= <window_size> : Sets the maximal window size for TCP to advertise to these destinations, measured in bytes. Important If you use the ipv4.routes option without a preceding + sign, nmcli overrides all current settings of this parameter. To create an additional route, enter: To remove a specific route, enter: 26.3. Configuring a static route by using nmcli You can add a static route to an existing NetworkManager connection profile using the nmcli connection modify command. The procedure below configures the following routes: An IPv4 route to the remote 198.51.100.0/24 network. The corresponding gateway with the IP address 192.0.2.10 is reachable through the LAN connection profile. An IPv6 route to the remote 2001:db8:2::/64 network. The corresponding gateway with the IP address 2001:db8:1::10 is reachable through the LAN connection profile. Prerequisites The LAN connection profile exists and it configures this host to be in the same IP subnet as the gateways. Procedure Add the static IPv4 route to the LAN connection profile: To set multiple routes in one step, pass the individual routes comma-separated to the command: Add the static IPv6 route to the LAN connection profile: Re-activate the connection: Verification Display the IPv4 routes: Display the IPv6 routes: 26.4. Configuring a static route by using nmtui The nmtui application provides a text-based user interface for NetworkManager. You can use nmtui to configure static routes on a host without a graphical interface. For example, the procedure below adds a route to the 192.0.2.0/24 network that uses the gateway running on 198.51.100.1 , which is reachable through an existing connection profile. Note In nmtui : Navigate by using the cursor keys. Press a button by selecting it and hitting Enter . Select and clear checkboxes by using Space . To return to the previous screen, use ESC . Prerequisites The network is configured. The gateway for the static route must be directly reachable on the interface. If the user is logged in on a physical console, user permissions are sufficient. Otherwise, the command requires root permissions. Procedure Start nmtui : Select Edit a connection , and press Enter . Select the connection profile through which you can reach the next hop to the destination network, and press Enter . Depending on whether it is an IPv4 or IPv6 route, press the Show button next to the protocol's configuration area. Press the Edit button next to Routing . This opens a new window where you configure static routes: Press the Add button and fill in: The destination network, including the prefix in Classless Inter-Domain Routing (CIDR) format The IP address of the next hop A metric value, if you add multiple routes to the same network and want to prioritize the routes by efficiency Repeat the previous step for every route you want to add and that is reachable through this connection profile. Press the OK button to return to the window with the connection settings. Figure 26.1. Example of a static route without metric Press the OK button to return to the nmtui main menu. Select Activate a connection and press Enter . Select the connection profile that you edited, and press Enter twice to deactivate and activate it again.
Important Skip this step if you run nmtui over a remote connection, such as SSH, that uses the connection profile you want to reactivate. In this case, if you deactivate it in nmtui , the connection is terminated and, consequently, you cannot activate it again. To avoid this problem, use the nmcli connection up <connection_profile> command to reactivate the connection in the mentioned scenario. Press the Back button to return to the main menu. Select Quit , and press Enter to close the nmtui application. Verification Verify that the route is active: 26.5. Configuring a static route by using control-center You can use control-center in GNOME to add a static route to the configuration of a network connection. The procedure below configures the following routes: An IPv4 route to the remote 198.51.100.0/24 network. The corresponding gateway has the IP address 192.0.2.10 . An IPv6 route to the remote 2001:db8:2::/64 network. The corresponding gateway has the IP address 2001:db8:1::10 . Prerequisites The network is configured. This host is in the same IP subnet as the gateways. The network configuration of the connection is opened in the control-center application. See Configuring an Ethernet connection by using nm-connection-editor . Procedure On the IPv4 tab: Optional: Disable automatic routes by clicking the On button in the Routes section of the IPv4 tab to use only static routes. If automatic routes are enabled, Red Hat Enterprise Linux uses static routes and routes received from a DHCP server. Enter the address, netmask, gateway, and optionally a metric value of the IPv4 route: On the IPv6 tab: Optional: Disable automatic routes by clicking the On button in the Routes section of the IPv6 tab to use only static routes. Enter the address, netmask, gateway, and optionally a metric value of the IPv6 route: Click Apply . Back in the Network window, disable and re-enable the connection by switching the button for the connection to Off and back to On for changes to take effect. Warning Restarting the connection briefly disrupts connectivity on that interface. Verification Display the IPv4 routes: Display the IPv6 routes: 26.6. Configuring a static route by using nm-connection-editor You can use the nm-connection-editor application to add a static route to the configuration of a network connection. The procedure below configures the following routes: An IPv4 route to the remote 198.51.100.0/24 network. The corresponding gateway with the IP address 192.0.2.10 is reachable through the example connection. An IPv6 route to the remote 2001:db8:2::/64 network. The corresponding gateway with the IP address 2001:db8:1::10 is reachable through the example connection. Prerequisites The network is configured. This host is in the same IP subnet as the gateways. Procedure Open a terminal, and enter nm-connection-editor : Select the example connection profile, and click the gear wheel icon to edit the existing connection. On the IPv4 Settings tab: Click the Routes button. Click the Add button and enter the address, netmask, gateway, and optionally a metric value. Click OK . On the IPv6 Settings tab: Click the Routes button. Click the Add button and enter the address, netmask, gateway, and optionally a metric value. Click OK . Click Save . Restart the network connection for changes to take effect. For example, to restart the example connection using the command line: Verification Display the IPv4 routes: Display the IPv6 routes: 26.7.
Configuring a static route by using the nmcli interactive mode You can use the interactive mode of the nmcli utility to add a static route to the configuration of a network connection. The procedure below configures the following routes: An IPv4 route to the remote 198.51.100.0/24 network. The corresponding gateway with the IP address 192.0.2.10 is reachable through the example connection. An IPv6 route to the remote 2001:db8:2::/64 network. The corresponding gateway with the IP address 2001:db8:1::10 is reachable through the example connection. Prerequisites The example connection profile exists and it configures this host to be in the same IP subnet as the gateways. Procedure Open the nmcli interactive mode for the example connection: Add the static IPv4 route: Add the static IPv6 route: Optional: Verify that the routes were added correctly to the configuration: The ip attribute displays the network to route and the nh attribute the gateway ( next hop). Save the configuration: Restart the network connection: Leave the nmcli interactive mode: Verification Display the IPv4 routes: Display the IPv6 routes: Additional resources nmcli(1) and nm-settings-nmcli(5) man pages on your system 26.8. Configuring a static route by using nmstatectl Use the nmstatectl utility to configure a static route through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Prerequisites The enp1s0 network interface is configured and is in the same IP subnet as the gateways. The nmstate package is installed. Procedure Create a YAML file, for example ~/add-static-route-to-enp1s0.yml , with the following content:
---
routes:
  config:
    - destination: 198.51.100.0/24
      next-hop-address: 192.0.2.10
      next-hop-interface: enp1s0
    - destination: 2001:db8:2::/64
      next-hop-address: 2001:db8:1::10
      next-hop-interface: enp1s0
These settings define the following static routes: An IPv4 route to the remote 198.51.100.0/24 network. The corresponding gateway with the IP address 192.0.2.10 is reachable through the enp1s0 interface. An IPv6 route to the remote 2001:db8:2::/64 network. The corresponding gateway with the IP address 2001:db8:1::10 is reachable through the enp1s0 interface. Apply the settings to the system: Verification Display the IPv4 routes: Display the IPv6 routes: Additional resources nmstatectl(8) man page on your system /usr/share/doc/nmstate/examples/ directory 26.9. Configuring a static route by using the network RHEL system role A static route ensures that you can send traffic to a destination that cannot be reached through the default gateway. You configure static routes in the NetworkManager connection profile of the interface that is connected to the same network as the next hop. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults.
To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp7s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com route: - network: 198.51.100.0 prefix: 24 gateway: 192.0.2.10 - network: 2001:db8:2:: prefix: 64 gateway: 2001:db8:1::10 state: up For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the IPv4 routes: Display the IPv6 routes: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory | [
"nmcli connection modify connection_name ipv4.routes \" ip [/ prefix ] [ next_hop ] [ metric ] [ attribute = value ] [ attribute = value ] ...\"",
"nmcli connection modify connection_name +ipv4.routes \" <route> \"",
"nmcli connection modify connection_name -ipv4.routes \" <route> \"",
"nmcli connection modify LAN +ipv4.routes \"198.51.100.0/24 192.0.2.10\"",
"nmcli connection modify <connection_profile> +ipv4.routes \" <remote_network_1> / <subnet_mask_1> <gateway_1> , <remote_network_n> / <subnet_mask_n> <gateway_n> , ...\"",
"nmcli connection modify LAN +ipv6.routes \"2001:db8:2::/64 2001:db8:1::10\"",
"nmcli connection up LAN",
"ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0",
"ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium",
"nmtui",
"ip route 192.0.2.0/24 via 198.51.100.1 dev example proto static metric 100",
"ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0",
"ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium",
"nm-connection-editor",
"nmcli connection up example",
"ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0",
"ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium",
"nmcli connection edit example",
"nmcli> set ipv4.routes 198.51.100.0/24 192.0.2.10",
"nmcli> set ipv6.routes 2001:db8:2::/64 2001:db8:1::10",
"nmcli> print ipv4.routes: { ip = 198.51.100.0/24 , nh = 192.0.2.10 } ipv6.routes: { ip = 2001:db8:2::/64 , nh = 2001:db8:1::10 }",
"nmcli> save persistent",
"nmcli> activate example",
"nmcli> quit",
"ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0",
"ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium",
"--- routes: config: - destination: 198.51.100.0/24 next-hop-address: 192.0.2.10 next-hop-interface: enp1s0 - destination: 2001:db8:2::/64 next-hop-address: 2001:db8:1::10 next-hop-interface: enp1s0",
"nmstatectl apply ~/add-static-route-to-enp1s0.yml",
"ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0",
"ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp7s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com route: - network: 198.51.100.0 prefix: 24 gateway: 192.0.2.10 - network: 2001:db8:2:: prefix: 64 gateway: 2001:db8:1::10 state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ip -4 route' managed-node-01.example.com | CHANGED | rc=0 >> 198.51.100.0/24 via 192.0.2.10 dev enp7s0",
"ansible managed-node-01.example.com -m command -a 'ip -6 route' managed-node-01.example.com | CHANGED | rc=0 >> 2001:db8:2::/64 via 2001:db8:1::10 dev enp7s0 metric 1024 pref medium"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/configuring-static-routes_configuring-and-managing-networking |
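For convenience, the nmcli-based steps in the section above can also be scripted end to end. The following is only a minimal sketch, not part of the original procedures: it reuses the profile name example and the route values from the examples above, which you would adjust for your environment.

#!/usr/bin/env bash
# Add the IPv4 and IPv6 static routes from this section to an existing
# connection profile, re-activate the profile, and verify the kernel routes.
set -euo pipefail

PROFILE="example"   # assumed profile name taken from the examples above

nmcli connection modify "$PROFILE" +ipv4.routes "198.51.100.0/24 192.0.2.10"
nmcli connection modify "$PROFILE" +ipv6.routes "2001:db8:2::/64 2001:db8:1::10"

# Re-activating the profile applies the new routes
nmcli connection up "$PROFILE"

# Verify that both routes are now present
ip -4 route | grep '198.51.100.0/24'
ip -6 route | grep '2001:db8:2::'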
Chapter 104. Verifying your IdM and AD trust configuration using IdM Healthcheck | Chapter 104. Verifying your IdM and AD trust configuration using IdM Healthcheck Learn more about identifying issues with IdM and an Active Directory trust in Identity Management (IdM) by using the Healthcheck tool. Prerequisites The Healthcheck tool is only available on RHEL 8.1 or newer 104.1. IdM and AD trust Healthcheck tests The Healthcheck tool includes several tests for testing the status of your Identity Management (IdM) and Active Directory (AD) trust. To see all trust tests, run ipa-healthcheck with the --list-sources option: You can find all tests under the ipahealthcheck.ipa.trust source: IPATrustAgentCheck This test checks the SSSD configuration when the machine is configured as a trust agent. For each domain in /etc/sssd/sssd.conf where id_provider=ipa ensure that ipa_server_mode is True . IPATrustDomainsCheck This test checks if the trust domains match SSSD domains by comparing the list of domains in sssctl domain-list with the list of domains from ipa trust-find excluding the IPA domain. IPATrustCatalogCheck This test resolves an AD user, Administrator@REALM . This populates the AD Global catalog and AD Domain Controller values in sssctl domain-status output. For each trust domain look up the user with the id of the SID + 500 (the administrator) and then check the output of sssctl domain-status <domain> --active-server to ensure that the domain is active. IPAsidgenpluginCheck This test verifies that the sidgen plugin is enabled in the IPA 389-ds instance. The test also verifies that the IPA SIDGEN and ipa-sidgen-task plugins in cn=plugins,cn=config include the nsslapd-pluginEnabled option. IPATrustAgentMemberCheck This test verifies that the current host is a member of cn=adtrust agents,cn=sysaccounts,cn=etc,SUFFIX . IPATrustControllerPrincipalCheck This test verifies that the current host is a member of cn=adtrust agents,cn=sysaccounts,cn=etc,SUFFIX . IPATrustControllerServiceCheck This test verifies that the current host starts the ADTRUST service in ipactl. IPATrustControllerConfCheck This test verifies that ldapi is enabled for the passdb backend in the output of net conf list. IPATrustControllerGroupSIDCheck This test verifies that the admins group's SID ends with 512 (Domain Admins RID). IPATrustPackageCheck This test verifies that the trust-ad package is installed if the trust controller and AD trust are not enabled. Note Run these tests on all IdM servers when trying to find an issue. 104.2. Screening the trust with the Healthcheck tool Follow this procedure to run a standalone manual test of an Identity Management (IdM) and Active Directory (AD) trust health check using the Healthcheck tool. The Healthcheck tool includes many tests, therefore, you can shorten the results by: Excluding all successful tests: --failures-only Including only trust tests: --source=ipahealthcheck.ipa.trust Procedure To run Healthcheck with warnings, errors and critical issues in the trust, enter: Successful test displays empty brackets: Additional resources See man ipa-healthcheck . | [
"ipa-healthcheck --list-sources",
"ipa-healthcheck --source=ipahealthcheck.ipa.trust --failures-only",
"ipa-healthcheck --source=ipahealthcheck.ipa.trust --failures-only []"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/verifying-your-idm-and-ad-trust-configuration-using-idm-healthcheck_configuring-and-managing-idm |
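The trust screening described above can also be wrapped in a small script that exits non-zero when any check fails. This is only a sketch: it assumes the default JSON output of ipa-healthcheck and that jq is installed, and the JSON field names used in the jq filter are assumptions that may differ between versions.

#!/usr/bin/env bash
# Run only the IdM/AD trust checks and report any failures.
set -uo pipefail

# ipa-healthcheck may exit non-zero when checks fail, so capture the output
# regardless of the exit code.
OUTPUT=$(ipa-healthcheck --source=ipahealthcheck.ipa.trust --failures-only || true)

if [ "$OUTPUT" = "[]" ]; then
    echo "All trust checks passed on $(hostname)."
else
    echo "Trust check failures on $(hostname):"
    # One line per failed check; .check and .kw.msg are assumed field names.
    echo "$OUTPUT" | jq -r '.[] | "\(.check): \(.kw.msg // "see full output")"'
    exit 1
fi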
Getting Started Guide | Getting Started Guide Red Hat JBoss Enterprise Application Platform 7.4 Instructions for downloading, installing, starting, stopping, and maintaining Red Hat JBoss Enterprise Application Platform. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_guide/index |
6.14.5. Configuring Redundant Ring Protocol | 6.14.5. Configuring Redundant Ring Protocol As of Red Hat Enterprise Linux 6.4, the Red Hat High Availability Add-On supports the configuration of redundant ring protocol. When using redundant ring protocol, there are a variety of considerations you must take into account, as described in Section 8.6, "Configuring Redundant Ring Protocol" . To specify a second network interface to use for redundant ring protocol, you add an alternate name for the node using the --addalt option of the ccs command: For example, the following command configures the alternate name clusternet-node1-eth2 for the cluster node clusternet-node1-eth1 : Optionally, you can manually specify a multicast address, a port, and a TTL for the second ring. If you specify a multicast for the second ring, either the alternate multicast address or the alternate port must be different from the multicast address for the first ring. If you specify an alternate port, the port numbers of the first ring and the second ring must differ by at least two, since the system itself uses port and port-1 to perform operations. If you do not specify an alternate multicast address, the system will automatically use a different multicast address for the second ring. To specify an alternate multicast address, port, or TTL for the second ring, you use the --setaltmulticast option of the ccs command: For example, the following command sets an alternate multicast address of 239.192.99.88, a port of 888, and a TTL of 3 for the cluster defined in the cluster.conf file on node clusternet-node1-eth1 : To remove an alternate multicast address, specify the --setaltmulticast option of the ccs command but do not specify a multicast address. Note that executing this command resets all other properties that you can set with the --setaltmulticast option to their default values, as described in Section 6.1.5, "Commands that Overwrite Settings" . When you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" . | [
"ccs -h host --addalt node_name alt_name",
"ccs -h clusternet-node1-eth1 --addalt clusternet-node1-eth1 clusternet-node1-eth2",
"ccs -h host --setaltmulticast [ alt_multicast_address ] [ alt_multicast_options ].",
"ccs -h clusternet-node1-eth1 --setaltmulticast 239.192.99.88 port=888 ttl=3"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-rrp-ccs-ca |
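Taken together, the ccs invocations above can be run as one short script. This is a sketch only; it reuses the host and interface names from the examples, and the final --sync --activate step for propagating cluster.conf (see Section 6.15) is an assumption about how you distribute the configuration.

#!/usr/bin/env bash
# Configure a second ring (redundant ring protocol) for one node and
# propagate the updated cluster.conf to the cluster.
set -euo pipefail

HOST="clusternet-node1-eth1"   # node whose cluster.conf is edited

# Alternate node name for the interface used by the second ring
ccs -h "$HOST" --addalt clusternet-node1-eth1 clusternet-node1-eth2

# Optional: alternate multicast address, port, and TTL for the second ring
ccs -h "$HOST" --setaltmulticast 239.192.99.88 port=888 ttl=3

# Propagate the configuration file to all cluster nodes (assumed step)
ccs -h "$HOST" --sync --activate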
Chapter 101. ReplicasChangeStatus schema reference | Chapter 101. ReplicasChangeStatus schema reference Used in: KafkaTopicStatus Property Property type Description targetReplicas integer The target replicas value requested by the user. This may be different from .spec.replicas when a change is ongoing. state string (one of [ongoing, pending]) Current state of the replicas change operation. This can be pending , when the change has been requested, or ongoing , when the change has been successfully submitted to Cruise Control. message string Message for the user related to the replicas change request. This may contain transient error messages that would disappear on periodic reconciliations. sessionId string The session identifier for replicas change requests pertaining to this KafkaTopic resource. This is used by the Topic Operator to track the status of ongoing replicas change operations. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-ReplicasChangeStatus-reference |
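As an illustration, the fields above can be read from a KafkaTopic resource with a single command. This is a sketch: the topic name my-topic and namespace kafka are placeholders, and the status field is assumed to be named replicasChange.

# Print the state, target replicas, and message of an ongoing replicas change
oc get kafkatopic my-topic -n kafka \
  -o jsonpath='state={.status.replicasChange.state}{"\n"}targetReplicas={.status.replicasChange.targetReplicas}{"\n"}message={.status.replicasChange.message}{"\n"}'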
Chapter 2. Understanding OpenShift updates | Chapter 2. Understanding OpenShift updates 2.1. Introduction to OpenShift updates With OpenShift Container Platform 4, you can update an OpenShift Container Platform cluster with a single operation by using the web console or the OpenShift CLI ( oc ). Platform administrators can view new update options either by going to Administration Cluster Settings in the web console or by looking at the output of the oc adm upgrade command. Red Hat hosts a public OpenShift Update Service (OSUS), which serves a graph of update possibilities based on the OpenShift Container Platform release images in the official registry. The graph contains update information for any public OCP release. OpenShift Container Platform clusters are configured to connect to the OSUS by default, and the OSUS responds to clusters with information about known update targets. An update begins when either a cluster administrator or an automatic update controller edits the custom resource (CR) of the Cluster Version Operator (CVO) with a new version. To reconcile the cluster with the newly specified version, the CVO retrieves the target release image from an image registry and begins to apply changes to the cluster. Note Operators previously installed through Operator Lifecycle Manager (OLM) follow a different process for updates. See Updating installed Operators for more information. The target release image contains manifest files for all cluster components that form a specific OCP version. When updating the cluster to a new version, the CVO applies manifests in separate stages called Runlevels. Most, but not all, manifests support one of the cluster Operators. As the CVO applies a manifest to a cluster Operator, the Operator might perform update tasks to reconcile itself with its new specified version. The CVO monitors the state of each applied resource and the states reported by all cluster Operators. The CVO only proceeds with the update when all manifests and cluster Operators in the active Runlevel reach a stable condition. After the CVO updates the entire control plane through this process, the Machine Config Operator (MCO) updates the operating system and configuration of every node in the cluster. 2.1.1. Common questions about update availability There are several factors that affect if and when an update is made available to an OpenShift Container Platform cluster. The following list provides common questions regarding the availability of an update: What are the differences between each of the update channels? A new release is initially added to the candidate channel. After successful final testing, a release on the candidate channel is promoted to the fast channel, an errata is published, and the release is now fully supported. After a delay, a release on the fast channel is finally promoted to the stable channel. This delay represents the only difference between the fast and stable channels. Note For the latest z-stream releases, this delay may generally be a week or two. However, the delay for initial updates to the latest minor version may take much longer, generally 45-90 days. Releases promoted to the stable channel are simultaneously promoted to the eus channel. The primary purpose of the eus channel is to serve as a convenience for clusters performing an EUS-to-EUS update. Is a release on the stable channel safer or more supported than a release on the fast channel? 
If a regression is identified for a release on a fast channel, it will be resolved and managed to the same extent as if that regression was identified for a release on the stable channel. The only difference between releases on the fast and stable channels is that a release only appears on the stable channel after it has been on the fast channel for some time, which provides more time for new update risks to be discovered. A release that is available on the fast channel always becomes available on the stable channel after this delay. What does it mean if an update is supported but not recommended? Red Hat continuously evaluates data from multiple sources to determine whether updates from one version to another lead to issues. If an issue is identified, an update path may no longer be recommended to users. However, even if the update path is not recommended, customers are still supported if they perform the update. Red Hat does not block users from updating to a certain version. Red Hat may declare conditional update risks, which may or may not apply to a particular cluster. Declared risks provide cluster administrators more context about a supported update. Cluster administrators can still accept the risk and update to that particular target version. This update is always supported despite not being recommended in the context of the conditional risk. What if I see that an update to a particular release is no longer recommended? If Red Hat removes update recommendations from any supported release due to a regression, a superseding update recommendation will be provided to a future version that corrects the regression. There may be a delay while the defect is corrected, tested, and promoted to your selected channel. How long until the next z-stream release is made available on the fast and stable channels? While the specific cadence can vary based on a number of factors, new z-stream releases for the latest minor version are typically made available about every week. Older minor versions, which have become more stable over time, may take much longer for new z-stream releases to be made available. Important These are only estimates based on past data about z-stream releases. Red Hat reserves the right to change the release frequency as needed. Any number of issues could cause irregularities and delays in this release cadence. Once a z-stream release is published, it also appears in the fast channel for that minor version. After a delay, the z-stream release may then appear in that minor version's stable channel. Additional resources Understanding update channels and releases 2.1.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images.
To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue with the update or the target release. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. Reverting or rolling back your cluster to a previous version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. The MCO then applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 2.1.3. Common terms Control plane The control plane , which is composed of control plane machines, manages the OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. Cluster Version Operator The Cluster Version Operator (CVO) starts the update process for the cluster. It checks with OSUS based on the current cluster version and retrieves the graph which contains available or possible update paths. Machine Config Operator The Machine Config Operator (MCO) is a cluster-level Operator that manages the operating system and machine configurations. Through the MCO, platform administrators can configure and update systemd, CRI-O and Kubelet, the kernel, NetworkManager, and other system features on the worker nodes.
OpenShift Update Service The OpenShift Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including to Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. Channels Channels declare an update strategy tied to minor versions of OpenShift Container Platform. The OSUS uses this configured strategy to recommend update edges consistent with that strategy. Recommended update edge A recommended update edge is a recommended update between OpenShift Container Platform releases. Whether a given update is recommended can depend on the cluster's configured channel, current version, known bugs, and other information. OSUS communicates the recommended edges to the CVO, which runs in every cluster. Extended Update Support All post-4.7 even-numbered minor releases are labeled as Extended Update Support (EUS) releases. These releases introduce a verified update path between EUS releases, permitting customers to streamline updates of worker nodes and formulate update strategies of EUS-to-EUS OpenShift Container Platform releases that result in fewer reboots of worker nodes. For more information, see Red Hat OpenShift Extended Update Support (EUS) Overview . Additional resources Machine config overview Using the OpenShift Update Service in a disconnected environment Update channels 2.1.4. Additional resources For more detailed information about each major aspect of the update process, see How cluster updates work . 2.2. How cluster updates work The following sections describe each major aspect of the OpenShift Container Platform (OCP) update process in detail. For a general overview of how updates work, see the Introduction to OpenShift updates . 2.2.1. The Cluster Version Operator The Cluster Version Operator (CVO) is the primary component that orchestrates and facilitates the OpenShift Container Platform update process. During installation and standard cluster operation, the CVO is constantly comparing the manifests of managed cluster Operators to in-cluster resources, and reconciling discrepancies to ensure that the actual state of these resources match their desired state. 2.2.1.1. The ClusterVersion object One of the resources that the Cluster Version Operator (CVO) monitors is the ClusterVersion resource. Administrators and OpenShift components can communicate or interact with the CVO through the ClusterVersion object. The desired CVO state is declared through the ClusterVersion object and the current CVO state is reflected in the object's status. Note Do not directly modify the ClusterVersion object. Instead, use interfaces such as the oc CLI or the web console to declare your update target. The CVO continually reconciles the cluster with the target state declared in the spec property of the ClusterVersion resource. When the desired release differs from the actual release, that reconciliation updates the cluster. Update availability data The ClusterVersion resource also contains information about updates that are available to the cluster. This includes updates that are available, but not recommended due to a known risk that applies to the cluster. These updates are known as conditional updates. To learn how the CVO maintains this information about available updates in the ClusterVersion resource, see the "Evaluation of update availability" section. 
You can inspect all available updates with the following command: USD oc adm upgrade --include-not-recommended Note The additional --include-not-recommended parameter includes updates that are available but not recommended due to a known risk that applies to the cluster. Example output Cluster version is 4.10.22 Upstream is unset, so the cluster will use an appropriate default. Channel: fast-4.11 (available channels: candidate-4.10, candidate-4.11, eus-4.10, fast-4.10, fast-4.11, stable-4.10) Recommended updates: VERSION IMAGE 4.10.26 quay.io/openshift-release-dev/ocp-release@sha256:e1fa1f513068082d97d78be643c369398b0e6820afab708d26acda2262940954 4.10.25 quay.io/openshift-release-dev/ocp-release@sha256:ed84fb3fbe026b3bbb4a2637ddd874452ac49c6ead1e15675f257e28664879cc 4.10.24 quay.io/openshift-release-dev/ocp-release@sha256:aab51636460b5a9757b736a29bc92ada6e6e6282e46b06e6fd483063d590d62a 4.10.23 quay.io/openshift-release-dev/ocp-release@sha256:e40e49d722cb36a95fa1c03002942b967ccbd7d68de10e003f0baa69abad457b Supported but not recommended updates: Version: 4.11.0 Image: quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4 Recommended: False Reason: RPMOSTreeTimeout Message: Nodes with substantial numbers of containers and CPU contention may not reconcile machine configuration https://bugzilla.redhat.com/show_bug.cgi?id=2111817#c22 The oc adm upgrade command queries the ClusterVersion resource for information about available updates and presents it in a human-readable format. One way to directly inspect the underlying availability data created by the CVO is by querying the ClusterVersion resource with the following command: USD oc get clusterversion version -o json | jq '.status.availableUpdates' Example output [ { "channels": [ "candidate-4.11", "candidate-4.12", "fast-4.11", "fast-4.12" ], "image": "quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775", "url": "https://access.redhat.com/errata/RHBA-2023:3213", "version": "4.11.41" }, ... ] A similar command can be used to check conditional updates: USD oc get clusterversion version -o json | jq '.status.conditionalUpdates' Example output [ { "conditions": [ { "lastTransitionTime": "2023-05-30T16:28:59Z", "message": "The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136", "reason": "PatchesOlderRelease", "status": "False", "type": "Recommended" } ], "release": { "channels": [...], "image": "quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d", "url": "https://access.redhat.com/errata/RHBA-2023:1733", "version": "4.11.36" }, "risks": [...] }, ... ] 2.2.1.2. Evaluation of update availability The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update possibilities. This data is based on the cluster's subscribed channel. The CVO then saves information about update recommendations into either the availableUpdates or conditionalUpdates field of its ClusterVersion resource. 
The CVO periodically checks the conditional updates for update risks. These risks are conveyed through the data served by the OSUS, which contains information for each version about known issues that might affect a cluster updated to that version. Most risks are limited to clusters with specific characteristics, such as clusters with a certain size or clusters that are deployed in a particular cloud platform. The CVO continuously evaluates its cluster characteristics against the conditional risk information for each conditional update. If the CVO finds that the cluster matches the criteria, the CVO stores this information in the conditionalUpdates field of its ClusterVersion resource. If the CVO finds that the cluster does not match the risks of an update, or that there are no risks associated with the update, it stores the target version in the availableUpdates field of its ClusterVersion resource. The user interface, either the web console or the OpenShift CLI ( oc ), presents this information in sectioned headings to the administrator. Each supported but not recommended update recommendation contains a link to further resources about the risk so that the administrator can make an informed decision about the update. Additional resources Update recommendation removals and Conditional Updates 2.2.2. Release images A release image is the delivery mechanism for a specific OpenShift Container Platform (OCP) version. It contains the release metadata, a Cluster Version Operator (CVO) binary matching the release version, every manifest needed to deploy individual OpenShift cluster Operators, and a list of SHA digest-versioned references to all container images that make up this OpenShift version. You can inspect the content of a specific release image by running the following command: USD oc adm release extract <release image> Example output USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z USD ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 ... 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata 1 Manifest for ClusterResourceQuota CRD, to be applied on Runlevel 03 2 Manifest for PrometheusRoleBinding resource for the service-ca-operator , to be applied on Runlevel 90 3 List of SHA digest-versioned references to all required images 2.2.3. Update process workflow The following steps represent a detailed workflow of the OpenShift Container Platform (OCP) update process: The target version is stored in the spec.desiredUpdate.version field of the ClusterVersion resource, which may be managed through the web console or the CLI. The Cluster Version Operator (CVO) detects that the desiredUpdate in the ClusterVersion resource differs from the current cluster version. Using graph data from the OpenShift Update Service, the CVO resolves the desired cluster version to a pull spec for the release image. The CVO validates the integrity and authenticity of the release image. 
Red Hat publishes cryptographically-signed statements about published release images at predefined locations by using image SHA digests as unique and immutable release image identifiers. The CVO utilizes a list of built-in public keys to validate the presence and signatures of the statement matching the checked release image. The CVO creates a job named version-USDversion-USDhash in the openshift-cluster-version namespace. This job uses containers that are executing the release image, so the cluster downloads the image through the container runtime. The job then extracts the manifests and metadata from the release image to a shared volume that is accessible to the CVO. The CVO validates the extracted manifests and metadata. The CVO checks some preconditions to ensure that no problematic condition is detected in the cluster. Certain conditions can prevent updates from proceeding. These conditions are either determined by the CVO itself, or reported by individual cluster Operators that detect some details about the cluster that the Operator considers problematic for the update. The CVO records the accepted release in status.desired and creates a status.history entry about the new update. The CVO begins reconciling the manifests from the release image. Cluster Operators are updated in separate stages called Runlevels, and the CVO ensures that all Operators in a Runlevel finish updating before it proceeds to the next level. Manifests for the CVO itself are applied early in the process. When the CVO deployment is applied, the current CVO pod stops, and a CVO pod that uses the new version starts. The new CVO proceeds to reconcile the remaining manifests. The update proceeds until the entire control plane is updated to the new version. Individual cluster Operators might perform update tasks on their domain of the cluster, and while they do so, they report their state through the Progressing=True condition. The Machine Config Operator (MCO) manifests are applied towards the end of the process. The updated MCO then begins updating the system configuration and operating system of every node. Each node might be drained, updated, and rebooted before it starts to accept workloads again. The cluster reports as updated after the control plane update is finished, usually before all nodes are updated. After the update, the CVO maintains all cluster resources to match the state delivered in the release image. 2.2.4. Understanding how manifests are applied during an update Some manifests supplied in a release image must be applied in a certain order because of the dependencies between them. For example, the CustomResourceDefinition resource must be created before the matching custom resources. Additionally, there is a logical order in which the individual cluster Operators must be updated to minimize disruption in the cluster. The Cluster Version Operator (CVO) implements this logical order through the concept of Runlevels. These dependencies are encoded in the filenames of the manifests in the release image: 0000_<runlevel>_<component>_<manifest-name>.yaml For example: 0000_03_config-operator_01_proxy.crd.yaml The CVO internally builds a dependency graph for the manifests, where the CVO obeys the following rules: During an update, manifests at a lower Runlevel are applied before those at a higher Runlevel. Within one Runlevel, manifests for different components can be applied in parallel. Within one Runlevel, manifests for a single component are applied in lexicographic order.
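As an aside, this naming convention makes it easy to see which Runlevels and components a payload contains. The following one-liner is only a sketch; it assumes the manifests were extracted with oc adm release extract into a local directory named ./payload.

# List the distinct Runlevels and the components they contain, based on the
# 0000_<runlevel>_<component>_<manifest-name>.yaml naming convention.
ls ./payload/0000_*.yaml \
  | sed -E 's|.*/0000_([0-9]+)_([^_]+)_.*|Runlevel \1: \2|' \
  | sort -u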
The CVO then applies manifests following the generated dependency graph. Note For some resource types, the CVO monitors the resource after its manifest is applied, and considers it to be successfully updated only after the resource reaches a stable state. Achieving this state can take some time. This is especially true for ClusterOperator resources, while the CVO waits for a cluster Operator to update itself and then update its ClusterOperator status. The CVO waits until all cluster Operators in the Runlevel meet the following conditions before it proceeds to the next Runlevel: The cluster Operators have an Available=True condition. The cluster Operators have a Degraded=False condition. The cluster Operators declare they have achieved the desired version in their ClusterOperator resource. Some actions can take significant time to finish. The CVO waits for the actions to complete in order to ensure the subsequent Runlevels can proceed safely. Initially reconciling the new release's manifests is expected to take 60 to 120 minutes in total; see Understanding OpenShift Container Platform update duration for more information about factors that influence update duration. In the example diagram, the CVO is waiting until all work is completed at Runlevel 20. The CVO has applied all manifests to the Operators in the Runlevel, but the kube-apiserver-operator ClusterOperator performs some actions after its new version was deployed. The kube-apiserver-operator ClusterOperator declares this progress through the Progressing=True condition and by not declaring the new version as reconciled in its status.versions . The CVO waits until the ClusterOperator reports an acceptable status, and then it will start reconciling manifests at Runlevel 25. Additional resources Understanding OpenShift Container Platform update duration 2.2.5. Understanding how the Machine Config Operator updates nodes The Machine Config Operator (MCO) applies a new machine configuration to each control plane node and compute node. During the machine configuration update, control plane nodes and compute nodes are organized into their own machine config pools, where the pools of machines are updated in parallel. The .spec.maxUnavailable parameter, which has a default value of 1 , determines how many nodes in a machine config pool can simultaneously undergo the update process. When the machine configuration update process begins, the MCO checks the amount of currently unavailable nodes in a pool. If there are fewer unavailable nodes than the value of .spec.maxUnavailable , the MCO initiates the following sequence of actions on available nodes in the pool: Cordon and drain the node Note When a node is cordoned, workloads cannot be scheduled to it. Update the system configuration and operating system (OS) of the node Reboot the node Uncordon the node A node undergoing this process is unavailable until it is uncordoned and workloads can be scheduled to it again. The MCO begins updating nodes until the number of unavailable nodes is equal to the value of .spec.maxUnavailable . As a node completes its update and becomes available, the number of unavailable nodes in the machine config pool is once again fewer than .spec.maxUnavailable . If there are remaining nodes that need to be updated, the MCO initiates the update process on a node until the .spec.maxUnavailable limit is once again reached. This process repeats until each control plane node and compute node has been updated.
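In practice, you can check (and, if appropriate, change) this setting directly on a pool. The commands below are only a sketch using the default worker pool name; the patch is a hypothetical example, not a recommendation from this document.

# Read the current maxUnavailable value of the worker pool
# (an empty result means the default of 1 is in effect).
oc get mcp worker -o jsonpath='{.spec.maxUnavailable}{"\n"}'

# Hypothetical example: allow two worker nodes to update in parallel.
oc patch mcp worker --type merge -p '{"spec":{"maxUnavailable":2}}'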
The following example workflow describes how this process might occur in a machine config pool with 5 nodes, where .spec.maxUnavailable is 3 and all nodes are initially available: The MCO cordons nodes 1, 2, and 3, and begins to drain them. Node 2 finishes draining, reboots, and becomes available again. The MCO cordons node 4 and begins draining it. Node 1 finishes draining, reboots, and becomes available again. The MCO cordons node 5 and begins draining it. Node 3 finishes draining, reboots, and becomes available again. Node 5 finishes draining, reboots, and becomes available again. Node 4 finishes draining, reboots, and becomes available again. Because the update process for each node is independent of other nodes, some nodes in the example above finish their update out of the order in which they were cordoned by the MCO. You can check the status of the machine configuration update by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Additional resources Machine config overview | [
"oc adm upgrade --include-not-recommended",
"Cluster version is 4.10.22 Upstream is unset, so the cluster will use an appropriate default. Channel: fast-4.11 (available channels: candidate-4.10, candidate-4.11, eus-4.10, fast-4.10, fast-4.11, stable-4.10) Recommended updates: VERSION IMAGE 4.10.26 quay.io/openshift-release-dev/ocp-release@sha256:e1fa1f513068082d97d78be643c369398b0e6820afab708d26acda2262940954 4.10.25 quay.io/openshift-release-dev/ocp-release@sha256:ed84fb3fbe026b3bbb4a2637ddd874452ac49c6ead1e15675f257e28664879cc 4.10.24 quay.io/openshift-release-dev/ocp-release@sha256:aab51636460b5a9757b736a29bc92ada6e6e6282e46b06e6fd483063d590d62a 4.10.23 quay.io/openshift-release-dev/ocp-release@sha256:e40e49d722cb36a95fa1c03002942b967ccbd7d68de10e003f0baa69abad457b Supported but not recommended updates: Version: 4.11.0 Image: quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4 Recommended: False Reason: RPMOSTreeTimeout Message: Nodes with substantial numbers of containers and CPU contention may not reconcile machine configuration https://bugzilla.redhat.com/show_bug.cgi?id=2111817#c22",
"oc get clusterversion version -o json | jq '.status.availableUpdates'",
"[ { \"channels\": [ \"candidate-4.11\", \"candidate-4.12\", \"fast-4.11\", \"fast-4.12\" ], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:3213\", \"version\": \"4.11.41\" }, ]",
"oc get clusterversion version -o json | jq '.status.conditionalUpdates'",
"[ { \"conditions\": [ { \"lastTransitionTime\": \"2023-05-30T16:28:59Z\", \"message\": \"The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136\", \"reason\": \"PatchesOlderRelease\", \"status\": \"False\", \"type\": \"Recommended\" } ], \"release\": { \"channels\": [...], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:1733\", \"version\": \"4.11.36\" }, \"risks\": [...] }, ]",
"oc adm release extract <release image>",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata",
"0000_<runlevel>_<component>_<manifest-name>.yaml",
"0000_03_config-operator_01_proxy.crd.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/updating_clusters/understanding-openshift-updates-1 |
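As a practical companion to the commands above, an in-progress update can be followed from the CLI until the CVO reports completion. This is a minimal sketch; the polling interval and output format are arbitrary choices.

#!/usr/bin/env bash
# Poll the ClusterVersion resource until the Progressing condition clears,
# then show the machine config pool status while nodes finish updating.
set -euo pipefail

while true; do
    PROGRESSING=$(oc get clusterversion version \
        -o jsonpath='{.status.conditions[?(@.type=="Progressing")].status}')
    MESSAGE=$(oc get clusterversion version \
        -o jsonpath='{.status.conditions[?(@.type=="Progressing")].message}')
    echo "$(date +%T) Progressing=${PROGRESSING}: ${MESSAGE}"
    if [ "$PROGRESSING" = "False" ]; then
        break
    fi
    sleep 60
done

# Node-level progress: UPDATED=True for every pool means the MCO has finished.
oc get mcp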
Release Notes for Red Hat Fuse 7.13 | Release Notes for Red Hat Fuse 7.13 Red Hat Fuse 7.13 What's new in Red Hat Fuse Red Hat Fuse Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/release_notes_for_red_hat_fuse_7.13/index |
Chapter 12. Getting Started with OptaPlanner and Quarkus | Chapter 12. Getting Started with OptaPlanner and Quarkus You can use the https://code.quarkus.redhat.com website to generate a Red Hat build of OptaPlanner Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. You can then download the Quarkus Maven repository or use the online Maven repository with your project. 12.1. Apache Maven and Red Hat build of Quarkus Apache Maven is a distributed build automation tool used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model (POM) files to define projects and manage the build process. POM files describe the module and component dependencies, build order, and targets for the resulting project packaging and output using an XML file. This ensures that the project is built in a correct and uniform manner. Maven repositories A Maven repository stores Java libraries, plug-ins, and other build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be private and internal within a company to share common artifacts among development teams. Repositories are also available from third parties. You can use the online Maven repository with your Quarkus projects or you can download the Red Hat build of Quarkus Maven repository. Maven plug-ins Maven plug-ins are defined parts of a POM file that achieve one or more goals. Quarkus applications use the following Maven plug-ins: Quarkus Maven plug-in ( quarkus-maven-plugin ): Enables Maven to create Quarkus projects, supports the generation of uber-JAR files, and provides a development mode. Maven Surefire plug-in ( maven-surefire-plugin ): Used during the test phase of the build lifecycle to execute unit tests on your application. The plug-in generates text and XML files that contain the test reports. 12.1.1. Configuring the Maven settings.xml file for the online repository You can use the online Maven repository with your Maven project by configuring your user settings.xml file. This is the recommended approach. Maven settings used with a repository manager or repository on a shared server provide better control and manageability of projects. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Open the Maven ~/.m2/settings.xml file in a text editor or integrated development environment (IDE). Note If there is not a settings.xml file in the ~/.m2/ directory, copy the settings.xml file from the USDMAVEN_HOME/.m2/conf/ directory into the ~/.m2/ directory. Add the following lines to the <profiles> element of the settings.xml file: <!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. 
<activeProfile>red-hat-enterprise-maven-repository</activeProfile> 12.1.2. Downloading and configuring the Quarkus Maven repository If you do not want to use the online Maven repository, you can download and configure the Quarkus Maven repository to create a Quarkus application with Maven. The Quarkus Maven repository contains many of the requirements that Java developers typically use to build their applications. This procedure describes how to edit the settings.xml file to configure the Quarkus Maven repository. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Download the Red Hat build of Quarkus Maven repository ZIP file from the Software Downloads page of the Red Hat Customer Portal (login required). Expand the downloaded archive. Change directory to the ~/.m2/ directory and open the Maven settings.xml file in a text editor or integrated development environment (IDE). Add the following lines to the <profiles> element of the settings.xml file, where QUARKUS_MAVEN_REPOSITORY is the path of the Quarkus Maven repository that you downloaded. The format of QUARKUS_MAVEN_REPOSITORY must be file://USDPATH , for example file:///home/userX/rh-quarkus-2.13.GA-maven-repository/maven-repository . <!-- Configure the Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> Important If your Maven repository contains outdated artifacts, you might encounter one of the following Maven error messages when you build or deploy your project, where ARTIFACT_NAME is the name of a missing artifact and PROJECT_NAME is the name of the project you are trying to build: Missing artifact PROJECT_NAME [ERROR] Failed to execute goal on project ARTIFACT_NAME ; Could not resolve dependencies for PROJECT_NAME To resolve the issue, delete the cached version of your local repository located in the ~/.m2/repository directory to force a download of the latest Maven artifacts. 12.2. Creating an OptaPlanner Red Hat build of Quarkus Maven project using the Maven plug-in You can get up and running with a Red Hat build of OptaPlanner and Quarkus application using Apache Maven and the Quarkus Maven plug-in. Prerequisites OpenJDK 11 or later is installed. Red Hat build of Open JDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. Procedure In a command terminal, enter the following command to verify that Maven is using JDK 11 and that the Maven version is 3.6 or higher: If the preceding command does not return JDK 11, add the path to JDK 11 to the PATH environment variable and enter the preceding command again. 
To generate a Quarkus OptaPlanner quickstart project, enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create \ -DprojectGroupId=com.example \ -DprojectArtifactId=optaplanner-quickstart \ -Dextensions="resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson" \ -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 \ -DnoExamples This command creates the following elements in the ./optaplanner-quickstart directory: The Maven structure Example Dockerfile file in src/main/docker The application configuration file Table 12.1. Properties used in the mvn io.quarkus:quarkus-maven-plugin:2.13.Final-redhat-00006:create command Property Description projectGroupId The group ID of the project. projectArtifactId The artifact ID of the project. extensions A comma-separated list of Quarkus extensions to use with this project. For a full list of Quarkus extensions, enter mvn quarkus:list-extensions on the command line. noExamples Creates a project with the project structure but without tests or classes. The values of the projectGroupId and the projectArtifactId properties are used to generate the project version. The default project version is 1.0.0-SNAPSHOT . To view your OptaPlanner project, change directory to the OptaPlanner Quickstarts directory: Review the pom.xml file. The content should be similar to the following example: 12.3. Creating a Red Hat build of Quarkus Maven project using code.quarkus.redhat.com You can use the code.quarkus.redhat.com website to generate a Red Hat build of OptaPlanner Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. In addition, code.quarkus.redhat.com automatically manages the configuration parameters required to compile your project into a native executable. This section walks you through the process of generating an OptaPlanner Maven project and includes the following topics: Specifying basic details about your application. Choosing the extensions that you want to include in your project. Generating a downloadable archive with your project files. Using the custom commands for compiling and starting your application. Prerequisites You have a web browser. Procedure Open https://code.quarkus.redhat.com in your web browser: Specify details about your project: Enter a group name for your project. The format of the name follows the Java package naming convention, for example, com.example . Enter a name that you want to use for Maven artifacts generated from your project, for example code-with-quarkus . Select Build Tool > Maven to specify that you want to create a Maven project. The build tool that you choose determines the following items: The directory structure of your generated project The format of configuration files used in your generated project The custom build script and command for compiling and starting your application that code.quarkus.redhat.com displays for you after you generate your project Note Red Hat provides support for using code.quarkus.redhat.com to create OptaPlanner Maven projects only. Generating Gradle projects is not supported by Red Hat. Enter a version to be used in artifacts generated from your project. The default value of this field is 1.0.0-SNAPSHOT . Using semantic versioning is recommended, but you can use a different type of versioning if you prefer. Enter the package name of artifacts that the build tool generates when you package your project.
According to the Java package naming conventions the package name should match the group name that you use for your project, but you can specify a different name. Note The code.quarkus.redhat.com website automatically uses the latest release of OptaPlanner. You can manually change the BOM version in the pom.xml file after you generate your project. Select the following extensions to include as dependencies: RESTEasy JAX-RS (quarkus-resteasy) RESTEasy Jackson (quarkus-resteasy-jackson) OptaPlanner AI constraint solver (optaplanner-quarkus) OptaPlanner Jackson (optaplanner-quarkus-jackson) Red Hat provides different levels of support for individual extensions on the list, which are indicated by labels next to the name of each extension: SUPPORTED extensions are fully supported by Red Hat for use in enterprise applications in production environments. TECH-PREVIEW extensions are subject to limited support by Red Hat in production environments under the Technology Preview Features Support Scope . DEV-SUPPORT extensions are not supported by Red Hat for use in production environments, but the core functionalities that they provide are supported by Red Hat developers for use in developing new applications. DEPRECATED extensions are planned to be replaced with a newer technology or implementation that provides the same functionality. Unlabeled extensions are not supported by Red Hat for use in production environments. Select Generate your application to confirm your choices and display the overlay screen with the download link for the archive that contains your generated project. The overlay screen also shows the custom command that you can use to compile and start your application. Select Download the ZIP to save the archive with the generated project files to your system. Extract the contents of the archive. Navigate to the directory that contains your extracted project files: cd <directory_name> Compile and start your application in development mode: ./mvnw compile quarkus:dev 12.4. Creating a Red Hat build of Quarkus Maven project using the Quarkus CLI You can use the Quarkus command line interface (CLI) to create a Quarkus OptaPlanner project. Prerequisites You have installed the Quarkus CLI. For information, see Building Quarkus Apps with Quarkus Command Line Interface . Procedure Create a Quarkus application: To view the available extensions, enter the following command: This command returns the following extensions: Enter the following command to add extensions to the project's pom.xml file: Open the pom.xml file in a text editor. The contents of the file should look similar to the following example: | [
"<!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>",
"<!-- Configure the Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>",
"mvn --version",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create -DprojectGroupId=com.example -DprojectArtifactId=optaplanner-quickstart -Dextensions=\"resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson\" -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 -DnoExamples",
"cd optaplanner-quickstart",
"<dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-optaplanner-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> </dependencies>",
"cd <directory_name>",
"./mvnw compile quarkus:dev",
"quarkus create app -P io.quarkus:quarkus-bom:2.13.Final-redhat-00006",
"quarkus ext -i",
"optaplanner-quarkus optaplanner-quarkus-benchmark optaplanner-quarkus-jackson optaplanner-quarkus-jsonb",
"quarkus ext add resteasy-jackson quarkus ext add optaplanner-quarkus quarkus ext add optaplanner-quarkus-jackson",
"<?xml version=\"1.0\"?> <project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <modelVersion>4.0.0</modelVersion> <groupId>org.acme</groupId> <artifactId>code-with-quarkus-optaplanner</artifactId> <version>1.0.0-SNAPSHOT</version> <properties> <compiler-plugin.version>3.8.1</compiler-plugin.version> <maven.compiler.parameters>true</maven.compiler.parameters> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id> <quarkus.platform.version>2.13.Final-redhat-00006</quarkus.platform.version> <surefire-plugin.version>3.0.0-M5</surefire-plugin.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>optaplanner-quarkus</artifactId> <version>2.2.2.Final</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-arc</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>USD{compiler-plugin.version}</version> <configuration> <parameters>USD{maven.compiler.parameters}</parameters> </configuration> </plugin> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build> <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <build> <plugins> <plugin> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> 
</goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin> </plugins> </build> <properties> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles> </project>"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/optaplanner-quarkus-con_getting-started-optaplanner |
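Once the project exists, it can also be packaged and run outside development mode. The following commands are an illustrative sketch: they assume the default Quarkus fast-jar packaging (which produces target/quarkus-app/quarkus-run.jar) and the native profile that appears in the generated pom.xml, and the native build additionally requires GraalVM, Mandrel, or a container build to be available:
./mvnw package
java -jar target/quarkus-app/quarkus-run.jar
./mvnw package -Dnative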
probe::udp.recvmsg.return | probe::udp.recvmsg.return Name probe::udp.recvmsg.return - Fires whenever an attempt to receive a UDP message is completed Synopsis udp.recvmsg.return Values name - The name of this probe; family - IP address family; dport - UDP destination port; saddr - A string representing the source IP address; sport - UDP source port; size - Number of bytes received by the process; daddr - A string representing the destination IP address Context The process which received a UDP message | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-udp-recvmsg-return
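As an illustration of how the values above can be used, the following SystemTap one-liner prints the documented variables each time the probe fires; it is a sketch that assumes the systemtap package and the kernel debuginfo matching the running kernel are installed:
stap -e 'probe udp.recvmsg.return { printf("%s: %d bytes from %s:%d\n", name, size, saddr, sport) }'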
Chapter 5. Installing the single-model serving platform | Chapter 5. Installing the single-model serving platform 5.1. About the single-model serving platform For deploying large models such as large language models (LLMs), OpenShift AI includes a single-model serving platform that is based on the KServe component. To install the single-model serving platform, the following components are required: KServe : A Kubernetes custom resource definition (CRD) that orchestrates model serving for all types of models. KServe includes model-serving runtimes that implement the loading of given types of model servers. KServe also handles the lifecycle of the deployment object, storage access, and networking setup. Red Hat OpenShift Serverless : A cloud-native development model that allows for serverless deployments of models. OpenShift Serverless is based on the open source Knative project. Red Hat OpenShift Service Mesh : A service mesh networking layer that manages traffic flows and enforces access policies. OpenShift Service Mesh is based on the open source Istio project. Note Currently, only OpenShift Service Mesh v2 is supported. For more information, see Supported Configurations . You can install the single-model serving platform manually or in an automated fashion: Automated installation If you have not already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you can configure the Red Hat OpenShift AI Operator to install KServe and configure its dependencies. For more information, see Configuring automated installation of KServe Manual installation If you have already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you cannot configure the Red Hat OpenShift AI Operator to install KServe and configure its dependencies. In this situation, you must install KServe manually. For more information, see Manually installing KServe . 5.2. Configuring automated installation of KServe If you have not already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you can configure the Red Hat OpenShift AI Operator to install KServe and configure its dependencies. Important If you have created a ServiceMeshControlPlane or KNativeServing resource on your cluster, the Red Hat OpenShift AI Operator cannot install KServe and configure its dependencies and the installation does not proceed. In this situation, you must follow the manual installation instructions to install KServe. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Your cluster has a node with 4 CPUs and 16 GB memory. You have downloaded and installed the OpenShift command-line interface (CLI). For more information, see Installing the OpenShift CLI . You have installed the Red Hat OpenShift Service Mesh Operator and dependent Operators. Note To enable automated installation of KServe, install only the required Operators for Red Hat OpenShift Service Mesh. Do not perform any additional configuration or create a ServiceMeshControlPlane resource. You have installed the Red Hat OpenShift Serverless Operator. Note To enable automated installation of KServe, install only the Red Hat OpenShift Serverless Operator. Do not perform any additional configuration or create a KNativeServing resource. You have installed the Red Hat OpenShift AI Operator and created a DataScienceCluster object. 
To add Authorino as an authorization provider so that you can enable token authentication for deployed models, you have installed the Red Hat - Authorino Operator. See Installing the Authorino Operator . Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Install OpenShift Service Mesh as follows: Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, validate that the value of the managementState field for the serviceMesh component is set to Managed , as shown: Note Do not change the istio-system namespace that is specified for the serviceMesh component by default. Other namespace values are not supported. Click Save . Based on the configuration you added to the DSCInitialization object, the Red Hat OpenShift AI Operator installs OpenShift Service Mesh. Install both KServe and OpenShift Serverless as follows: In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab. Click the default-dsc DSC object. Click the YAML tab. In the spec.components section, configure the kserve component as shown. Click Save . The preceding configuration creates an ingress gateway for OpenShift Serverless to receive traffic from OpenShift Service Mesh. In this configuration, observe the following details: The configuration shown uses the default ingress certificate configured for OpenShift to secure incoming traffic to your OpenShift cluster and stores the certificate in the knative-serving-cert secret that is specified in the secretName field. The secretName field can only be set at the time of installation. The default value of the secretName field is knative-serving-cert . Subsequent changes to the certificate secret must be made manually. If you did not use the default secretName value during installation, create a new secret named knative-serving-cert in the istio-system namespace, and then restart the istiod-datascience-smcp-<suffix> pod. You can specify the following certificate types by updating the value of the type field: Provided SelfSigned OpenshiftDefaultIngress To use a self-signed certificate or to provide your own, update the value of the secretName field to specify your secret name and change the value of the type field to SelfSigned or Provided . Note If you provide your own certificate, the certificate must specify the domain name used by the ingress controller of your OpenShift cluster. You can check this value by running the following command: $ oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}' You must set the value of the managementState field to Managed for both the kserve and serving components. Setting kserve.managementState to Managed triggers automated installation of KServe. Setting serving.managementState to Managed triggers automated installation of OpenShift Serverless. However, installation of OpenShift Serverless will not be triggered if kserve.managementState is not also set to Managed . Verification Verify installation of OpenShift Service Mesh as follows: In the web console, click Workloads Pods . From the project list, select istio-system . This is the project in which OpenShift Service Mesh is installed. Confirm that there are running pods for the service mesh control plane, ingress gateway, and egress gateway.
These pods have the naming patterns shown in the following example: Verify installation of OpenShift Serverless as follows: In the web console, click Workloads Pods . From the project list, select knative-serving . This is the project in which OpenShift Serverless is installed. Confirm that there are numerous running pods in the knative-serving project, including activator, autoscaler, controller, and domain mapping pods, as well as pods for the Knative Istio controller (which controls the integration of OpenShift Serverless and OpenShift Service Mesh). An example is shown. Verify installation of KServe as follows: In the web console, click Workloads Pods . From the project list, select redhat-ods-applications . This is the project in which OpenShift AI components are installed, including KServe. Confirm that the project includes a running pod for the KServe controller manager, similar to the following example: 5.3. Manually installing KServe If you have already installed the Red Hat OpenShift Service Mesh Operator and created a ServiceMeshControlPlane resource or if you have installed the Red Hat OpenShift Serverless Operator and created a KNativeServing resource, the Red Hat OpenShift AI Operator cannot install KServe and configure its dependencies. In this situation, you must install KServe manually. Important The procedures in this section show how to perform a new installation of KServe and its dependencies and are intended as a complete installation and configuration reference. If you have already installed and configured OpenShift Service Mesh or OpenShift Serverless, you might not need to follow all steps. If you are unsure about what updates to apply to your existing configuration to use KServe, contact Red Hat Support. 5.3.1. Installing KServe dependencies Before you install KServe, you must install and configure some dependencies. Specifically, you must create Red Hat OpenShift Service Mesh and Knative Serving instances and then configure secure gateways for Knative Serving. Note Currently, only OpenShift Service Mesh v2 is supported. For more information, see Supported Configurations . 5.3.2. Creating an OpenShift Service Mesh instance The following procedure shows how to create a Red Hat OpenShift Service Mesh instance. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Your cluster has a node with 4 CPUs and 16 GB memory. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have installed the Red Hat OpenShift Service Mesh Operator and dependent Operators. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Create the required namespace for Red Hat OpenShift Service Mesh. You see the following output: Define a ServiceMeshControlPlane object in a YAML file named smcp.yaml with the following contents: For more information about the values in the YAML file, see the Service Mesh control plane configuration reference . Create the service mesh control plane. Verification Verify creation of the service mesh instance as follows: In the OpenShift CLI, enter the following command: The preceding command lists all running pods in the istio-system project. This is the project in which OpenShift Service Mesh is installed. Confirm that there are running pods for the service mesh control plane, ingress gateway, and egress gateway.
These pods have the following naming patterns: 5.3.3. Creating a Knative Serving instance The following procedure shows how to install Knative Serving and then create an instance. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Your cluster has a node with 4 CPUs and 16 GB memory. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have created a Red Hat OpenShift Service Mesh instance. You have installed the Red Hat OpenShift Serverless Operator. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Check whether the required project (that is, namespace ) for Knative Serving already exists. If the project exists, you see output similar to the following example: If the knative-serving project does not already exist, create it. You see the following output: Define a ServiceMeshMember object in a YAML file called default-smm.yaml with the following contents: Create the ServiceMeshMember object in the istio-system namespace. You see the following output: Define a KnativeServing object in a YAML file called knativeserving-istio.yaml with the following contents: The preceding file defines a custom resource (CR) for a KnativeServing object. The CR also adds the following actions to each of the activator and autoscaler pods: 1 Injects an Istio sidecar into the pod. This makes the pod part of the service mesh. 2 Enables the Istio sidecar to rewrite the HTTP liveness and readiness probes for the pod. Note If you configure a custom domain for a Knative service, you can use a TLS certificate to secure the mapped service. To do this, you must create a TLS secret, and then update the DomainMapping CR to use the TLS secret that you have created. For more information, see Securing a mapped service using a TLS certificate in the Red Hat OpenShift Serverless documentation. Create the KnativeServing object in the specified knative-serving namespace. You see the following output: Verification Review the default ServiceMeshMemberRoll object in the istio-system namespace. In the description of the ServiceMeshMemberRoll object, locate the Status.Members field and confirm that it includes the knative-serving namespace. Verify creation of the Knative Serving instance as follows: In the OpenShift CLI, enter the following command: The preceding command lists all running pods in the knative-serving project. This is the project in which you created the Knative Serving instance. Confirm that there are numerous running pods in the knative-serving project, including activator, autoscaler, controller, and domain mapping pods, as well as pods for the Knative Istio controller, which controls the integration of OpenShift Serverless and OpenShift Service Mesh. An example is shown. 5.3.4. Creating secure gateways for Knative Serving To secure traffic between your Knative Serving instance and the service mesh, you must create secure gateways for your Knative Serving instance. The following procedure shows how to use OpenSSL version 3 or later to generate a wildcard certificate and key and then use them to create local and ingress gateways for Knative Serving. Important If you have your own wildcard certificate and key to specify when configuring the gateways, you can skip to step 11 of this procedure. Prerequisites You have cluster administrator privileges for your OpenShift cluster.
You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have created a Red Hat OpenShift Service Mesh instance. You have created a Knative Serving instance. If you intend to generate a wildcard certificate and key, you have downloaded and installed OpenSSL version 3 or later. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Important If you have your own wildcard certificate and key to specify when configuring the gateways, skip to step 11 of this procedure. Set environment variables to define base directories for generation of a wildcard certificate and key for the gateways. Set an environment variable to define the common name used by the ingress controller of your OpenShift cluster. Set an environment variable to define the domain name used by the ingress controller of your OpenShift cluster. Create the required base directories for the certificate generation, based on the environment variables that you previously set. Create the OpenSSL configuration for generation of a wildcard certificate. Generate a root certificate. Generate a wildcard certificate signed by the root certificate. Verify the wildcard certificate. Export the wildcard key and certificate that were created by the preceding commands to new environment variables. Optional: To export your own wildcard key and certificate to new environment variables, enter the following commands: Note In the certificate that you provide, you must specify the domain name used by the ingress controller of your OpenShift cluster. You can check this value by running the following command: $ oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}' Create a TLS secret in the istio-system namespace using the environment variables that you set for the wildcard certificate and key. Create a gateways.yaml YAML file with the following contents: 1 Defines a service in the istio-system namespace for the Knative local gateway. 2 Defines an ingress gateway in the knative-serving namespace . The gateway uses the TLS secret you created earlier in this procedure. The ingress gateway handles external traffic to Knative. 3 Defines a local gateway for Knative in the knative-serving namespace. Apply the gateways.yaml file to create the defined resources. You see the following output: Verification Review the gateways that you created. Confirm that you see the local and ingress gateways that you created in the knative-serving namespace, as shown in the following example: 5.3.5. Installing KServe To complete manual installation of KServe, you must install the Red Hat OpenShift AI Operator. Then, you can configure the Operator to install KServe. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Your cluster has a node with 4 CPUs and 16 GB memory. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have created a Red Hat OpenShift Service Mesh instance. You have created a Knative Serving instance. You have created secure gateways for Knative Serving. You have installed the Red Hat OpenShift AI Operator and created a DataScienceCluster object. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator.
For installation of KServe, configure the OpenShift Service Mesh component as follows: Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, add and configure the serviceMesh component as shown: Click Save . For installation of KServe, configure the KServe and OpenShift Serverless components as follows: In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab. Click the default-dsc DSC object. Click the YAML tab. In the spec.components section, configure the kserve component as shown: Within the kserve component, add the serving component, and configure it as shown: Click Save . 5.3.6. Configuring persistent volume claims (PVC) on KServe Enable persistent volume claims (PVC) on your inference service so you can provision persistent storage. For more information about PVC, see Understanding persistent storage . To enable PVC, from the OpenShift AI dashboard, select the Project drop-down and click knative-serving . Then, follow the steps in Enabling PVC support . Verification Verify that the inference service allows PVC as follows: In the OpenShift web console, switch to the Administrator perspective. Click Home Search . In Resources , search for InferenceService . Click the name of the inference service. Click the YAML tab. Confirm that volumeMounts appears, similar to the following output: 5.3.7. Disabling KServe dependencies If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. Prerequisites You have used the OpenShift command-line interface (CLI) or web console to disable the KServe component. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Disable the OpenShift Service Mesh component as follows: Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, add the serviceMesh component (if it is not already present) and configure the managementState field as shown: Click Save . Verification In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. The Operator details page opens. In the Conditions section, confirm that there is no ReconcileComplete condition with a status value of Unknown . 5.4. Adding an authorization provider for the single-model serving platform You can add Authorino as an authorization provider for the single-model serving platform. Adding an authorization provider allows you to enable token authentication for models that you deploy on the platform, which ensures that only authorized parties can make inference requests to the models. The method that you use to add Authorino as an authorization provider depends on how you install the single-model serving platform. The installation options for the platform are described as follows: Automated installation If you have not already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you can configure the Red Hat OpenShift AI Operator to install KServe and its dependencies. You can include Authorino as part of the automated installation process. For more information about automated installation, including Authorino, see Configuring automated installation of KServe .
Manual installation If you have already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you cannot configure the Red Hat OpenShift AI Operator to install KServe and its dependencies. In this situation, you must install KServe manually. You must also manually configure Authorino. For more information about manual installation, including Authorino, see Manually installing KServe . 5.4.1. Manually adding an authorization provider You can add Authorino as an authorization provider for the single-model serving platform. Adding an authorization provider allows you to enable token authentication for models that you deploy on the platform, which ensures that only authorized parties can make inference requests to the models. To manually add Authorino as an authorization provider, you must install the Red Hat - Authorino Operator, create an Authorino instance, and then configure the OpenShift Service Mesh and KServe components to use the instance. Important To manually add an authorization provider, you must make configuration updates to your OpenShift Service Mesh instance. To ensure that your OpenShift Service Mesh instance remains in a supported state, make only the updates shown in this section. Prerequisites You have reviewed the options for adding Authorino as an authorization provider and identified manual installation as the appropriate option. See Adding an authorization provider . You have manually installed KServe and its dependencies, including OpenShift Service Mesh. See Manually installing KServe . When you manually installed KServe, you set the value of the managementState field for the serviceMesh component to Unmanaged . This setting is required for manually adding Authorino. See Installing KServe . 5.4.2. Installing the Red Hat Authorino Operator Before you can add Authorino as an authorization provider, you must install the Red Hat - Authorino Operator on your OpenShift cluster. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators OperatorHub . On the OperatorHub page, in the Filter by keyword field, type Red Hat - Authorino . Click the Red Hat - Authorino Operator. On the Red Hat - Authorino Operator page, review the Operator information and then click Install . On the Install Operator page, keep the default values for Update channel , Version , Installation mode , Installed Namespace , and Update Approval . Click Install . Verification In the OpenShift web console, click Operators Installed Operators and confirm that the Red Hat - Authorino Operator shows one of the following statuses: Installing - installation is in progress; wait for this to change to Succeeded . This might take several minutes. Succeeded - installation is successful. 5.4.3. Creating an Authorino instance When you have installed the Red Hat - Authorino Operator on your OpenShift cluster, you must create an Authorino instance. Prerequisites You have installed the Red Hat - Authorino Operator. You have privileges to add resources to the project in which your OpenShift Service Mesh instance was created. See Creating an OpenShift Service Mesh instance . For more information about OpenShift permissions, see Using RBAC to define and apply permissions . Procedure Open a new terminal window. Log in to the OpenShift command-line interface (CLI) as follows: Create a namespace in which to install the Authorino instance.
Note The automated installation process creates a namespace called redhat-ods-applications-auth-provider for the Authorino instance. Consider using the same namespace name for the manual installation. To enroll the new namespace for the Authorino instance in your existing OpenShift Service Mesh instance, create a new YAML file with the following contents: Save the YAML file. Create the ServiceMeshMember resource on your cluster. To configure an Authorino instance, create a new YAML file as shown in the following example: Save the YAML file. Create the Authorino resource on your cluster. Patch the Authorino deployment to inject an Istio sidecar, which makes the Authorino instance part of your OpenShift Service Mesh instance. Verification Confirm that the Authorino instance is running as follows: Check the pods (and containers) that are running in the namespace that you created for the Authorino instance, as shown in the following example: Confirm that the output resembles the following example: As shown in the example, there is a single running pod for the Authorino instance. The pod has containers for Authorino and for the Istio sidecar that you injected. 5.4.4. Configuring an OpenShift Service Mesh instance to use Authorino When you have created an Authorino instance, you must configure your OpenShift Service Mesh instance to use Authorino as an authorization provider. Important To ensure that your OpenShift Service Mesh instance remains in a supported state, make only the configuration updates shown in the following procedure. Prerequisites You have created an Authorino instance and enrolled the namespace for the Authorino instance in your OpenShift Service Mesh instance. You have privileges to modify the OpenShift Service Mesh instance. See Creating an OpenShift Service Mesh instance . Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a user that has privileges to update the OpenShift Service Mesh instance, log in to the OpenShift CLI as shown in the following example: Create a new YAML file with the following contents: Save the YAML file. Use the oc patch command to apply the YAML file to your OpenShift Service Mesh instance. Important You can apply the configuration shown as a patch only if you have not already specified other extension providers in your OpenShift Service Mesh instance. If you have already specified other extension providers, you must manually edit your ServiceMeshControlPlane resource to add the configuration. Verification Verify that your Authorino instance has been added as an extension provider in your OpenShift Service Mesh configuration as follows: Inspect the ConfigMap object for your OpenShift Service Mesh instance: Confirm that you see output similar to the following example, which shows that the Authorino instance has been successfully added as an extension provider. 5.4.5. Configuring authorization for KServe To configure the single-model serving platform to use Authorino, you must create a global AuthorizationPolicy resource that is applied to the KServe predictor pods that are created when you deploy a model. In addition, to account for the multiple network hops that occur when you make an inference request to a model, you must create an EnvoyFilter resource that continually resets the HTTP host header to the one initially included in the inference request. Prerequisites You have created an Authorino instance and configured your OpenShift Service Mesh to use it. 
You have privileges to update the KServe deployment on your cluster. You have privileges to add resources to the project in which your OpenShift Service Mesh instance was created. See Creating an OpenShift Service Mesh instance . Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a user that has privileges to update the KServe deployment, log in to the OpenShift CLI as shown in the following example: Create a new YAML file with the following contents: 1 The name that you specify must match the name of the extension provider that you added to your OpenShift Service Mesh instance. Save the YAML file. Create the AuthorizationPolicy resource in the namespace for your OpenShift Service Mesh instance. Create another new YAML file with the following contents: The EnvoyFilter resource shown continually resets the HTTP host header to the one initially included in any inference request. Create the EnvoyFilter resource in the namespace for your OpenShift Service Mesh instance. Verification Check that the AuthorizationPolicy resource was successfully created. Confirm that you see output similar to the following example: Check that the EnvoyFilter resource was successfully created. Confirm that you see output similar to the following example: | [
"spec: applicationsNamespace: redhat-ods-applications monitoring: managementState: Managed namespace: redhat-ods-monitoring serviceMesh: controlPlane: metricsCollection: Istio name: data-science-smcp namespace: istio-system managementState: Managed",
"spec: components: kserve: managementState: Managed serving: ingressGateway: certificate: secretName: knative-serving-cert type: OpenshiftDefaultIngress managementState: Managed name: knative-serving",
"NAME READY STATUS RESTARTS AGE istio-egressgateway-7c46668687-fzsqj 1/1 Running 0 22h istio-ingressgateway-77f94d8f85-fhsp9 1/1 Running 0 22h istiod-data-science-smcp-cc8cfd9b8-2rkg4 1/1 Running 0 22h",
"NAME READY STATUS RESTARTS AGE activator-7586f6f744-nvdlb 2/2 Running 0 22h activator-7586f6f744-sd77w 2/2 Running 0 22h autoscaler-764fdf5d45-p2v98 2/2 Running 0 22h autoscaler-764fdf5d45-x7dc6 2/2 Running 0 22h autoscaler-hpa-7c7c4cd96d-2lkzg 1/1 Running 0 22h autoscaler-hpa-7c7c4cd96d-gks9j 1/1 Running 0 22h controller-5fdfc9567c-6cj9d 1/1 Running 0 22h controller-5fdfc9567c-bf5x7 1/1 Running 0 22h domain-mapping-56ccd85968-2hjvp 1/1 Running 0 22h domain-mapping-56ccd85968-lg6mw 1/1 Running 0 22h domainmapping-webhook-769b88695c-gp2hk 1/1 Running 0 22h domainmapping-webhook-769b88695c-npn8g 1/1 Running 0 22h net-istio-controller-7dfc6f668c-jb4xk 1/1 Running 0 22h net-istio-controller-7dfc6f668c-jxs5p 1/1 Running 0 22h net-istio-webhook-66d8f75d6f-bgd5r 1/1 Running 0 22h net-istio-webhook-66d8f75d6f-hld75 1/1 Running 0 22h webhook-7d49878bc4-8xjbr 1/1 Running 0 22h webhook-7d49878bc4-s4xx4 1/1 Running 0 22h",
"NAME READY STATUS RESTARTS AGE kserve-controller-manager-7fbb7bccd4-t4c5g 1/1 Running 0 22h odh-model-controller-6c4759cc9b-cftmk 1/1 Running 0 129m odh-model-controller-6c4759cc9b-ngj8b 1/1 Running 0 129m odh-model-controller-6c4759cc9b-vnhq5 1/1 Running 0 129m",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"oc create ns istio-system",
"namespace/istio-system created",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: minimal namespace: istio-system spec: tracing: type: None addons: grafana: enabled: false kiali: name: kiali enabled: false prometheus: enabled: false jaeger: name: jaeger security: dataPlane: mtls: true identity: type: ThirdParty techPreview: meshConfig: defaultConfig: terminationDrainDuration: 35s gateways: ingress: service: metadata: labels: knative: ingressgateway proxy: networking: trafficControl: inbound: excludedPorts: - 8444 - 8022",
"oc apply -f smcp.yaml",
"oc get pods -n istio-system",
"NAME READY STATUS RESTARTS AGE istio-egressgateway-7c46668687-fzsqj 1/1 Running 0 22h istio-ingressgateway-77f94d8f85-fhsp9 1/1 Running 0 22h istiod-data-science-smcp-cc8cfd9b8-2rkg4 1/1 Running 0 22h",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"oc get ns knative-serving",
"NAME STATUS AGE knative-serving Active 4d20h",
"oc create ns knative-serving",
"namespace/knative-serving created",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: knative-serving spec: controlPlaneRef: namespace: istio-system name: minimal",
"oc apply -f default-smm.yaml",
"servicemeshmember.maistra.io/default created",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/default-enable-http2: \"true\" spec: workloads: - name: net-istio-controller env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'true' - annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2 name: activator - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: autoscaler ingress: istio: enabled: true config: features: kubernetes.podspec-affinity: enabled kubernetes.podspec-nodeselector: enabled kubernetes.podspec-tolerations: enabled",
"oc apply -f knativeserving-istio.yaml",
"knativeserving.operator.knative.dev/knative-serving created",
"oc describe smmr default -n istio-system",
"oc get pods -n knative-serving",
"NAME READY STATUS RESTARTS AGE activator-7586f6f744-nvdlb 2/2 Running 0 22h activator-7586f6f744-sd77w 2/2 Running 0 22h autoscaler-764fdf5d45-p2v98 2/2 Running 0 22h autoscaler-764fdf5d45-x7dc6 2/2 Running 0 22h autoscaler-hpa-7c7c4cd96d-2lkzg 1/1 Running 0 22h autoscaler-hpa-7c7c4cd96d-gks9j 1/1 Running 0 22h controller-5fdfc9567c-6cj9d 1/1 Running 0 22h controller-5fdfc9567c-bf5x7 1/1 Running 0 22h domain-mapping-56ccd85968-2hjvp 1/1 Running 0 22h domain-mapping-56ccd85968-lg6mw 1/1 Running 0 22h domainmapping-webhook-769b88695c-gp2hk 1/1 Running 0 22h domainmapping-webhook-769b88695c-npn8g 1/1 Running 0 22h net-istio-controller-7dfc6f668c-jb4xk 1/1 Running 0 22h net-istio-controller-7dfc6f668c-jxs5p 1/1 Running 0 22h net-istio-webhook-66d8f75d6f-bgd5r 1/1 Running 0 22h net-istio-webhook-66d8f75d6f-hld75 1/1 Running 0 22h webhook-7d49878bc4-8xjbr 1/1 Running 0 22h webhook-7d49878bc4-s4xx4 1/1 Running 0 22h",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"export BASE_DIR=/tmp/kserve export BASE_CERT_DIR=USD{BASE_DIR}/certs",
"export COMMON_NAME=USD(oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}' | awk -F'.' '{print USD(NF-1)\".\"USDNF}')",
"export DOMAIN_NAME=USD(oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}')",
"mkdir USD{BASE_DIR} mkdir USD{BASE_CERT_DIR}",
"cat <<EOF> USD{BASE_DIR}/openssl-san.config [ req ] distinguished_name = req [ san ] subjectAltName = DNS:*.USD{DOMAIN_NAME} EOF",
"openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:2048 -subj \"/O=Example Inc./CN=USD{COMMON_NAME}\" -keyout USD{BASE_CERT_DIR}/root.key -out USD{BASE_CERT_DIR}/root.crt",
"openssl req -x509 -newkey rsa:2048 -sha256 -days 3560 -nodes -subj \"/CN=USD{COMMON_NAME}/O=Example Inc.\" -extensions san -config USD{BASE_DIR}/openssl-san.config -CA USD{BASE_CERT_DIR}/root.crt -CAkey USD{BASE_CERT_DIR}/root.key -keyout USD{BASE_CERT_DIR}/wildcard.key -out USD{BASE_CERT_DIR}/wildcard.crt openssl x509 -in USD{BASE_CERT_DIR}/wildcard.crt -text",
"openssl verify -CAfile USD{BASE_CERT_DIR}/root.crt USD{BASE_CERT_DIR}/wildcard.crt",
"export TARGET_CUSTOM_CERT=USD{BASE_CERT_DIR}/wildcard.crt export TARGET_CUSTOM_KEY=USD{BASE_CERT_DIR}/wildcard.key",
"export TARGET_CUSTOM_CERT= <path_to_certificate> export TARGET_CUSTOM_KEY= <path_to_key>",
"oc create secret tls wildcard-certs --cert=USD{TARGET_CUSTOM_CERT} --key=USD{TARGET_CUSTOM_KEY} -n istio-system",
"apiVersion: v1 kind: Service 1 metadata: labels: experimental.istio.io/disable-gateway-port-translation: \"true\" name: knative-local-gateway namespace: istio-system spec: ports: - name: http2 port: 80 protocol: TCP targetPort: 8081 selector: knative: ingressgateway type: ClusterIP --- apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: knative-ingress-gateway 2 namespace: knative-serving spec: selector: knative: ingressgateway servers: - hosts: - '*' port: name: https number: 443 protocol: HTTPS tls: credentialName: wildcard-certs mode: SIMPLE --- apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: knative-local-gateway 3 namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 8081 name: https protocol: HTTPS tls: mode: ISTIO_MUTUAL hosts: - \"*\"",
"oc apply -f gateways.yaml",
"service/knative-local-gateway created gateway.networking.istio.io/knative-ingress-gateway created gateway.networking.istio.io/knative-local-gateway created",
"oc get gateway --all-namespaces",
"NAMESPACE NAME AGE knative-serving knative-ingress-gateway 69s knative-serving knative-local-gateway 2m",
"spec: serviceMesh: managementState: Unmanaged",
"spec: components: kserve: managementState: Managed",
"spec: components: kserve: managementState: Managed serving: managementState: Unmanaged",
"apiVersion: \"serving.kserve.io/v1beta1\" kind: \"InferenceService\" metadata: name: \"sklearn-iris\" spec: predictor: model: runtime: kserve-mlserver modelFormat: name: sklearn storageUri: \"gs://kfserving-examples/models/sklearn/1.0/model\" volumeMounts: - name: my-dynamic-volume mountPath: /tmp/data volumes: - name: my-dynamic-volume persistentVolumeClaim: claimName: my-dynamic-pvc",
"spec: serviceMesh: managementState: Removed",
"oc login <openshift_cluster_url> -u <username> -p <password>",
"oc new-project <namespace_for_authorino_instance>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: <namespace_for_authorino_instance> spec: controlPlaneRef: namespace: <namespace_for_service_mesh_instance> name: <name_of_service_mesh_instance>",
"oc create -f <file_name> .yaml",
"apiVersion: operator.authorino.kuadrant.io/v1beta1 kind: Authorino metadata: name: authorino namespace: <namespace_for_authorino_instance> spec: authConfigLabelSelectors: security.opendatahub.io/authorization-group=default clusterWide: true listener: tls: enabled: false oidcServer: tls: enabled: false",
"oc create -f <file_name> .yaml",
"oc patch deployment <name_of_authorino_instance> -n <namespace_for_authorino_instance> -p '{\"spec\": {\"template\":{\"metadata\":{\"labels\":{\"sidecar.istio.io/inject\":\"true\"}}}} }'",
"oc get pods -n redhat-ods-applications-auth-provider -o=\"custom-columns=NAME:.metadata.name,STATUS:.status.phase,CONTAINERS:.spec.containers[*].name\"",
"NAME STATUS CONTAINERS authorino-6bc64bd667-kn28z Running authorino,istio-proxy",
"oc login <openshift_cluster_url> -u <username> -p <password>",
"spec: techPreview: meshConfig: extensionProviders: - name: redhat-ods-applications-auth-provider envoyExtAuthzGrpc: service: <name_of_authorino_instance> -authorino-authorization. <namespace_for_authorino_instance> .svc.cluster.local port: 50051",
"oc patch smcp <name_of_service_mesh_instance> --type merge -n <namespace_for_service_mesh_instance> --patch-file <file_name> .yaml",
"oc get configmap istio- <name_of_service_mesh_instance> -n <namespace_for_service_mesh_instance> --output=jsonpath= {.data.mesh}",
"defaultConfig: discoveryAddress: istiod-data-science-smcp.istio-system.svc:15012 proxyMetadata: ISTIO_META_DNS_AUTO_ALLOCATE: \"true\" ISTIO_META_DNS_CAPTURE: \"true\" PROXY_XDS_VIA_AGENT: \"true\" terminationDrainDuration: 35s tracing: {} dnsRefreshRate: 300s enablePrometheusMerge: true extensionProviders: - envoyExtAuthzGrpc: port: 50051 service: authorino-authorino-authorization.opendatahub-auth-provider.svc.cluster.local name: opendatahub-auth-provider ingressControllerMode: \"OFF\" rootNamespace: istio-system trustDomain: null%",
"oc login <openshift_cluster_url> -u <username> -p <password>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: kserve-predictor spec: action: CUSTOM provider: name: redhat-ods-applications-auth-provider 1 rules: - to: - operation: notPaths: - /healthz - /debug/pprof/ - /metrics - /wait-for-drain selector: matchLabels: component: predictor",
"oc create -n <namespace_for_service_mesh_instance> -f <file_name> .yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: activator-host-header spec: priority: 20 workloadSelector: labels: component: predictor configPatches: - applyTo: HTTP_FILTER match: listener: filterChain: filter: name: envoy.filters.network.http_connection_manager patch: operation: INSERT_BEFORE value: name: envoy.filters.http.lua typed_config: '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua inlineCode: | function envoy_on_request(request_handle) local headers = request_handle:headers() if not headers then return end local original_host = headers:get(\"k-original-host\") if original_host then port_seperator = string.find(original_host, \":\", 7) if port_seperator then original_host = string.sub(original_host, 0, port_seperator-1) end headers:replace('host', original_host) end end",
"oc create -n <namespace_for_service_mesh_instance> -f <file_name> .yaml",
"oc get authorizationpolicies -n <namespace_for_service_mesh_instance>",
"NAME AGE kserve-predictor 28h",
"oc get envoyfilter -n <namespace_for_service_mesh_instance>",
"NAME AGE activator-host-header 28h"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/installing-the-single-model-serving-platform_component-install |
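As a quick follow-up check for either installation path, the management states and the ingress certificate secret that this chapter relies on can be read back from the CLI. This is an illustrative sketch that assumes the default-dsci and default-dsc object names and the default knative-serving-cert secret name used above:
oc get dscinitialization default-dsci -o jsonpath='{.spec.serviceMesh.managementState}'
oc get datasciencecluster default-dsc -o jsonpath='{.spec.components.kserve.managementState}'
oc get secret knative-serving-cert -n istio-system
With the automated installation, the first two commands return Managed; with the manual installation, the serviceMesh component reports Unmanaged as configured in section 5.3.5.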
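To illustrate what the token authentication described in section 5.4 means for a client, the general shape of an authenticated inference request is sketched below. The service account, project, route host, model name, and URL path are placeholders, the exact path depends on the serving runtime, and oc create token requires a recent OpenShift CLI:
TOKEN=$(oc create token <service_account_name> -n <model_project>)
curl -H "Authorization: Bearer $TOKEN" https://<inference_route_host>/v2/models/<model_name>/infer -d @input.json
Requests that omit a valid token are rejected by the AuthorizationPolicy created in section 5.4.5, which is the intended effect of adding Authorino as the authorization provider.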
Chapter 4. Configuring the OpenStack Integration Test Suite | 4.1. Creating a Workspace Source the credentials for the target deployment: If the target is in the undercloud, source the credentials for the undercloud: If the target is in the overcloud, source the credentials for the overcloud: Initialize tempest : This command creates a tempest workspace named mytempest . Run the following command to view a list of existing workspaces: Generate the etc/tempest.conf file: Replace UUID with the UUID of the external network. discover-tempest-config was formerly called config_tempest.py and takes the same parameters. It is provided by python-tempestconf, which is installed as a dependency of openstack-tempest . Note To generate the etc/tempest.conf file for the undercloud, ensure that the region name in the tempest-deployer-input.conf file is the same as the name in the undercloud deployment. If these names do not match, update the region name in the tempest-deployer-input.conf file to match the region name of your undercloud. To inspect the region name of your undercloud, run the following commands: To inspect the region name of your overcloud, run the following commands: 4.2. Configuring Tempest Manually The discover-tempest-config command generates the tempest.conf file automatically. However, you must ensure that the tempest.conf file corresponds to the configuration of your environment. 4.2.1. Configuring Tempest Extension Lists Manually The default tempest.conf file contains lists of extensions for each component. Inspect the api_extensions attribute for each component in the tempest.conf file and verify that the lists of extensions correspond to your deployment. If the extensions that are available in your deployment do not correspond to the list of extensions in the api_extensions attribute of the tempest.conf file, the component fails tempest tests. To prevent this failure, you must identify the extensions that are available in your deployment and include them in the api_extensions parameter. To get a list of Network, Compute, Volume, or Identity extensions in your deployment, run the following command: 4.2.2. Configuring heat_plugin Manually Configure the heat plugin manually according to your deployment configuration. The following example contains the minimum tempest.conf configuration for heat_plugin : Use the openstack network list command to identify networks for the fixed_network_name , network_for_ssh , and floating_network_name parameters. Note You must set heat to True in the [service_available] section of the tempest.conf file, and the user in the username attribute of the [heat_plugin] section must have the role member . For example, run the following command to add the member role to the demo user: 4.3. Verifying Your Tempest Configuration Verify your current tempest configuration: output is the output file where Tempest writes your updated configuration. This is different from your original configuration file. 4.4. Changing the Logging Configuration The default location for log files is the logs directory within your tempest workspace.
To change this directory, in tempest.conf , under the [DEFAULT] section, set log_dir to the desired directory: If you have your own logging configuration file, in tempest.conf , under the [DEFAULT] section, set log_config_append to your file: If you set the log_config_append attribute, Tempest ignores all other logging configuration in tempest.conf , including the log_dir attribute. 4.5. Configuring Microversion Tests The OpenStack Integration Test Suite provides stable interfaces to test the API microversions. This section contains information about implementing microversion tests using these interfaces. First, you must configure options in the tempest.conf configuration file to specify the target microversions. Configure these options to ensure that the supported microversions correspond to the microversions used in the OpenStack cloud. You can specify a range of target microversions to run multiple microversion tests in a single Integration Test Suite operation. For example, to limit the range of microversions for the compute service, in the [compute] section of your configuration file, assign values to the min_microversion and max_microversion parameters: | [
"source stackrc",
"source overcloudrc",
"tempest init mytempest cd mytempest",
"tempest workspace list",
"discover-tempest-config --deployer-input ~/tempest-deployer-input.conf --debug --create --network-id <UUID>",
"source stackrc openstack region list",
"source overcloudrc openstack region list",
"openstack extension list [--network] [--compute] [--volume] [--identity]",
"[service_available] heat = True [heat_plugin] username = demo password = *** project_name = demo admin_username = admin admin_password = **** admin_project_name = admin auth_url = http://10.0.0.110:5000//v3 auth_version = 3 user_domain_id = default project_domain_id = default user_domain_name = Default project_domain_name = Default region = regionOne fixed_network_name = demo_project_network network_for_ssh = public floating_network_name = nova instance_type = m1.nano minimal_instance_type = m1.micro image_ref = 7faed41e-a56c-4971-bf48-24e4e23e69a5 minimal_image_ref = 7faed41e-a56c-4971-bf48-24e4e23e69a5",
"openstack role add --user demo --project demo member",
"tempest verify-config -o <output>",
"[DEFAULT] log_dir = <directory>",
"[DEFAULT] log_config_append = <file>",
"[compute] min_microversion = 2.14 max_microversion = latest"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/openstack_integration_test_suite_guide/chap-configure-tempest |
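When you update the api_extensions lists described in section 4.2.1, it can be convenient to convert the extension listing into the comma-separated alias format that tempest.conf expects. The following helper is illustrative and assumes the standard python-openstackclient output columns:
openstack extension list --network -f value -c Alias | paste -s -d, -
Repeat the command with --compute, --volume, or --identity and paste each result into the matching api_extensions option.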
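After the verification step in section 4.3, a short smoke run is a quick way to confirm that the workspace and the generated tempest.conf are usable. This is an illustrative invocation, assuming you run it from inside the workspace directory (for example ~/mytempest) so that the workspace configuration is picked up:
tempest run --smoke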
Chapter 7. Running and interpreting hardware and firmware latency tests | With the hwlatdetect program, you can test and verify whether a potential hardware platform is suitable for real-time operations. Prerequisites Ensure that the RHEL-RT (RHEL for Real Time) and realtime-tests packages are installed. Check the vendor documentation for any tuning steps required for low latency operation. The vendor documentation can provide instructions to reduce or remove any System Management Interrupts (SMIs) that would transition the system into System Management Mode (SMM). While a system is in SMM, it runs firmware and not operating system code. This means that any timers that expire while in SMM wait until the system transitions back to normal operation. This can cause unexplained latencies, because SMIs cannot be blocked by Linux, and the only indication that the system actually took an SMI can be found in vendor-specific performance counter registers. Warning Red Hat strongly recommends that you do not completely disable SMIs, as it can result in catastrophic hardware failure. 7.1. Running hardware and firmware latency tests It is not required to run any load on the system while running the hwlatdetect program, because the test looks for latencies introduced by the hardware architecture or BIOS or EFI firmware. The default values for hwlatdetect are to poll for 0.5 seconds each second, and report any gaps greater than 10 microseconds between consecutive calls to fetch the time. hwlatdetect returns the best maximum latency possible on the system. Therefore, if you have an application that requires maximum latency values of less than 10us and hwlatdetect reports one of the gaps as 20us, then the system can only guarantee latency of 20us. Note If hwlatdetect shows that the system cannot meet the latency requirements of the application, try changing the BIOS settings or working with the system vendor to get new firmware that meets the latency requirements of the application. Prerequisites Ensure that the RHEL-RT and realtime-tests packages are installed. Procedure Run hwlatdetect , specifying the test duration in seconds. hwlatdetect looks for hardware and firmware-induced latencies by polling the clock-source and looking for unexplained gaps. Additional resources hwlatdetect man page on your system Interpreting hardware and firmware latency tests 7.2. Interpreting hardware and firmware latency test results The hardware latency detector ( hwlatdetect ) uses the tracer mechanism to detect latencies introduced by the hardware architecture or BIOS/EFI firmware. By checking the latencies measured by hwlatdetect , you can determine whether a hardware platform is suitable to support the RHEL for Real Time kernel. Examples The following example result represents a system tuned to minimize system interruptions from firmware. In this situation, the output of hwlatdetect looks like this: The next example result represents a system that could not be tuned to minimize system interruptions from firmware. In this situation, the output of hwlatdetect looks like this: The output shows that during the consecutive reads of the system clocksource , there were 10 delays that showed up in the 15-18 us range. Note Previous versions used a kernel module rather than the ftrace tracer. Understanding the results The information on testing method, parameters, and results helps you understand the latency parameters and the latency values detected by the hwlatdetect utility.
The table Testing method, parameters, and results lists the parameters and the latency values detected by the hwlatdetect utility. Table 7.1. Testing method, parameters, and results Parameter Value Description test duration 10 seconds The duration of the test in seconds detector tracer The utility that runs the detector thread parameters Latency threshold 10us The maximum allowable latency Sample window 1000000us 1 second Sample width 500000us 0.5 seconds Non-sampling period 500000us 0.5 seconds Output File None The file to which the output is saved. Results Max Latency 18us The highest latency during the test that exceeded the Latency threshold. If no sample exceeded the Latency threshold, the report shows Below threshold. Samples recorded 10 The number of samples recorded by the test. Samples exceeding threshold 10 The number of samples recorded by the test where the latency exceeded the Latency threshold. SMIs during run 0 The number of System Management Interrupts (SMIs) that occurred during the test run. Note The values printed by the hwlatdetect utility for inner and outer are the maximum latency values. They are deltas between consecutive reads of the current system clocksource (usually the TSC register, but potentially the HPET or ACPI power management clock) and any delays between consecutive reads introduced by the hardware-firmware combination. After finding a suitable hardware-firmware combination, the next step is to test the real-time performance of the system under load. | [
"hwlatdetect --duration=60s hwlatdetect: test duration 60 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 500000us Non-sampling period: 500000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0 Samples exceeding threshold: 0",
"hwlatdetect --duration=60s hwlatdetect: test duration 60 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 500000us Non-sampling period: 500000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0 Samples exceeding threshold: 0",
"hwlatdetect --duration=10s hwlatdetect: test duration 10 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 500000us Non-sampling period: 500000us Output File: None Starting test test finished Max Latency: 18us Samples recorded: 10 Samples exceeding threshold: 10 SMIs during run: 0 ts: 1519674281.220664736, inner:17, outer:15 ts: 1519674282.721666674, inner:18, outer:17 ts: 1519674283.722667966, inner:16, outer:17 ts: 1519674284.723669259, inner:17, outer:18 ts: 1519674285.724670551, inner:16, outer:17 ts: 1519674286.725671843, inner:17, outer:17 ts: 1519674287.726673136, inner:17, outer:16 ts: 1519674288.727674428, inner:16, outer:18 ts: 1519674289.728675721, inner:17, outer:17 ts: 1519674290.729677013, inner:18, outer:17----"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_running-and-interpreting-hardware-and-firmware-latency-tests_optimizing-RHEL9-for-real-time-for-low-latency-operation |
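The interpretation rule described in the hwlatdetect sections above - compare the reported Max Latency against the latency budget of the application - is easy to automate. The following shell sketch is not part of the Red Hat documentation; it assumes the output format shown in the examples above ("Max Latency: Below threshold" or "Max Latency: <N>us"), and the duration and budget values are illustrative parameters supplied by the operator:

```bash
#!/bin/bash
# Sketch: run hwlatdetect and fail if the reported Max Latency exceeds
# the application's latency budget (both values are assumptions passed
# in by the operator, not defaults taken from the documentation).
DURATION="${1:-60s}"    # e.g. 60s, as in the examples above
BUDGET_US="${2:-10}"    # maximum tolerable gap in microseconds

OUTPUT="$(hwlatdetect --duration="$DURATION" 2>&1)"
echo "$OUTPUT"

# Extract the value after "Max Latency:" from the report.
MAX="$(echo "$OUTPUT" | awk -F': *' '/Max Latency/ {print $2}')"
if [ "$MAX" = "Below threshold" ]; then
    echo "PASS: no gap above the 10us detection threshold was recorded"
    exit 0
fi

MAX_US="${MAX%us}"      # strip the trailing "us"
if [ "$MAX_US" -gt "$BUDGET_US" ]; then
    echo "FAIL: worst hardware/firmware gap ${MAX_US}us exceeds budget ${BUDGET_US}us"
    exit 1
fi
echo "PASS: worst gap ${MAX_US}us is within the ${BUDGET_US}us budget"
```

Treat this only as a convenience wrapper; the authoritative pass or fail criteria remain the vendor tuning guidance and the latency requirements of the application itself.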
8.221. tomcat6 | 8.221. tomcat6 8.221.1. RHBA-2013:1721 - tomcat6 bug fix update Updated tomcat6 packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Apache Tomcat is a servlet container for the Java Servlet and JavaServer Pages (JSP) technologies. Bug Fixes BZ# 845786 Previously, an attempt to build the tomcat6-docs-webapp package failed when Red Hat Enterprise Linux was running on IBM System z or 64-bit IBM POWER Series computers. With this update, no architecture is set in the build target and the package can be built as expected. BZ# 915447 When a user, whose name did not correspond to any existing group name, was specified in the /etc/sysconfig/tomcat6 file, the Tomcat web server failed to start. This update applies a patch to fix this bug and Tomcat no longer fails in the described scenario. BZ# 950647 Due to a bug in the checkpidfile() function, an attempt to execute the "service tomcat6 status" command failed and an error message was returned. The underlying source code has been modified to fix this bug and the command now works properly. BZ# 960255 Due to a bug in the checkpidfile() function, the status script did not return the correct PID. This bug has been fixed and the status script now returns the correct PID as expected. BZ# 977685 The Tomcat web server included a version of the tomcat-juli.jar file that was hard coded to use classes from the java.util.logging package instead of the log4j framework. Consequently, Tomcat could not be configured to use log4j unless the complete version of the tomcat-juli.jar and tomcat-juli-adapters.jar files had been downloaded. With this update, the tomcat6 packages now contain the correct versions of these files to configure log4j. BZ# 989527 When multiple tomcat instances were configured as described in the /etc/sysconfig/tomcat6 configuration file and the instance name was different from the name of the tomcat directory, the "service status" command failed. With this update, the underlying source code has been modified to fix this bug and the command no longer fails in the described scenario. Users of tomcat6 are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/tomcat6 |
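For the log4j-related fix (BZ# 977685) above: once the corrected tomcat-juli.jar and tomcat-juli-adapters.jar from these packages are in place, Tomcat still needs a log4j configuration file on its class path, typically alongside the log4j JAR in the Tomcat lib directory. The following log4j.properties fragment is an illustrative sketch rather than content from the advisory; the appender name, file locations, and log levels are assumptions to adapt to the local installation:

```properties
# Illustrative sketch: route Tomcat 6 (JULI) logging through log4j.
log4j.rootLogger=INFO, CATALINA

# Daily-rolling file appender writing under ${catalina.base}/logs.
log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File=${catalina.base}/logs/catalina
log4j.appender.CATALINA.Append=true
log4j.appender.CATALINA.DatePattern='.'yyyy-MM-dd'.log'
log4j.appender.CATALINA.layout=org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

# Example of quieting a particularly chatty logger (logger name is illustrative).
log4j.logger.org.apache.catalina.session=WARN
```

With the JARs and this file in place, a restart of the tomcat6 service should pick up the log4j configuration in place of the default java.util.logging setup.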
Chapter 8. Security authorization with role-based access control | Chapter 8. Security authorization with role-based access control Role-based access control (RBAC) capabilities use different permissions levels to restrict user interactions with Data Grid. Note For information on creating users and configuring authorization specific to remote or embedded caches, see: Configuring user roles and permissions with Data Grid Server Programmatically configuring user roles and permissions 8.1. Data Grid user roles and permissions Data Grid includes several roles that provide users with permissions to access caches and Data Grid resources. Role Permissions Description admin ALL Superuser with all permissions including control of the Cache Manager lifecycle. deployer ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE Can create and delete Data Grid resources in addition to application permissions. application ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR Has read and write access to Data Grid resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts. observer ALL_READ, MONITOR Has read access to Data Grid resources in addition to monitor permissions. monitor MONITOR Can view statistics via JMX and the metrics endpoint. Additional resources org.infinispan.security.AuthorizationPermission Enum Data Grid configuration schema reference 8.1.1. Permissions User roles are sets of permissions with different access levels. Table 8.1. Cache Manager permissions Permission Function Description CONFIGURATION defineConfiguration Defines new cache configurations. LISTEN addListener Registers listeners against a Cache Manager. LIFECYCLE stop Stops the Cache Manager. CREATE createCache , removeCache Create and remove container resources such as caches, counters, schemas, and scripts. MONITOR getStats Allows access to JMX statistics and the metrics endpoint. ALL - Includes all Cache Manager permissions. Table 8.2. Cache permissions Permission Function Description READ get , contains Retrieves entries from a cache. WRITE put , putIfAbsent , replace , remove , evict Writes, replaces, removes, evicts data in a cache. EXEC distexec , streams Allows code execution against a cache. LISTEN addListener Registers listeners against a cache. BULK_READ keySet , values , entrySet , query Executes bulk retrieve operations. BULK_WRITE clear , putAll Executes bulk write operations. LIFECYCLE start , stop Starts and stops a cache. ADMIN getVersion , addInterceptor* , removeInterceptor , getInterceptorChain , getEvictionManager , getComponentRegistry , getDistributionManager , getAuthorizationManager , evict , getRpcManager , getCacheConfiguration , getCacheManager , getInvocationContextContainer , setAvailability , getDataContainer , getStats , getXAResource Allows access to underlying components and internal structures. MONITOR getStats Allows access to JMX statistics and the metrics endpoint. ALL - Includes all cache permissions. ALL_READ - Combines the READ and BULK_READ permissions. ALL_WRITE - Combines the WRITE and BULK_WRITE permissions. Additional resources Data Grid Security API 8.1.2. Role and permission mappers Data Grid implements users as a collection of principals. Principals represent either an individual user identity, such as a username, or a group to which the users belong. Internally, these are implemented with the javax.security.auth.Subject class. 
To enable authorization, the principals must be mapped to role names, which are then expanded into a set of permissions. Data Grid includes the PrincipalRoleMapper API for associating security principals to roles, and the RolePermissionMapper API for associating roles with specific permissions. Data Grid provides the following role and permission mapper implementations: Cluster role mapper Stores principal to role mappings in the cluster registry. Cluster permission mapper Stores role to permission mappings in the cluster registry. Allows you to dynamically modify user roles and permissions. Identity role mapper Uses the principal name as the role name. The type or format of the principal name depends on the source. For example, in an LDAP directory the principal name could be a Distinguished Name (DN). Common name role mapper Uses the Common Name (CN) as the role name. You can use this role mapper with an LDAP directory or with client certificates that contain Distinguished Names (DN); for example cn=managers,ou=people,dc=example,dc=com maps to the managers role. Note By default, principal-to-role mapping is only applied to principals which represent groups. It is possible to configure Data Grid to also perform the mapping for user principals by setting the authorization.group-only-mapping configuration attribute to false . 8.1.2.1. Mapping users to roles and permissions in Data Grid Consider the following user retrieved from an LDAP server, as a collection of DNs: Using the Common name role mapper , the user would be mapped to the following roles: Data Grid has the following role definitions: The user would have the following permissions: Additional resources Data Grid Security API org.infinispan.security.PrincipalRoleMapper org.infinispan.security.RolePermissionMapper org.infinispan.security.mappers.IdentityRoleMapper org.infinispan.security.mappers.CommonNameRoleMapper 8.1.3. Configuring role mappers Data Grid enables the cluster role mapper and cluster permission mapper by default. To use a different implementation for role mapping, you must configure the role mappers. Procedure Open your Data Grid configuration for editing. Declare the role mapper as part of the security authorization in the Cache Manager configuration. Save the changes to your configuration. Role mapper configuration XML <cache-container> <security> <authorization> <common-name-role-mapper /> </authorization> </security> </cache-container> JSON { "infinispan" : { "cache-container" : { "security" : { "authorization" : { "common-name-role-mapper": {} } } } } } YAML infinispan: cacheContainer: security: authorization: commonNameRoleMapper: ~ Additional resources Data Grid configuration schema reference 8.1.4. Configuring the cluster role and permission mappers The cluster role mapper maintains a dynamic mapping between principals and roles. The cluster permission mapper maintains a dynamic set of role definitions. In both cases, the mappings are stored in the cluster registry and can be manipulated at runtime using either the CLI or the REST API. Prerequisites Have ADMIN permissions for Data Grid. Start the Data Grid CLI. Connect to a running Data Grid cluster. 8.1.4.1. Creating new roles Create new roles and set the permissions. Procedure Create roles with the user roles create command, for example: Verification List roles that you grant to users with the user roles ls command. Describe roles with the user roles describe command. 8.1.4.2. 
Granting roles to users Assign roles to users and grant them permissions to perform cache operations and interact with Data Grid resources. Tip Grant roles to groups instead of users if you want to assign the same role to multiple users and centrally maintain their permissions. Prerequisites Have ADMIN permissions for Data Grid. Create Data Grid users. Procedure Create a CLI connection to Data Grid. Assign roles to users with the user roles grant command, for example: Verification List roles that you grant to users with the user roles ls command. 8.1.4.3. Cluster role mapper name rewriters By default, the mapping is performed using a strict string equivalence between principal names and roles. It is possible to configure the cluster role mapper to apply transformation to the principal name before performing a lookup. Procedure Open your Data Grid configuration for editing. Specify a name rewriter for the cluster role mapper as part of the security authorization in the Cache Manager configuration. Save the changes to your configuration. Principal names may have different forms, depending on the security realm type: Properties and Token realms may return simple strings Trust and LDAP realms may return X.500-style distinguished names Kerberos realms may return user@domain -style names Names can be normalized to a common form using one of the following transformers: 8.1.4.3.1. Case Principal Transformer XML <cache-container> <security> <authorization> <cluster-role-mapper> <name-rewriter> <case-principal-transformer uppercase="false"/> </name-rewriter> </cluster-role-mapper> </authorization> </security> </cache-container> JSON { "cache-container": { "security": { "authorization": { "cluster-role-mapper": { "name-rewriter": { "case-principal-transformer": {} } } } } } } YAML cacheContainer: security: authorization: clusterRoleMapper: nameRewriter: casePrincipalTransformer: uppercase: false 8.1.4.3.2. Regex Principal Transformer XML <cache-container> <security> <authorization> <cluster-role-mapper> <name-rewriter> <regex-principal-transformer pattern="cn=([^,]+),.*" replacement="USD1"/> </name-rewriter> </cluster-role-mapper> </authorization> </security> </cache-container> JSON { "cache-container": { "security": { "authorization": { "cluster-role-mapper": { "name-rewriter": { "regex-principal-transformer": { "pattern": "cn=([^,]+),.*", "replacement": "USD1" } } } } } } } YAML cacheContainer: security: authorization: clusterRoleMapper: nameRewriter: regexPrincipalTransformer: pattern: "cn=([^,]+),.*" replacement: "USD1" Additional resources Data Grid configuration schema reference 8.2. Configuring caches with security authorization Add security authorization to caches to enforce role-based access control (RBAC). This requires Data Grid users to have a role with a sufficient level of permission to perform cache operations. Prerequisites Create Data Grid users and either grant them with roles or assign them to groups. Procedure Open your Data Grid configuration for editing. Add a security section to the configuration. Specify roles that users must have to perform cache operations with the authorization element. You can implicitly add all roles defined in the Cache Manager or explicitly define a subset of roles. Save the changes to your configuration. 
Implicit role configuration The following configuration implicitly adds every role defined in the Cache Manager: XML <distributed-cache> <security> <authorization/> </security> </distributed-cache> JSON { "distributed-cache": { "security": { "authorization": { "enabled": true } } } } YAML distributedCache: security: authorization: enabled: true Explicit role configuration The following configuration explicitly adds a subset of roles defined in the Cache Manager. In this case Data Grid denies cache operations for any users that do not have one of the configured roles. XML <distributed-cache> <security> <authorization roles="admin supervisor"/> </security> </distributed-cache> JSON { "distributed-cache": { "security": { "authorization": { "enabled": true, "roles": ["admin","supervisor"] } } } } YAML distributedCache: security: authorization: enabled: true roles: ["admin","supervisor"] | [
"CN=myapplication,OU=applications,DC=mycompany CN=dataprocessors,OU=groups,DC=mycompany CN=finance,OU=groups,DC=mycompany",
"dataprocessors finance",
"dataprocessors: ALL_WRITE ALL_READ finance: LISTEN",
"ALL_WRITE ALL_READ LISTEN",
"<cache-container> <security> <authorization> <common-name-role-mapper /> </authorization> </security> </cache-container>",
"{ \"infinispan\" : { \"cache-container\" : { \"security\" : { \"authorization\" : { \"common-name-role-mapper\": {} } } } } }",
"infinispan: cacheContainer: security: authorization: commonNameRoleMapper: ~",
"user roles create --permissions=ALL_READ,ALL_WRITE simple",
"user roles ls [\"observer\",\"application\",\"admin\",\"monitor\",\"simple\",\"deployer\"]",
"user roles describe simple { \"name\" : \"simple\", \"permissions\" : [ \"ALL_READ\",\"ALL_WRITE\" ] }",
"user roles grant --roles=deployer katie",
"user roles ls katie [\"deployer\"]",
"<cache-container> <security> <authorization> <cluster-role-mapper> <name-rewriter> <case-principal-transformer uppercase=\"false\"/> </name-rewriter> </cluster-role-mapper> </authorization> </security> </cache-container>",
"{ \"cache-container\": { \"security\": { \"authorization\": { \"cluster-role-mapper\": { \"name-rewriter\": { \"case-principal-transformer\": {} } } } } } }",
"cacheContainer: security: authorization: clusterRoleMapper: nameRewriter: casePrincipalTransformer: uppercase: false",
"<cache-container> <security> <authorization> <cluster-role-mapper> <name-rewriter> <regex-principal-transformer pattern=\"cn=([^,]+),.*\" replacement=\"USD1\"/> </name-rewriter> </cluster-role-mapper> </authorization> </security> </cache-container>",
"{ \"cache-container\": { \"security\": { \"authorization\": { \"cluster-role-mapper\": { \"name-rewriter\": { \"regex-principal-transformer\": { \"pattern\": \"cn=([^,]+),.*\", \"replacement\": \"USD1\" } } } } } } }",
"cacheContainer: security: authorization: clusterRoleMapper: nameRewriter: regexPrincipalTransformer: pattern: \"cn=([^,]+),.*\" replacement: \"USD1\"",
"<distributed-cache> <security> <authorization/> </security> </distributed-cache>",
"{ \"distributed-cache\": { \"security\": { \"authorization\": { \"enabled\": true } } } }",
"distributedCache: security: authorization: enabled: true",
"<distributed-cache> <security> <authorization roles=\"admin supervisor\"/> </security> </distributed-cache>",
"{ \"distributed-cache\": { \"security\": { \"authorization\": { \"enabled\": true, \"roles\": [\"admin\",\"supervisor\"] } } } }",
"distributedCache: security: authorization: enabled: true roles: [\"admin\",\"supervisor\"]"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_server_guide/security-authorization |
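As a closing illustration of the role-based access control material above: a role created at runtime with the user roles create command (such as the simple role in the earlier example) can be referenced from a cache's authorization section exactly like the built-in roles. The fragment below is a sketch that reuses the documented explicit-role syntax; the cache name is illustrative:

```xml
<distributed-cache name="inventory">
  <security>
    <!-- Only users holding the custom "simple" role or the built-in "admin" role
         may perform operations on this cache. -->
    <authorization roles="simple admin"/>
  </security>
</distributed-cache>
```

Because simple was created with ALL_READ and ALL_WRITE, users granted that role can read and write entries in this cache, but they cannot, for example, register listeners, which requires the LISTEN permission.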
Chapter 4. OADP Application backup and restore | Chapter 4. OADP Application backup and restore 4.1. Introduction to OpenShift API for Data Protection The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs). However, OADP does not serve as a disaster recovery solution for etcd or {OCP-short} Operators. OADP support is provided to customer workload namespaces, and cluster scope resources. Full cluster backup and restore are not supported. 4.1.1. OpenShift API for Data Protection APIs OpenShift API for Data Protection (OADP) provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources. OADP provides the following APIs: Backup Restore Schedule BackupStorageLocation VolumeSnapshotLocation Additional resources Backing up etcd 4.2. OADP release notes 4.2.1. OADP 1.4 release notes The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues. Note For additional information about OADP, see OpenShift API for Data Protection (OADP) FAQs 4.2.1.1. OADP 1.4.3 release notes The OpenShift API for Data Protection (OADP) 1.4.3 release notes lists the following new feature. 4.2.1.1.1. New features Notable changes in the kubevirt velero plugin in version 0.7.1 With this release, the kubevirt velero plugin has been updated to version 0.7.1. Notable improvements include the following bug fix and new features: Virtual machine instances (VMIs) are no longer ignored from backup when the owner VM is excluded. Object graphs now include all extra objects during backup and restore operations. Optionally generated labels are now added to new firmware Universally Unique Identifiers (UUIDs) during restore operations. Switching VM run strategies during restore operations is now possible. Clearing a MAC address by label is now supported. The restore-specific checks during the backup operation are now skipped. The VirtualMachineClusterInstancetype and VirtualMachineClusterPreference custom resource definitions (CRDs) are now supported. 4.2.1.2. OADP 1.4.2 release notes The OpenShift API for Data Protection (OADP) 1.4.2 release notes lists new features, resolved issues and bugs, and known issues. 4.2.1.2.1. New features Backing up different volumes in the same namespace by using the VolumePolicy feature is now possible With this release, Velero provides resource policies to back up different volumes in the same namespace by using the VolumePolicy feature. The supported VolumePolicy feature to back up different volumes includes skip , snapshot , and fs-backup actions. OADP-1071 File system backup and data mover can now use short-term credentials File system backup and data mover can now use short-term credentials such as AWS Security Token Service (STS) and GCP WIF. With this support, backup is successfully completed without any PartiallyFailed status. OADP-5095 4.2.1.2.2. 
Resolved issues DPA now reports errors if VSL contains an incorrect provider value Previously, if the provider of a Volume Snapshot Location (VSL) spec was incorrect, the Data Protection Application (DPA) reconciled successfully. With this update, DPA reports errors and requests for a valid provider value. OADP-5044 Data Mover restore is successful irrespective of using different OADP namespaces for backup and restore Previously, when backup operation was executed by using OADP installed in one namespace but was restored by using OADP installed in a different namespace, the Data Mover restore failed. With this update, Data Mover restore is now successful. OADP-5460 SSE-C backup works with the calculated MD5 of the secret key Previously, backup failed with the following error: Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key. With this update, missing Server-Side Encryption with Customer-Provided Keys (SSE-C) base64 and MD5 hash are now fixed. As a result, SSE-C backup works with the calculated MD5 of the secret key. In addition, incorrect errorhandling for the customerKey size is also fixed. OADP-5388 For a complete list of all issues resolved in this release, see the list of OADP 1.4.2 resolved issues in Jira. 4.2.1.2.3. Known issues The nodeSelector spec is not supported for the Data Mover restore action When a Data Protection Application (DPA) is created with the nodeSelector field set in the nodeAgent parameter, Data Mover restore partially fails instead of completing the restore operation. OADP-5260 The S3 storage does not use proxy environment when TLS skip verify is specified In the image registry backup, the S3 storage does not use the proxy environment when the insecureSkipTLSVerify parameter is set to true . OADP-3143 Kopia does not delete artifacts after backup expiration Even after you delete a backup, Kopia does not delete the volume artifacts from the USD{bucket_name}/kopia/USDopenshift-adp on the S3 location after backup expired. For more information, see "About Kopia repository maintenance". OADP-5131 Additional resources About Kopia repository maintenance 4.2.1.3. OADP 1.4.1 release notes The OpenShift API for Data Protection (OADP) 1.4.1 release notes lists new features, resolved issues and bugs, and known issues. 4.2.1.3.1. New features New DPA fields to update client qps and burst You can now change Velero Server Kubernetes API queries per second and burst values by using the new Data Protection Application (DPA) fields. The new DPA fields are spec.configuration.velero.client-qps and spec.configuration.velero.client-burst , which both default to 100. OADP-4076 Enabling non-default algorithms with Kopia With this update, you can now configure the hash, encryption, and splitter algorithms in Kopia to select non-default options to optimize performance for different backup workloads. To configure these algorithms, set the env variable of a velero pod in the podConfig section of the DataProtectionApplication (DPA) configuration. If this variable is not set, or an unsupported algorithm is chosen, Kopia will default to its standard algorithms. OADP-4640 4.2.1.3.2. Resolved issues Restoring a backup without pods is now successful Previously, restoring a backup without pods and having StorageClass VolumeBindingMode set as WaitForFirstConsumer , resulted in the PartiallyFailed status with an error: fail to patch dynamic PV, err: context deadline exceeded . 
With this update, patching dynamic PV is skipped and restoring a backup is successful without any PartiallyFailed status. OADP-4231 PodVolumeBackup CR now displays correct message Previously, the PodVolumeBackup custom resource (CR) generated an incorrect message, which was: get a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed" . With this update, the message produced is now: found a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed". OADP-4224 Overriding imagePullPolicy is now possible with DPA Previously, OADP set the imagePullPolicy parameter to Always for all images. With this update, OADP checks if each image contains sha256 or sha512 digest, then it sets imagePullPolicy to IfNotPresent ; otherwise imagePullPolicy is set to Always . You can now override this policy by using the new spec.containerImagePullPolicy DPA field. OADP-4172 OADP Velero can now retry updating the restore status if initial update fails Previously, OADP Velero failed to update the restored CR status. This left the status at InProgress indefinitely. Components which relied on the backup and restore CR status to determine the completion would fail. With this update, the restore CR status for a restore correctly proceeds to the Completed or Failed status. OADP-3227 Restoring BuildConfig Build from a different cluster is successful without any errors Previously, when performing a restore of the BuildConfig Build resource from a different cluster, the application generated an error on TLS verification to the internal image registry. The resulting error was failed to verify certificate: x509: certificate signed by unknown authority error. With this update, the restore of the BuildConfig build resources to a different cluster can proceed successfully without generating the failed to verify certificate error. OADP-4692 Restoring an empty PVC is successful Previously, downloading data failed while restoring an empty persistent volume claim (PVC). It failed with the following error: data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found With this update, the downloading of data proceeds to correct conclusion when restoring an empty PVC and the error message is not generated. OADP-3106 There is no Velero memory leak in CSI and DataMover plugins Previously, a Velero memory leak was caused by using the CSI and DataMover plugins. When the backup ended, the Velero plugin instance was not deleted and the memory leak consumed memory until an Out of Memory (OOM) condition was generated in the Velero pod. With this update, there is no resulting Velero memory leak when using the CSI and DataMover plugins. OADP-4448 Post-hook operation does not start before the related PVs are released Previously, due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the Data Mover persistent volume claim (PVC) releases the persistent volumes (PVs) of the related pods. This problem would cause the backup to fail with a PartiallyFailed status. With this update, the post-hook operation is not started until the related PVs are released by the Data Mover PVC, eliminating the PartiallyFailed backup status. 
OADP-3140 Deploying a DPA works as expected in namespaces with more than 37 characters When you install the OADP Operator in a namespace with more than 37 characters to create a new DPA, labeling the "cloud-credentials" Secret fails and the DPA reports the following error: With this update, creating a DPA does not fail in namespaces with more than 37 characters in the name. OADP-3960 Restore is successfully completed by overriding the timeout error Previously, in a large scale environment, the restore operation would result in a Partiallyfailed status with the error: fail to patch dynamic PV, err: context deadline exceeded . With this update, the resourceTimeout Velero server argument is used to override this timeout error resulting in a successful restore. OADP-4344 For a complete list of all issues resolved in this release, see the list of OADP 1.4.1 resolved issues in Jira. 4.2.1.3.3. Known issues Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning the error CrashLoopBackoff state after restoring OADP. The StatefulSet controller then recreates these pods and it runs normally. OADP-4407 Deployment referencing ImageStream is not restored properly leading to corrupted pod and volume contents During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB, and the postHook is terminated prematurely. During the restore operation, the OpenShift Container Platform controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which velero runs the FSB along with the post-hook. For more information about image stream trigger, see Triggering updates on image stream changes . The workaround for this behavior is a two-step restore process: Perform a restore excluding the Deployment resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --exclude-resources=deployment.apps Once the first restore is successful, perform a second restore by including these resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --include-resources=deployment.apps OADP-3954 4.2.1.4. OADP 1.4.0 release notes The OpenShift API for Data Protection (OADP) 1.4.0 release notes lists resolved issues and known issues. 4.2.1.4.1. Resolved issues Restore works correctly in OpenShift Container Platform 4.16 Previously, while restoring the deleted application namespace, the restore operation partially failed with the resource name may not be empty error in OpenShift Container Platform 4.16. With this update, restore works as expected in OpenShift Container Platform 4.16. OADP-4075 Data Mover backups work properly in the OpenShift Container Platform 4.16 cluster Previously, Velero was using the earlier version of SDK where the Spec.SourceVolumeMode field did not exist. As a consequence, Data Mover backups failed in the OpenShift Container Platform 4.16 cluster on the external snapshotter with version 4.2. With this update, external snapshotter is upgraded to version 7.0 and later. As a result, backups do not fail in the OpenShift Container Platform 4.16 cluster. 
OADP-3922 For a complete list of all issues resolved in this release, see the list of OADP 1.4.0 resolved issues in Jira. 4.2.1.4.2. Known issues Backup fails when checksumAlgorithm is not set for MCG While performing a backup of any application with Noobaa as the backup location, if the checksumAlgorithm configuration parameter is not set, backup fails. To fix this problem, if you do not provide a value for checksumAlgorithm in the Backup Storage Location (BSL) configuration, an empty value is added. The empty value is only added for BSLs that are created using Data Protection Application (DPA) custom resource (CR), and this value is not added if BSLs are created using any other method. OADP-4274 For a complete list of all known issues in this release, see the list of OADP 1.4.0 known issues in Jira. 4.2.1.4.3. Upgrade notes Note Always upgrade to the minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3. 4.2.1.4.3.1. Changes from OADP 1.3 to 1.4 The Velero server has been updated from version 1.12 to 1.14. Note that there are no changes in the Data Protection Application (DPA). This changes the following: The velero-plugin-for-csi code is now available in the Velero code, which means an init container is no longer required for the plugin. Velero changed client Burst and QPS defaults from 30 and 20 to 100 and 100, respectively. The velero-plugin-for-aws plugin updated default value of the spec.config.checksumAlgorithm field in BackupStorageLocation objects (BSLs) from "" (no checksum calculation) to the CRC32 algorithm. For more information, see Velero plugins for AWS Backup Storage Location . The checksum algorithm types are known to work only with AWS. Several S3 providers require the md5sum to be disabled by setting the checksum algorithm to "" . Confirm md5sum algorithm support and configuration with your storage provider. In OADP 1.4, the default value for BSLs created within DPA for this configuration is "" . This default value means that the md5sum is not checked, which is consistent with OADP 1.3. For BSLs created within DPA, update it by using the spec.backupLocations[].velero.config.checksumAlgorithm field in the DPA. If your BSLs are created outside DPA, you can update this configuration by using spec.config.checksumAlgorithm in the BSLs. 4.2.1.4.3.2. Backing up the DPA configuration You must back up your current DataProtectionApplication (DPA) configuration. Procedure Save your current DPA configuration by running the following command: Example command USD oc get dpa -n openshift-adp -o yaml > dpa.orig.backup 4.2.1.4.3.3. Upgrading the OADP Operator Use the following procedure when upgrading the OpenShift API for Data Protection (OADP) Operator. Procedure Change your subscription channel for the OADP Operator from stable-1.3 to stable-1.4 . Wait for the Operator and containers to update and restart. Additional resources Updating installed Operators 4.2.1.4.4. Converting DPA to the new version To upgrade from OADP 1.3 to 1.4, no Data Protection Application (DPA) changes are required. 4.2.1.4.5. Verifying the upgrade Use the following procedure to verify the upgrade. 
Procedure Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.2.2. OADP 1.3 release notes The release notes for OpenShift API for Data Protection (OADP) 1.3 describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues. 4.2.2.1. OADP 1.3.6 release notes OpenShift API for Data Protection (OADP) 1.3.6 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.5. 4.2.2.2. OADP 1.3.5 release notes OpenShift API for Data Protection (OADP) 1.3.5 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.4. 4.2.2.3. OADP 1.3.4 release notes The OpenShift API for Data Protection (OADP) 1.3.4 release notes list resolved issues and known issues. 4.2.2.3.1. Resolved issues The backup spec.resourcepolicy.kind parameter is now case-insensitive Previously, the backup spec.resourcepolicy.kind parameter was only supported with a lower-level string. With this fix, it is now case-insensitive. OADP-2944 Use olm.maxOpenShiftVersion to prevent cluster upgrade to OCP 4.16 version The cluster operator-lifecycle-manager operator must not be upgraded between minor OpenShift Container Platform versions. Using the olm.maxOpenShiftVersion parameter prevents upgrading to OpenShift Container Platform 4.16 version when OADP 1.3 is installed. To upgrade to OpenShift Container Platform 4.16 version, upgrade OADP 1.3 on OCP 4.15 version to OADP 1.4. OADP-4803 BSL and VSL are removed from the cluster Previously, when any Data Protection Application (DPA) was modified to remove the Backup Storage Locations (BSL) or Volume Snapshot Locations (VSL) from the backupLocations or snapshotLocations section, BSL or VSL were not removed from the cluster until the DPA was deleted. With this update, BSL/VSL are removed from the cluster. OADP-3050 DPA reconciles and validates the secret key Previously, the Data Protection Application (DPA) reconciled successfully on the wrong Volume Snapshot Locations (VSL) secret key name. With this update, DPA validates the secret key name before reconciling on any VSL. OADP-3052 Velero's cloud credential permissions are now restrictive Previously, Velero's cloud credential permissions were mounted with the 0644 permissions. As a consequence, any one could read the /credentials/cloud file apart from the owner and group making it easier to access sensitive information such as storage access keys. With this update, the permissions of this file are updated to 0640, and this file cannot be accessed by other users except the owner and group. 
Warning is displayed when ArgoCD managed namespace is included in the backup A warning is displayed during the backup operation when ArgoCD and Velero manage the same namespace. OADP-4736 The list of security fixes that are included in this release is documented in the RHSA-2024:9960 advisory. For a complete list of all issues resolved in this release, see the list of OADP 1.3.4 resolved issues in Jira. 4.2.2.3.2. Known issues Cassandra application pods enter into the CrashLoopBackoff status after restore After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 defaultVolumesToFSBackup and defaultVolumesToFsBackup flags are not identical The dpa.spec.configuration.velero.defaultVolumesToFSBackup flag is not identical to the backup.spec.defaultVolumesToFsBackup flag, which can lead to confusion. OADP-3692 PodVolumeRestore works even though the restore is marked as failed The podvolumerestore continues the data transfer even though the restore is marked as failed. OADP-3039 Velero is unable to skip restoring of initContainer spec Velero might restore the restore-wait init container even though it is not required. OADP-3759 4.2.2.4. OADP 1.3.3 release notes The OpenShift API for Data Protection (OADP) 1.3.3 release notes list resolved issues and known issues. 4.2.2.4.1. Resolved issues OADP fails when its namespace name is longer than 37 characters When installing the OADP Operator in a namespace with more than 37 characters and when creating a new DPA, labeling the cloud-credentials secret fails. With this release, the issue has been fixed. OADP-4211 OADP image PullPolicy set to Always In versions of OADP, the image PullPolicy of the adp-controller-manager and Velero pods was set to Always . This was problematic in edge scenarios where there could be limited network bandwidth to the registry, resulting in slow recovery time following a pod restart. In OADP 1.3.3, the image PullPolicy of the openshift-adp-controller-manager and Velero pods is set to IfNotPresent . The list of security fixes that are included in this release is documented in the RHSA-2024:4982 advisory. For a complete list of all issues resolved in this release, see the list of OADP 1.3.3 resolved issues in Jira. 4.2.2.4.2. Known issues Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter in the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 4.2.2.5. OADP 1.3.2 release notes The OpenShift API for Data Protection (OADP) 1.3.2 release notes list resolved issues and known issues. 4.2.2.5.1. Resolved issues DPA fails to reconcile if a valid custom secret is used for BSL DPA fails to reconcile if a valid custom secret is used for Backup Storage Location (BSL), but the default secret is missing. The workaround is to create the required default cloud-credentials initially. When the custom secret is re-created, it can be used and checked for its existence. 
OADP-3193 CVE-2023-45290: oadp-velero-container : Golang net/http : Memory exhaustion in Request.ParseMultipartForm A flaw was found in the net/http Golang standard library package, which impacts versions of OADP. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue , Request.PostFormValue , or Request.FormFile , limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2023-45290 . CVE-2023-45289: oadp-velero-container : Golang net/http/cookiejar : Incorrect forwarding of sensitive headers and cookies on HTTP redirect A flaw was found in the net/http/cookiejar Golang standard library package, which impacts versions of OADP. When following an HTTP redirect to a domain that is not a subdomain match or exact match of the initial domain, an http.Client does not forward sensitive headers such as Authorization or Cookie . A maliciously crafted HTTP redirect could cause sensitive headers to be unexpectedly forwarded. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2023-45289 . CVE-2024-24783: oadp-velero-container : Golang crypto/x509 : Verify panics on certificates with an unknown public key algorithm A flaw was found in the crypto/x509 Golang standard library package, which impacts versions of OADP. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert . The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2024-24783 . CVE-2024-24784: oadp-velero-plugin-container : Golang net/mail : Comments in display names are incorrectly handled A flaw was found in the net/mail Golang standard library package, which impacts versions of OADP. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. Because this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2024-24784 . CVE-2024-24785: oadp-velero-container : Golang: html/template: errors returned from MarshalJSON methods may break template escaping A flaw was found in the html/template Golang standard library package, which impacts versions of OADP. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2024-24785 . For a complete list of all issues resolved in this release, see the list of OADP 1.3.2 resolved issues in Jira. 4.2.2.5.2. Known issues Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter in the CrashLoopBackoff status. 
To work around this problem, delete the StatefulSet pods that are returning an error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 4.2.2.6. OADP 1.3.1 release notes The OpenShift API for Data Protection (OADP) 1.3.1 release notes lists new features and resolved issues. 4.2.2.6.1. New features OADP 1.3.0 Data Mover is now fully supported The OADP built-in Data Mover, introduced in OADP 1.3.0 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads. 4.2.2.6.2. Resolved issues IBM Cloud(R) Object Storage is now supported as a backup storage provider IBM Cloud(R) Object Storage is one of the AWS S3 compatible backup storage providers, which was unsupported previously. With this update, IBM Cloud(R) Object Storage is now supported as an AWS S3 compatible backup storage provider. OADP-3788 OADP operator now correctly reports the missing region error Previously, when you specified profile:default without specifying the region in the AWS Backup Storage Location (BSL) configuration, the OADP operator failed to report the missing region error on the Data Protection Application (DPA) custom resource (CR). This update corrects validation of DPA BSL specification for AWS. As a result, the OADP Operator reports the missing region error. OADP-3044 Custom labels are not removed from the openshift-adp namespace Previously, the openshift-adp-controller-manager pod would reset the labels attached to the openshift-adp namespace. This caused synchronization issues for applications requiring custom labels such as Argo CD, leading to improper functionality. With this update, this issue is fixed and custom labels are not removed from the openshift-adp namespace. OADP-3189 OADP must-gather image collects CRDs Previously, the OADP must-gather image did not collect the custom resource definitions (CRDs) shipped by OADP. Consequently, you could not use the omg tool to extract data in the support shell. With this fix, the must-gather image now collects CRDs shipped by OADP and can use the omg tool to extract data. Garbage collection has the correct description for the default frequency value Previously, the garbage-collection-frequency field had a wrong description for the default frequency value. With this update, garbage-collection-frequency has a correct value of one hour for the gc-controller reconciliation default frequency. OADP-3486 FIPS Mode flag is available in OperatorHub By setting the fips-compliant flag to true , the FIPS mode flag is now added to the OADP Operator listing in OperatorHub. This feature was enabled in OADP 1.3.0 but did not show up in the Red Hat Container catalog as being FIPS enabled. OADP-3495 CSI plugin does not panic with a nil pointer when csiSnapshotTimeout is set to a short duration Previously, when the csiSnapshotTimeout parameter was set to a short duration, the CSI plugin encountered the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference . With this fix, the backup fails with the following error: Timed out awaiting reconciliation of volumesnapshot . For a complete list of all issues resolved in this release, see the list of OADP 1.3.1 resolved issues in Jira. 4.2.2.6.3. 
Known issues Backup and storage restrictions for Single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms Review the following backup and storage related restrictions for Single-node OpenShift clusters that are deployed on IBM Power(R) and IBM Z(R) platforms: Storage Only NFS storage is currently compatible with single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms. Backup Only the backing up applications with File System Backup such as kopia and restic are supported for backup and restore operations. OADP-3787 Cassandra application pods enter in the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter in the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods with any error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 4.2.2.7. OADP 1.3.0 release notes The OpenShift API for Data Protection (OADP) 1.3.0 release notes lists new features, resolved issues and bugs, and known issues. 4.2.2.7.1. New features Velero built-in DataMover Velero built-in DataMover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OADP 1.3 includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and to write to the Unified Repository. Backing up applications with File System Backup: Kopia or Restic Velero's File System Backup (FSB) supports two backup libraries: the Restic path and the Kopia path. Velero allows users to select between the two paths. For backup, specify the path during the installation through the uploader-type flag. The valid value is either restic or kopia . This field defaults to kopia if the value is not specified. The selection cannot be changed after the installation. GCP Cloud authentication Google Cloud Platform (GCP) authentication enables you to use short-lived Google credentials. GCP with Workload Identity Federation enables you to use Identity and Access Management (IAM) to grant external identities IAM roles, including the ability to impersonate service accounts. This eliminates the maintenance and security risks associated with service account keys. AWS ROSA STS authentication You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to backup and restore application data. ROSA provides seamless integration with a wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivering of differentiating experiences to your customers. You can subscribe to the service directly from your AWS account. 
After the clusters are created, you can operate your clusters by using the OpenShift web console. The ROSA service also uses OpenShift APIs and command-line interface (CLI) tools. 4.2.2.7.2. Resolved issues ACM applications were removed and re-created on managed clusters after restore Applications on managed clusters were deleted and re-created upon restore activation. OpenShift API for Data Protection (OADP 1.2) backup and restore process is faster than the older versions. The OADP performance change caused this behavior when restoring ACM resources. Therefore, some resources were restored before other resources, which caused the removal of the applications from managed clusters. OADP-2686 Restic restore was partially failing due to Pod Security standard During interoperability testing, OpenShift Container Platform 4.14 had the pod Security mode set to enforce , which caused the pod to be denied. This was caused due to the restore order. The pod was getting created before the security context constraints (SCC) resource, since the pod violated the podSecurity standard, it denied the pod. When setting the restore priority field on the Velero server, restore is successful. OADP-2688 Possible pod volume backup failure if Velero is installed in several namespaces There was a regression in Pod Volume Backup (PVB) functionality when Velero was installed in several namespaces. The PVB controller was not properly limiting itself to PVBs in its own namespace. OADP-2308 OADP Velero plugins returning "received EOF, stopping recv loop" message In OADP, Velero plugins were started as separate processes. When the Velero operation completes, either successfully or not, they exit. Therefore, if you see a received EOF, stopping recv loop messages in debug logs, it does not mean an error occurred, it means that a plugin operation has completed. OADP-2176 CVE-2023-39325 Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) In releases of OADP, the HTTP/2 protocol was susceptible to a denial of service attack because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This resulted in a denial of service due to server resource consumption. For more information, see CVE-2023-39325 (Rapid Reset Attack) For a complete list of all issues resolved in this release, see the list of OADP 1.3.0 resolved issues in Jira. 4.2.2.7.3. Known issues CSI plugin errors on nil pointer when csiSnapshotTimeout is set to a short duration The CSI plugin errors on nil pointer when csiSnapshotTimeout is set to a short duration. Sometimes it succeeds to complete the snapshot within a short duration, but often it panics with the backup PartiallyFailed with the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference . Backup is marked as PartiallyFailed when volumeSnapshotContent CR has an error If any of the VolumeSnapshotContent CRs have an error related to removing the VolumeSnapshotBeingCreated annotation, it moves the backup to the WaitingForPluginOperationsPartiallyFailed phase. OADP-2871 Performance issues when restoring 30,000 resources for the first time When restoring 30,000 resources for the first time, without an existing-resource-policy, it takes twice as long to restore them, than it takes during the second and third try with an existing-resource-policy set to update . 
OADP-3071 Post restore hooks might start running before Datadownload operation has released the related PV Due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the related pods persistent volumes (PVs) are released by the Data Mover persistent volume claim (PVC). GCP-Workload Identity Federation VSL backup PartiallyFailed VSL backup PartiallyFailed when GCP workload identity is configured on GCP. For a complete list of all known issues in this release, see the list of OADP 1.3.0 known issues in Jira. 4.2.2.7.4. Upgrade notes Note Always upgrade to the minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3. 4.2.2.7.4.1. Changes from OADP 1.2 to 1.3 The Velero server has been updated from version 1.11 to 1.12. OpenShift API for Data Protection (OADP) 1.3 uses the Velero built-in Data Mover instead of the VolumeSnapshotMover (VSM) or the Volsync Data Mover. This changes the following: The spec.features.dataMover field and the VSM plugin are not compatible with OADP 1.3, and you must remove the configuration from the DataProtectionApplication (DPA) configuration. The Volsync Operator is no longer required for Data Mover functionality, and you can remove it. The custom resource definitions volumesnapshotbackups.datamover.oadp.openshift.io and volumesnapshotrestores.datamover.oadp.openshift.io are no longer required, and you can remove them. The secrets used for the OADP-1.2 Data Mover are no longer required, and you can remove them. OADP 1.3 supports Kopia, which is an alternative file system backup tool to Restic. To employ Kopia, use the new spec.configuration.nodeAgent field as shown in the following example: Example spec: configuration: nodeAgent: enable: true uploaderType: kopia # ... The spec.configuration.restic field is deprecated in OADP 1.3 and will be removed in a future version of OADP. To avoid seeing deprecation warnings, remove the restic key and its values, and use the following new syntax: Example spec: configuration: nodeAgent: enable: true uploaderType: restic # ... Note In a future OADP release, it is planned that the kopia tool will become the default uploaderType value. 4.2.2.7.4.2. Upgrading from OADP 1.2 Technology Preview Data Mover OpenShift API for Data Protection (OADP) 1.2 Data Mover backups cannot be restored with OADP 1.3. To prevent a gap in the data protection of your applications, complete the following steps before upgrading to OADP 1.3: Procedure If your cluster backups are sufficient and Container Storage Interface (CSI) storage is available, back up the applications with a CSI backup. If you require off cluster backups: Back up the applications with a file system backup that uses the --default-volumes-to-fs-backup=true or backup.spec.defaultVolumesToFsBackup options. Back up the applications with your object storage plugins, for example, velero-plugin-for-aws . Note The default timeout value for the Restic file system backup is one hour. In OADP 1.3.1 and later, the default timeout value for Restic and Kopia is four hours. Important To restore OADP 1.2 Data Mover backup, you must uninstall OADP, and install and configure OADP 1.2. 4.2.2.7.4.3. Backing up the DPA configuration You must back up your current DataProtectionApplication (DPA) configuration. 
Procedure Save your current DPA configuration by running the following command: Example USD oc get dpa -n openshift-adp -o yaml > dpa.orig.backup 4.2.2.7.4.4. Upgrading the OADP Operator Use the following sequence when upgrading the OpenShift API for Data Protection (OADP) Operator. Procedure Change your subscription channel for the OADP Operator from stable-1.2 to stable-1.3 . Allow time for the Operator and containers to update and restart. Additional resources Updating installed Operators 4.2.2.7.4.5. Converting DPA to the new version If you need to move backups off cluster with the Data Mover, reconfigure the DataProtectionApplication (DPA) manifest as follows. Procedure Click Operators Installed Operators and select the OADP Operator. In the Provided APIs section, click View more . Click Create instance in the DataProtectionApplication box. Click YAML View to display the current DPA parameters. Example current DPA spec: configuration: features: dataMover: enable: true credentialName: dm-credentials velero: defaultPlugins: - vsm - csi - openshift # ... Update the DPA parameters: Remove the features.dataMover key and values from the DPA. Remove the VolumeSnapshotMover (VSM) plugin. Add the nodeAgent key and values. Example updated DPA spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - csi - openshift # ... Wait for the DPA to reconcile successfully. 4.2.2.7.4.6. Verifying the upgrade Use the following procedure to verify the upgrade. Procedure Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true In OADP 1.3 you can start data movement off cluster per backup versus creating a DataProtectionApplication (DPA) configuration. Example USD velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true Example apiVersion: velero.io/v1 kind: Backup metadata: name: example-backup namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - mysql-persistent storageLocation: dpa-sample-1 ttl: 720h0m0s # ... 4.3. OADP performance 4.3.1. OADP recommended network settings For a supported experience with OpenShift API for Data Protection (OADP), you should have a stable and resilient network across OpenShift nodes, S3 storage, and in supported cloud environments that meet OpenShift network requirement recommendations. To ensure successful backup and restore operations for deployments with remote S3 buckets located off-cluster with suboptimal data paths, it is recommended that your network settings meet the following minimum requirements in such less optimal conditions: Bandwidth (network upload speed to object storage): Greater than 2 Mbps for small backups and 10-100 Mbps depending on the data volume for larger backups. 
Packet loss: 1% Packet corruption: 1% Latency: 100ms Ensure that your OpenShift Container Platform network performs optimally and meets OpenShift Container Platform network requirements. Important Although Red Hat provides supports for standard backup and restore failures, it does not provide support for failures caused by network settings that do not meet the recommended thresholds. 4.4. OADP features and plugins OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications. The default plugins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources. 4.4.1. OADP features OpenShift API for Data Protection (OADP) supports the following features: Backup You can use OADP to back up all applications on the OpenShift Platform, or you can filter the resources by type, namespace, or label. OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic. Note You must exclude Operators from the backup of an application for backup and restore to succeed. Restore You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the objects by namespace, PV, or label. Note You must exclude Operators from the backup of an application for backup and restore to succeed. Schedule You can schedule backups at specified intervals. Hooks You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container. 4.4.2. OADP plugins The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create custom plugins based on the Velero plugins. OADP also provides plugins for OpenShift Container Platform resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots. Table 4.1. OADP plugins OADP plugin Function Storage location aws Backs up and restores Kubernetes objects. AWS S3 Backs up and restores volumes with snapshots. AWS EBS azure Backs up and restores Kubernetes objects. Microsoft Azure Blob storage Backs up and restores volumes with snapshots. Microsoft Azure Managed Disks gcp Backs up and restores Kubernetes objects. Google Cloud Storage Backs up and restores volumes with snapshots. Google Compute Engine Disks openshift Backs up and restores OpenShift Container Platform resources. [1] Object store kubevirt Backs up and restores OpenShift Virtualization resources. [2] Object store csi Backs up and restores volumes with CSI snapshots. [3] Cloud storage that supports CSI snapshots vsm VolumeSnapshotMover relocates snapshots from the cluster into an object store to be used during a restore process to recover stateful applications, in situations such as cluster deletion. [4] Object store Mandatory. Virtual machine disks are backed up with CSI snapshots or Restic. The csi plugin uses the Kubernetes CSI snapshot API. OADP 1.1 or later uses snapshot.storage.k8s.io/v1 OADP 1.0 uses snapshot.storage.k8s.io/v1beta1 OADP 1.2 only. 4.4.3. 
About OADP Velero plugins You can configure two types of plugins when you install Velero: Default cloud provider plugins Custom plugins Both types of plugin are optional, but most users configure at least one cloud provider plugin. 4.4.3.1. Default Velero cloud provider plugins You can install any of the following default Velero cloud provider plugins when you configure the oadp_v1alpha1_dpa.yaml file during deployment: aws (Amazon Web Services) gcp (Google Cloud Platform) azure (Microsoft Azure) openshift (OpenShift Velero plugin) csi (Container Storage Interface) kubevirt (KubeVirt) You specify the desired default plugins in the oadp_v1alpha1_dpa.yaml file during deployment. Example file The following .yaml file installs the openshift , aws , azure , and gcp plugins: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp 4.4.3.2. Custom Velero plugins You can install a custom Velero plugin by specifying the plugin image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment. You specify the desired custom plugins in the oadp_v1alpha1_dpa.yaml file during deployment. Example file The following .yaml file installs the default openshift , azure , and gcp plugins and a custom plugin that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin : apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin 4.4.3.3. Velero plugins returning "received EOF, stopping recv loop" message Note Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred. 4.4.4. Supported architectures for OADP OpenShift API for Data Protection (OADP) supports the following architectures: AMD64 ARM64 PPC64le s390x Note OADP 1.2.0 and later versions support the ARM64 architecture. 4.4.5. OADP support for IBM Power and IBM Z OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to IBM Power(R) and to IBM Z(R). OADP 1.1.7 was tested successfully against OpenShift Container Platform 4.11 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.1.7 in terms of backup locations for these systems. OADP 1.2.3 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.2.3 in terms of backup locations for these systems. OADP 1.3.6 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.3.6 in terms of backup locations for these systems. OADP 1.4.3 was tested successfully against OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.4.3 in terms of backup locations for these systems. 4.4.5.1. 
OADP support for target backup locations using IBM Power IBM Power(R) running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.7 against all S3 backup location targets, which are not AWS, as well. IBM Power(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 against all S3 backup location targets, which are not AWS, as well. IBM Power(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.6 against all S3 backup location targets, which are not AWS, as well. IBM Power(R) running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.3 against all S3 backup location targets, which are not AWS, as well. 4.4.5.2. OADP testing and support for target backup locations using IBM Z IBM Z(R) running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.7 against all S3 backup location targets, which are not AWS, as well. IBM Z(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 against all S3 backup location targets, which are not AWS, as well. IBM Z(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.6 against all S3 backup location targets, which are not AWS, as well. IBM Z(R) running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.3 against all S3 backup location targets, which are not AWS, as well. 4.4.5.2.1. Known issue of OADP using IBM Power(R) and IBM Z(R) platforms Currently, there are backup method restrictions for Single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms.
Only NFS storage is currently compatible with Single-node OpenShift clusters on these platforms. In addition, only the File System Backup (FSB) methods such as Kopia and Restic are supported for backup and restore operations. There is currently no workaround for this issue. 4.4.6. OADP plugins known issues The following section describes known issues in OpenShift API for Data Protection (OADP) plugins: 4.4.6.1. Velero plugin panics during imagestream backups due to a missing secret When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation, does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret . When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error: 024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94... 4.4.6.1.1. Workaround to avoid the panic error To avoid the Velero plugin panic error, perform the following steps: Label the custom BSL with the relevant label: USD oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl After the BSL is labeled, wait until the DPA reconciles. Note You can force the reconciliation by making any minor change to the DPA itself. When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it: USD oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data' 4.4.6.2. OpenShift ADP Controller segmentation fault If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault. You can have either velero or cloudstorage defined, because they are mutually exclusive fields. If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails. If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails. For more information about this issue, see OADP-1054 . 4.4.6.2.1. OpenShift ADP Controller segmentation fault workaround You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault. 4.5. OADP use cases 4.5.1. Backup using OpenShift API for Data Protection and Red Hat OpenShift Data Foundation (ODF) Following is a use case for using OADP and ODF to back up an application. 4.5.1.1. Backing up an application using OADP and ODF In this use case, you back up an application by using OADP and store the backup in object storage provided by Red Hat OpenShift Data Foundation (ODF). You create an object bucket claim (OBC) to configure the backup storage location. You use ODF to configure an Amazon S3-compatible object storage bucket. ODF provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location.
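Before you begin, you can optionally confirm that the NooBaa object bucket claim storage class referenced later in this procedure exists on the cluster. This is only a quick sanity check, not part of the documented procedure; the class name matches the storageClassName value used in the OBC example that follows:
oc get storageclass openshift-storage.noobaa.io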
You use the NooBaa MCG service with OADP by using the aws provider plugin. You configure the Data Protection Application (DPA) with the backup storage location (BSL). You create a backup custom resource (CR) and specify the application namespace to back up. You create and verify the backup. Prerequisites You installed the OADP Operator. You installed the ODF Operator. You have an application with a database running in a separate namespace. Procedure Create an OBC manifest file to request a NooBaa MCG bucket as shown in the following example: Example OBC apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2 1 The name of the object bucket claim. 2 The name of the bucket. Create the OBC by running the following command: USD oc create -f <obc_file_name> 1 1 Specify the file name of the object bucket claim manifest. When you create an OBC, ODF creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command: USD oc extract --to=- cm/test-obc 1 1 test-obc is the name of the OBC. Example output # BUCKET_NAME backup-c20...41fd # BUCKET_PORT 443 # BUCKET_REGION # BUCKET_SUBREGION # BUCKET_HOST s3.openshift-storage.svc To get the bucket credentials from the generated secret , run the following command: USD oc extract --to=- secret/test-obc Example output # AWS_ACCESS_KEY_ID ebYR....xLNMc # AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym Get the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace by running the following command: USD oc get route s3 -n openshift-storage Create a cloud-credentials file with the object bucket credentials as shown in the following command: [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create the cloud-credentials secret with the cloud-credentials file content as shown in the following command: USD oc create secret generic \ cloud-credentials \ -n openshift-adp \ --from-file cloud=cloud-credentials Configure the Data Protection Application (DPA) as shown in the following example: Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: "true" insecureSkipTLSVerify: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp 1 Set to true to use the OADP Data Mover to enable movement of Container Storage Interface (CSI) snapshots to a remote object storage. 2 This is the S3 URL of ODF storage. 3 Specify the bucket name. Create the DPA by running the following command: USD oc apply -f <dpa_filename> Verify that the DPA is created successfully by running the following command. In the example output, you can see the status object has type field set to Reconciled . This means, the DPA is successfully created. 
USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure a backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. Create the backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output. USD oc describe backup test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.5.2. OpenShift API for Data Protection (OADP) restore use case Following is a use case for using OADP to restore a backup to a different namespace. 4.5.2.1. Restoring an application to a different namespace using OADP Restore a backup of an application by using OADP to a new target namespace, test-restore-application . To restore a backup, you create a restore custom resource (CR) as shown in the following example. In the restore CR, the source namespace refers to the application namespace that you included in the backup. You then verify the restore by changing your project to the new restored namespace and verifying the resources. Prerequisites You installed the OADP Operator. You have the backup of an application to be restored. Procedure Create a restore CR as shown in the following example: Example restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3 1 The name of the restore CR. 2 Specify the name of the backup. 3 namespaceMapping maps the source application namespace to the target application namespace. Specify the application namespace that you backed up. test-restore-application is the target namespace where you want to restore the backup. 
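Before applying the restore, you can optionally confirm that the backup referenced in backupName completed. This check is not part of the documented procedure; it assumes the openshift-adp namespace used throughout this use case and the backup name from the earlier backup example:
oc get backups.velero.io <backup_name> -n openshift-adp -o jsonpath='{.status.phase}'
The command prints Completed for a backup that finished successfully and is safe to restore from.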
Apply the restore CR by running the following command: USD oc apply -f <restore_cr_filename> Verification Verify that the restore is in the Completed phase by running the following command: USD oc describe restores.velero.io <restore_name> -n openshift-adp Change to the restored namespace test-restore-application by running the following command: USD oc project test-restore-application Verify the restored resources such as persistent volume claim (pvc), service (svc), deployment, secret, and config map by running the following command: USD oc get pvc,svc,deployment,secret,configmap Example output NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s 4.5.3. Including a self-signed CA certificate during backup You can include a self-signed Certificate Authority (CA) certificate in the Data Protection Application (DPA) and then back up an application. You store the backup in a NooBaa bucket provided by Red Hat OpenShift Data Foundation (ODF). 4.5.3.1. Backing up an application and its self-signed CA certificate The s3.openshift-storage.svc service, provided by ODF, uses a Transport Layer Security protocol (TLS) certificate that is signed with the self-signed service CA. To prevent a certificate signed by unknown authority error, you must include a self-signed CA certificate in the backup storage location (BSL) section of DataProtectionApplication custom resource (CR). For this situation, you must complete the following tasks: Request a NooBaa bucket by creating an object bucket claim (OBC). Extract the bucket details. Include a self-signed CA certificate in the DataProtectionApplication CR. Back up an application. Prerequisites You installed the OADP Operator. You installed the ODF Operator. You have an application with a database running in a separate namespace. Procedure Create an OBC manifest to request a NooBaa bucket as shown in the following example: Example ObjectBucketClaim CR apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2 1 Specifies the name of the object bucket claim. 2 Specifies the name of the bucket. Create the OBC by running the following command: USD oc create -f <obc_file_name> When you create an OBC, ODF creates a secret and a ConfigMap with the same name as the object bucket claim. The secret object contains the bucket credentials, and the ConfigMap object contains information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command: USD oc extract --to=- cm/test-obc 1 1 The name of the OBC is test-obc . 
Example output # BUCKET_NAME backup-c20...41fd # BUCKET_PORT 443 # BUCKET_REGION # BUCKET_SUBREGION # BUCKET_HOST s3.openshift-storage.svc To get the bucket credentials from the secret object, run the following command: USD oc extract --to=- secret/test-obc Example output # AWS_ACCESS_KEY_ID ebYR....xLNMc # AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym Create a cloud-credentials file with the object bucket credentials by using the following example configuration: [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create the cloud-credentials secret with the cloud-credentials file content by running the following command: USD oc create secret generic \ cloud-credentials \ -n openshift-adp \ --from-file cloud=cloud-credentials Extract the service CA certificate from the openshift-service-ca.crt config map by running the following command. Ensure that you encode the certificate in Base64 format and note the value to use in the step. USD oc get cm/openshift-service-ca.crt \ -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo Example output LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... ....gpwOHMwaG9CRmk5a3....FLS0tLS0K Configure the DataProtectionApplication CR manifest file with the bucket name and CA certificate as shown in the following example: Example DataProtectionApplication CR apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: "true" insecureSkipTLSVerify: "false" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3 1 The insecureSkipTLSVerify flag can be set to either true or false . If set to "true", SSL/TLS security is disabled. If set to false , SSL/TLS security is enabled. 2 Specify the name of the bucket extracted in an earlier step. 3 Copy and paste the Base64 encoded certificate from the step. Create the DataProtectionApplication CR by running the following command: USD oc apply -f <dpa_filename> Verify that the DataProtectionApplication CR is created successfully by running the following command: USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure the Backup CR by using the following example: Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. 
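Optionally, before you create the Backup CR, you can confirm that the service CA certificate you embedded in the DataProtectionApplication CR is the one currently served by the cluster and has not expired. This is only a sanity check, not part of the documented procedure; it reads the same config map used in the earlier extraction step:
oc get cm/openshift-service-ca.crt \
  -o jsonpath='{.data.service-ca\.crt}' | openssl x509 -noout -subject -enddate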
Create the Backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the Backup object is in the Completed phase by running the following command: USD oc describe backup test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.5.4. Using the legacy-aws Velero plugin If you are using an AWS S3-compatible backup storage location, you might get a SignatureDoesNotMatch error while backing up your application. This error occurs because some backup storage locations still use the older versions of the S3 APIs, which are incompatible with the newer AWS SDK for Go V2. To resolve this issue, you can use the legacy-aws Velero plugin in the DataProtectionApplication custom resource (CR). The legacy-aws Velero plugin uses the older AWS SDK for Go V1, which is compatible with the legacy S3 APIs, ensuring successful backups. 4.5.4.1. Using the legacy-aws Velero plugin in the DataProtectionApplication CR In the following use case, you configure the DataProtectionApplication CR with the legacy-aws Velero plugin and then back up an application. Note Depending on the backup storage location you choose, you can use either the legacy-aws or the aws plugin in your DataProtectionApplication CR. If you use both of the plugins in the DataProtectionApplication CR, the following error occurs: aws and legacy-aws can not be both specified in DPA spec.configuration.velero.defaultPlugins . Prerequisites You have installed the OADP Operator. You have configured an AWS S3-compatible object storage as a backup location. You have an application with a database running in a separate namespace. Procedure Configure the DataProtectionApplication CR to use the legacy-aws Velero plugin as shown in the following example: Example DataProtectionApplication CR apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - legacy-aws 1 - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: "true" insecureSkipTLSVerify: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp 1 Use the legacy-aws plugin. 2 Specify the bucket name. Create the DataProtectionApplication CR by running the following command: USD oc apply -f <dpa_filename> Verify that the DataProtectionApplication CR is created successfully by running the following command. In the example output, you can see the status object has the type field set to Reconciled and the status field set to "True" . That status indicates that the DataProtectionApplication CR is successfully created. 
USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure a Backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. Create the Backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output. USD oc describe backups.velero.io test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.6. Installing and configuring OADP 4.6.1. About installing OADP As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types: Amazon Web Services Microsoft Azure Google Cloud Platform Multicloud Object Gateway IBM Cloud(R) Object Storage S3 AWS S3 compatible object storage, such as Multicloud Object Gateway or MinIO You can configure multiple backup storage locations within the same namespace for each individual OADP deployment. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
Note The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation . The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage. You can back up persistent volumes (PVs) by using snapshots or a File System Backup (FSB). To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud provider, such as OpenShift Data Foundation Note If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1. x . OADP 1.0. x does not support CSI backup on OCP 4.11 and later. OADP 1.0. x includes Velero 1.7. x and expects the API group snapshot.storage.k8s.io/v1beta1 , which is not present on OCP 4.11 and later. If your cloud provider does not support snapshots or if your storage is NFS, you can back up applications with Backing up applications with File System Backup: Kopia or Restic on object storage. You create a default Secret and then you install the Data Protection Application. 4.6.1.1. AWS S3 compatible backup storage providers OADP is compatible with many object storage providers for use with different backup and snapshot operations. Several object storage providers are fully supported, several are unsupported but known to work, and some have known limitations. 4.6.1.1.1. Supported backup storage providers The following AWS S3 compatible object storage providers are fully supported by OADP through the AWS plugin for use as backup storage locations: MinIO Multicloud Object Gateway (MCG) Amazon Web Services (AWS) S3 IBM Cloud(R) Object Storage S3 Ceph RADOS Gateway (Ceph Object Gateway) Red Hat Container Storage Red Hat OpenShift Data Foundation Google Cloud Platform (GCP) Microsoft Azure Note Google Cloud Platform (GCP) and Microsoft Azure have their own Velero object store plugins. 4.6.1.1.2. Unsupported backup storage providers The following AWS S3 compatible object storage providers, are known to work with Velero through the AWS plugin, for use as backup storage locations, however, they are unsupported and have not been tested by Red Hat: Oracle Cloud DigitalOcean NooBaa, unless installed using Multicloud Object Gateway (MCG) Tencent Cloud Quobyte Cloudian HyperStore Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . 4.6.1.1.3. Backup storage providers with known limitations The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin with a limited feature set: Swift - It works for use as a backup storage location for backup storage, but is not compatible with Restic for filesystem-based volume backup and restore. 4.6.1.2. Configuring Multicloud Object Gateway (MCG) for disaster recovery on OpenShift Data Foundation If you use cluster storage for your MCG bucket backupStorageLocation on OpenShift Data Foundation, configure MCG as an external object store. 
Warning Failure to configure MCG as an external object store might lead to backups not being available. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Procedure Configure MCG as an external object store as described in Adding storage resources for hybrid or Multicloud . Additional resources Overview of backup and snapshot locations in the Velero documentation 4.6.1.3. About OADP update channels When you install an OADP Operator, you choose an update channel . This channel determines which upgrades to the OADP Operator and to Velero you receive. You can switch channels at any time. The following update channels are available: The stable channel is now deprecated. The stable channel contains the patches (z-stream updates) of OADP ClusterServiceVersion for OADP.v1.1.z and older versions from OADP.v1.0.z . The stable-1.0 channel is deprecated and is not supported. The stable-1.1 channel is deprecated and is not supported. The stable-1.2 channel is deprecated and is not supported. The stable-1.3 channel contains OADP.v1.3.z , the most recent OADP 1.3 ClusterServiceVersion . The stable-1.4 channel contains OADP.v1.4.z , the most recent OADP 1.4 ClusterServiceVersion . For more information, see OpenShift Operator Life Cycles . Which update channel is right for you? The stable channel is now deprecated. If you are already using the stable channel, you will continue to get updates from OADP.v1.1.z . Choose the stable-1.y update channel to install OADP 1.y and to continue receiving patches for it. If you choose this channel, you will receive all z-stream patches for version 1.y.z. When must you switch update channels? If you have OADP 1.y installed, and you want to receive patches only for that y-stream, you must switch from the stable update channel to the stable-1.y update channel. You will then receive all z-stream patches for version 1.y.z. If you have OADP 1.0 installed, want to upgrade to OADP 1.1, and then receive patches only for OADP 1.1, you must switch from the stable-1.0 update channel to the stable-1.1 update channel. You will then receive all z-stream patches for version 1.1.z. If you have OADP 1.y installed, with y greater than 0, and want to switch to OADP 1.0, you must uninstall your OADP Operator and then reinstall it using the stable-1.0 update channel. You will then receive all z-stream patches for version 1.0.z. Note You cannot switch from OADP 1.y to OADP 1.0 by switching update channels. You must uninstall the Operator and then reinstall it. 4.6.1.4. Installation of OADP on multiple namespaces You can install OpenShift API for Data Protection into multiple namespaces on the same cluster so that multiple project owners can manage their own OADP instance. This use case has been validated with File System Backup (FSB) and Container Storage Interface (CSI). You install each instance of OADP as specified by the per-platform procedures contained in this document with the following additional requirements: All deployments of OADP on the same cluster must be the same version, for example, 1.4.0. Installing different versions of OADP on the same cluster is not supported. Each individual deployment of OADP must have a unique set of credentials and at least one BackupStorageLocation configuration. 
You can also use multiple BackupStorageLocation configurations within the same namespace. By default, each OADP deployment has cluster-level access across namespaces. OpenShift Container Platform administrators need to carefully review potential impacts, such as not backing up and restoring to and from the same namespace concurrently. Additional resources Cluster service version 4.6.1.5. Velero CPU and memory requirements based on collected data The following recommendations are based on observations of performance made in the scale and performance lab. The backup and restore resources can be impacted by the type of plugin, the amount of resources required by that backup or restore, and the respective data contained in the persistent volumes (PVs) related to those resources. 4.6.1.5.1. CPU and memory requirement for configurations Configuration types [1] Average usage [2] Large usage resourceTimeouts CSI Velero: CPU- Request 200m, Limits 1000m Memory - Request 256Mi, Limits 1024Mi Velero: CPU- Request 200m, Limits 2000m Memory- Request 256Mi, Limits 2048Mi N/A Restic [3] Restic: CPU- Request 1000m, Limits 2000m Memory - Request 16Gi, Limits 32Gi [4] Restic: CPU - Request 2000m, Limits 8000m Memory - Request 16Gi, Limits 40Gi 900m [5] Data Mover N/A N/A 10m - average usage 60m - large usage Average usage - use these settings for most usage situations. Large usage - use these settings for large usage situations, such as a large PV (500GB Usage), multiple namespaces (100+), or many pods within a single namespace (2000 pods+), and for optimal performance for backup and restore involving large datasets. Restic resource usage corresponds to the amount of data, and type of data. For example, many small files or large amounts of data can cause Restic to use large amounts of resources. The Velero documentation references 500m as a supplied default, for most of our testing we found a 200m request suitable with 1000m limit. As cited in the Velero documentation, exact CPU and memory usage is dependent on the scale of files and directories, in addition to environmental limitations. Increasing the CPU has a significant impact on improving backup and restore times. Data Mover - Data Mover default resourceTimeout is 10m. Our tests show that for restoring a large PV (500GB usage), it is required to increase the resourceTimeout to 60m. Note The resource requirements listed throughout the guide are for average usage only. For large usage, adjust the settings as described in the table above. 4.6.1.5.2. NodeAgent CPU for large usage Testing shows that increasing NodeAgent CPU can significantly improve backup and restore times when using OpenShift API for Data Protection (OADP). Important It is not recommended to use Kopia without limits in production environments on nodes running production workloads due to Kopia's aggressive consumption of resources. However, running Kopia with limits that are too low results in CPU limiting and slow backups and restore situations. Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications. You can set these limits in Ceph MDS pods by following the procedure in Changing the CPU and memory resources on the rook-ceph pods . 
You need to add the following lines to the storage cluster Custom Resource (CR) to set the limits:
resources:
  mds:
    limits:
      cpu: "3"
      memory: 128Gi
    requests:
      cpu: "3"
      memory: 8Gi
4.6.2. Installing the OADP Operator You can install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.15 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.14 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. 4.6.2.1. OADP-Velero-OpenShift Container Platform version relationship
OADP version    Velero version    OpenShift Container Platform version
1.3.0    1.12    4.12-4.15
1.3.1    1.12    4.12-4.15
1.3.2    1.12    4.12-4.15
1.3.3    1.12    4.12-4.15
1.3.4    1.12    4.12-4.15
1.3.5    1.12    4.12-4.15
1.4.0    1.14    4.14-4.18
1.4.1    1.14    4.14-4.18
1.4.2    1.14    4.14-4.18
1.4.3    1.14    4.14-4.18
4.6.3. Configuring the OpenShift API for Data Protection with AWS S3 compatible storage You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) S3 compatible storage by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure AWS for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.3.1. About Amazon Simple Storage Service, Identity and Access Management, and GovCloud Amazon Simple Storage Service (Amazon S3) is a storage solution of Amazon for the internet. As an authorized user, you can use this service to store and retrieve any amount of data whenever you want, from anywhere on the web. You securely control access to Amazon S3 and other Amazon services by using the AWS Identity and Access Management (IAM) web service. You can use IAM to manage permissions that control which AWS resources users can access. You use IAM to both authenticate, or verify that a user is who they claim to be, and to authorize, or grant permissions to use resources. AWS GovCloud (US) is an Amazon storage solution developed to meet the stringent and specific data security requirements of the United States Federal Government. AWS GovCloud (US) works the same as Amazon S3 except for the following: You cannot copy the contents of an Amazon S3 bucket in the AWS GovCloud (US) regions directly to or from another AWS region. If you use Amazon S3 policies, use the AWS GovCloud (US) Amazon Resource Name (ARN) identifier to unambiguously specify a resource across all of AWS, such as in IAM policies, Amazon S3 bucket names, and API calls. In AWS GovCloud (US) regions, ARNs have an identifier that is different from the one in other standard AWS regions, arn:aws-us-gov . If you need to specify the US-West or US-East region, use one of the following ARNs: For US-West, use us-gov-west-1 . For US-East, use us-gov-east-1 .
For all other standard regions, ARNs begin with: arn:aws . In AWS GovCloud (US) regions, use the endpoints listed in the AWS GovCloud (US-East) and AWS GovCloud (US-West) rows of the "Amazon S3 endpoints" table on Amazon Simple Storage Service endpoints and quotas . If you are processing export-controlled data, use one of the SSL/TLS endpoints. If you have FIPS requirements, use a FIPS 140-2 endpoint such as https://s3-fips.us-gov-west-1.amazonaws.com or https://s3-fips.us-gov-east-1.amazonaws.com . To find the other AWS-imposed restrictions, see How Amazon Simple Storage Service Differs for AWS GovCloud (US) . 4.6.3.2. Configuring Amazon Web Services You configure Amazon Web Services (AWS) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the AWS CLI installed. Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application. 4.6.3.3. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. 
Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.3.3.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.3.3.2. Creating profiles for different credentials If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file. Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR). Procedure Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example: [backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create a Secret object with the credentials-velero file: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1 Add the profiles to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... 
backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: "backupStorage" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: "volumeSnapshot" 4.6.3.3.3. Configuring the backup storage location using AWS You can configure the AWS backup storage location (BSL) as shown in the following example procedure. Prerequisites You have created an object storage bucket using AWS. You have installed the OADP Operator. Procedure Configure the BSL custom resource (CR) with values as applicable to your use case. Backup storage location apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: "true" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: "50..c-4da1-419f-a16e-ei...49f" 12 customerKeyEncryptionFile: "/credentials/customer-key" 13 signatureVersion: "1" 14 profile: "default" 15 insecureSkipTLSVerify: "true" 16 enableSharedConfig: "true" 17 tagging: "" 18 checksumAlgorithm: "CRC32" 19 1 1 The name of the object store plugin. In this example, the plugin is aws . This field is required. 2 The name of the bucket in which to store backups. This field is required. 3 The prefix within the bucket in which to store backups. This field is optional. 4 The credentials for the backup storage location. You can set custom credentials. If custom credentials are not set, the default credentials' secret is used. 5 The key within the secret credentials' data. 6 The name of the secret containing the credentials. 7 The AWS region where the bucket is located. Optional if s3ForcePathStyle is false. 8 A boolean flag to decide whether to use path-style addressing instead of virtual hosted bucket addressing. Set to true if using a storage service such as MinIO or NooBaa. This is an optional field. The default value is false . 9 You can specify the AWS S3 URL here for explicitness. This field is primarily for storage services such as MinIO or NooBaa. This is an optional field. 10 This field is primarily used for storage services such as MinIO or NooBaa. This is an optional field. 11 The name of the server-side encryption algorithm to use for uploading objects, for example, AES256 . This is an optional field. 12 Specify an AWS KMS key ID. You can format it, as shown in the example, as an alias, such as alias/<KMS-key-alias-name> , or as the full ARN, to enable encryption of the backups stored in S3. Note that kmsKeyId cannot be used with customerKeyEncryptionFile . This is an optional field. 13 Specify the file that has the SSE-C customer key to enable customer key encryption of the backups stored in S3. The file must contain a 32-byte string. The customerKeyEncryptionFile field points to a mounted secret within the velero container. Add the following key-value pair to the velero cloud-credentials secret: customer-key: <your_b64_encoded_32byte_string> . Note that the customerKeyEncryptionFile field cannot be used with the kmsKeyId field. The default value is an empty string ( "" ), which means SSE-C is disabled. This is an optional field. 14 The version of the signature algorithm used to create signed URLs. You use signed URLs to download the backups or fetch the logs.
Valid values are 1 and 4 . The default version is 4 . This is an optional field. 15 The name of the AWS profile in the credentials file. The default value is default . This is an optional field. 16 Set the insecureSkipTLSVerify field to true if you do not want to verify the TLS certificate when connecting to the object store, for example, for self-signed certificates with MinIO. Setting to true is susceptible to man-in-the-middle attacks and is not recommended for production workloads. The default value is false . This is an optional field. 17 Set the enableSharedConfig field to true if you want to load the credentials file as a shared config file. The default value is false . This is an optional field. 18 Specify the tags to annotate the AWS S3 objects. Specify the tags in key-value pairs. The default value is an empty string ( "" ). This is an optional field. 19 Specify the checksum algorithm to use for uploading objects to S3. The supported values are: CRC32 , CRC32C , SHA1 , and SHA256 . If you set the field as an empty string ( "" ), the checksum check will be skipped. The default value is CRC32 . This is an optional field. 4.6.3.3.4. Creating an OADP SSE-C encryption key for additional data security Amazon Web Services (AWS) S3 applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. OpenShift API for Data Protection (OADP) encrypts data by using SSL/TLS, HTTPS, and the velero-repo-credentials secret when transferring the data from a cluster to storage. To protect backup data in case of lost or stolen AWS credentials, apply an additional layer of encryption. The velero-plugin-for-aws plugin provides several additional encryption methods. You should review its configuration options and consider implementing additional encryption. You can store your own encryption keys by using server-side encryption with customer-provided keys (SSE-C). This feature provides additional security if your AWS credentials become exposed. Warning Be sure to store cryptographic keys in a secure and safe manner. Encrypted data and backups cannot be recovered if you do not have the encryption key. Prerequisites To make OADP mount a secret that contains your SSE-C key to the Velero pod at /credentials , use the following default secret name for AWS: cloud-credentials , and leave at least one of the following labels empty: dpa.spec.backupLocations[].velero.credential dpa.spec.snapshotLocations[].velero.credential This is a workaround for a known issue: https://issues.redhat.com/browse/OADP-3971 . Note The following procedure contains an example of a spec:backupLocations block that does not specify credentials. This example would trigger an OADP secret mounting. If you need the backup location to have credentials with a different name than cloud-credentials , you must add a snapshot location, such as the one in the following example, that does not contain a credential name. Because the example does not contain a credential name, the snapshot location will use cloud-credentials as its secret for taking snapshots. Example snapshot location in a DPA without credentials specified snapshotLocations: - velero: config: profile: default region: <region> provider: aws # ... 
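A backupLocations entry that relies on the mounted cloud-credentials secret likewise omits the credential block entirely. The following fragment is a minimal sketch only, not part of the documented procedure; the bucket, prefix, and region values are placeholders:
backupLocations:
  - velero:
      provider: aws
      default: true
      objectStorage:
        bucket: <bucket_name>
        prefix: <prefix>
      config:
        region: <region>
        profile: default
# ...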
Procedure Create an SSE-C encryption key: Generate a random number and save it as a file named sse.key by running the following command: USD dd if=/dev/urandom bs=1 count=32 > sse.key Encode the sse.key by using Base64 and save the result as a file named sse_encoded.key by running the following command: USD cat sse.key | base64 > sse_encoded.key Link the file named sse_encoded.key to a new file named customer-key by running the following command: USD ln -s sse_encoded.key customer-key Create an OpenShift Container Platform secret: If you are initially installing and configuring OADP, create the AWS credential and encryption key secret at the same time by running the following command: USD oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key If you are updating an existing installation, edit the values of the cloud-credential secret block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret # ... Edit the value of the customerKeyEncryptionFile attribute in the backupLocations block of the DataProtectionApplication CR manifest, as in the following example: spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default # ... Warning You must restart the Velero pod to remount the secret credentials properly on an existing installation. The installation is complete, and you can back up and restore OpenShift Container Platform resources. The data saved in AWS S3 storage is encrypted with the new key, and you cannot download it from the AWS S3 console or API without the additional encryption key. Verification To verify that you cannot download the encrypted files without the inclusion of an additional key, create a test file, upload it, and then try to download it. Create a test file by running the following command: USD echo "encrypt me please" > test.txt Upload the test file by running the following command: USD aws s3api put-object \ --bucket <bucket> \ --key test.txt \ --body test.txt \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 Try to download the file. In either the Amazon web console or the terminal, run the following command: USD s3cmd get s3://<bucket>/test.txt test.txt The download fails because the file is encrypted with an additional key. Download the file with the additional encryption key by running the following command: USD aws s3api get-object \ --bucket <bucket> \ --key test.txt \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 \ downloaded.txt Read the file contents by running the following command: USD cat downloaded.txt Example output encrypt me please Additional resources You can also download a file backed up with Velero by using the additional encryption key and running a different command. See Downloading a file with an SSE-C encryption key for files backed up by Velero . 4.6.3.3.4.1. Downloading a file with an SSE-C encryption key for files backed up by Velero When you are verifying an SSE-C encryption key, you can also use the additional encryption key to download files that were backed up with Velero.
Procedure Download the file with the additional encryption key for files backed up by Velero by running the following command: USD aws s3api get-object \ --bucket <bucket> \ --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 \ --debug \ velero_download.tar.gz 4.6.3.4. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.3.4.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.3.4.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.3.4.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.3.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials , which contains separate profiles for the backup and snapshot location credentials. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: "default" s3ForcePathStyle: "true" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: "default" credential: key: cloud name: cloud-credentials 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 9 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 10 Specify whether to force path style URLs for S3 objects (Boolean). Not Required for AWS S3. Required only for S3 compatible storage. 11 Specify the URL of the object store that you are using to store backups. Not required for AWS S3. Required only for S3 compatible storage. 12 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 13 Specify a snapshot location, unless you use CSI snapshots or a File System Backup (FSB) to back up PVs. 14 The snapshot location must be in the same region as the PVs. 15 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the snapshot location. If your backup and snapshot locations use different credentials, create separate profiles in the credentials-velero file. Click Create . 
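If you manage manifests from the command line rather than the web console, you can optionally save the DataProtectionApplication manifest shown above to a file and create it with the oc CLI instead of clicking Create . This is an alternative sketch, not part of the documented console procedure; the file name dpa.yaml is a placeholder:
USD oc create -f dpa.yaml -n openshift-adp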
Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.3.5.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.3.6. Configuring the backup storage location with an MD5 checksum algorithm You can configure the Backup Storage Location (BSL) in the Data Protection Application (DPA) to use an MD5 checksum algorithm for both Amazon Simple Storage Service (Amazon S3) and S3-compatible storage providers. The checksum algorithm calculates the checksum for uploading and downloading objects to Amazon S3. You can use one of the following options to set the checksumAlgorithm field in the spec.backupLocations.velero.config.checksumAlgorithm section of the DPA. CRC32 CRC32C SHA1 SHA256 Note You can also set the checksumAlgorithm field to an empty value to skip the MD5 checksum check. If you do not set a value for the checksumAlgorithm field, then the default value is set to CRC32 . Prerequisites You have installed the OADP Operator. You have configured Amazon S3 or S3-compatible object storage as a backup location. Procedure Configure the BSL in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: "" 1 insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi 1 Specify the checksumAlgorithm . In this example, the checksumAlgorithm field is set to an empty value. You can select an option from the following list: CRC32 , CRC32C , SHA1 , SHA256 .
Important If you are using Noobaa as the object storage provider, and you do not set the spec.backupLocations.velero.config.checksumAlgorithm field in the DPA, an empty value of checksumAlgorithm is added to the BSL configuration. The empty value is only added for BSLs that are created using the DPA. This value is not added if you create the BSL by using any other method. 4.6.3.7. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.3.8. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.3.9. Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. 
Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.3.9.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.3.9.2. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . 4.6.4. 
Configuring the OpenShift API for Data Protection with IBM Cloud You install the OpenShift API for Data Protection (OADP) Operator on an IBM Cloud cluster to back up and restore applications on the cluster. You configure IBM Cloud Object Storage (COS) to store the backups. 4.6.4.1. Configuring the COS instance You create an IBM Cloud Object Storage (COS) instance to store the OADP backup data. After you create the COS instance, configure the HMAC service credentials. Prerequisites You have an IBM Cloud Platform account. You installed the IBM Cloud CLI . You are logged in to IBM Cloud. Procedure Install the IBM Cloud Object Storage (COS) plugin by running the following command: USD ibmcloud plugin install cos -f Set a bucket name by running the following command: USD BUCKET=<bucket_name> Set a bucket region by running the following command: USD REGION=<bucket_region> 1 1 Specify the bucket region, for example, eu-gb . Create a resource group by running the following command: USD ibmcloud resource group-create <resource_group_name> Set the target resource group by running the following command: USD ibmcloud target -g <resource_group_name> Verify that the target resource group is correctly set by running the following command: USD ibmcloud target Example output API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default In the example output, the resource group is set to Default . Set a resource group name by running the following command: USD RESOURCE_GROUP=<resource_group> 1 1 Specify the resource group name, for example, "default" . Create an IBM Cloud service-instance resource by running the following command: USD ibmcloud resource service-instance-create \ <service_instance_name> \ 1 <service_name> \ 2 <service_plan> \ 3 <region_name> 4 1 Specify a name for the service-instance resource. 2 Specify the service name. Alternatively, you can specify a service ID. 3 Specify the service plan for your IBM Cloud account. 4 Specify the region name. Example command USD ibmcloud resource service-instance-create test-service-instance cloud-object-storage \ 1 standard \ global \ -d premium-global-deployment 2 1 The service name is cloud-object-storage . 2 The -d flag specifies the deployment name. Extract the service instance ID by running the following command: USD SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id') Create a COS bucket by running the following command: USD ibmcloud cos bucket-create \ --bucket USDBUCKET \ --ibm-service-instance-id USDSERVICE_INSTANCE_ID \ --region USDREGION Variables such as USDBUCKET , USDSERVICE_INSTANCE_ID , and USDREGION are replaced by the values you set previously. Create HMAC credentials by running the following command: USD ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\"HMAC\":true} Extract the access key ID and the secret access key from the HMAC credentials and save them in the credentials-velero file. You can use the credentials-velero file to create a secret for the backup storage location. Run the following command: USD cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__ 4.6.4.2.
Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.4.3. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.6.4.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5 1 The provider is aws when you use IBM Cloud as a backup storage location. 2 Specify the IBM Cloud Object Storage (COS) bucket name. 3 Specify the COS region name, for example, eu-gb . 4 Specify the S3 URL of the COS bucket. For example, http://s3.eu-gb.cloud-object-storage.appdomain.cloud . Here, eu-gb is the region name. Replace the region name according to your bucket region. 5 Defines the name of the secret you created by using the access key and the secret access key from the HMAC credentials. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.4.5. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. 4.6.4.6. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. 
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.4.7. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.4.8. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. 
Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.4.9. Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.4.10. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. 
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". 4.6.5. Configuring the OpenShift API for Data Protection with Microsoft Azure You install the OpenShift API for Data Protection (OADP) with Microsoft Azure by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure Azure for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.5.1. Configuring Microsoft Azure You configure Microsoft Azure for OpenShift API for Data Protection (OADP). Prerequisites You must have the Azure CLI installed. Tools that use Azure services should always have restricted permissions to make sure that Azure resources are safe. Therefore, instead of having applications sign in as a fully privileged user, Azure offers service principals. An Azure service principal is a name that can be used with applications, hosted services, or automated tools. This identity is used for access to resources. Create a service principal Sign in using a service principal and password Sign in using a service principal and certificate Manage service principal roles Create an Azure resource using a service principal Reset service principal credentials For more details, see Create an Azure service principal with Azure CLI . 4.6.5.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. 
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.5.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-azure . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.5.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-azure . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" provider: azure 1 Backup location Secret with custom name. 4.6.5.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.5.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. 
Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.5.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.5.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
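If you are unsure whether your current session has the cluster-admin role, the following non-destructive check asks the API server whether you can perform any action in any namespace. This is a convenience sketch and not part of the documented procedure.
# Prints "yes" when the logged-in user can perform all actions cluster-wide.
$ oc auth can-i '*' '*' --all-namespaces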
To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.5.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Specify the Azure resource group. 9 Specify the Azure storage account ID. 10 Specify the Azure subscription ID. 11 If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. 12 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 13 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 14 You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs. 15 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . 
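If you prefer to block until the condition is reported rather than polling the status manually, you can watch for it with oc wait. The following sketch assumes the sample DPA name used in this procedure.
# Waits up to two minutes for the DPA to report the Reconciled condition.
$ oc wait dataprotectionapplication/dpa-sample -n openshift-adp \
    --for=condition=Reconciled --timeout=2m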
Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.5.5. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.5.6. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.5.6.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. 
Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.5.6.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.5.6.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.6. Configuring the OpenShift API for Data Protection with Google Cloud Platform You install the OpenShift API for Data Protection (OADP) with Google Cloud Platform (GCP) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure GCP for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.6.1. Configuring Google Cloud Platform You configure Google Cloud Platform (GCP) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the gcloud and gsutil CLI tools installed. 
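As a convenience check, you can confirm that both tools are available on your workstation before you begin; these commands only print version information.
# Both commands print version details if the Google Cloud SDK is installed.
$ gcloud version
$ gsutil version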
See the Google cloud documentation for details. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to create a Secret object for GCP before you install the Data Protection Application. 4.6.6.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. 
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.6.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-gcp . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.6.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-gcp . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 1 Backup location Secret with custom name. 4.6.6.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.6.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. 
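One way to confirm that the Operator is installed and healthy before you edit the DPA is to list the ClusterServiceVersion in the OADP namespace. This is a convenience sketch and not part of the documented procedure.
# The OADP Operator ClusterServiceVersion should report PHASE "Succeeded".
$ oc get csv -n openshift-adp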
Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.6.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.6.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.6.4. Google workload identity federation cloud authentication Applications running outside Google Cloud use service account keys, such as usernames and passwords, to gain access to Google Cloud resources. These service account keys might become a security risk if they are not properly managed. With Google's workload identity federation, you can use Identity and Access Management (IAM) to offer IAM roles, including the ability to impersonate service accounts, to external identities. This eliminates the maintenance and security risks associated with service account keys. Workload identity federation handles encrypting and decrypting certificates, extracting user attributes, and validation. Identity federation externalizes authentication, passing it over to Security Token Services (STS), and reduces the demands on individual developers. Authorization and controlling access to resources remain the responsibility of the application. Note Google workload identity federation is available for OADP 1.3.x and later. When backing up volumes, OADP on GCP with Google workload identity federation authentication only supports CSI snapshots. OADP on GCP with Google workload identity federation authentication does not support Volume Snapshot Locations (VSL) backups. For more details, see Google workload identity federation known issues . If you do not use Google workload identity federation cloud authentication, continue to Installing the Data Protection Application . Prerequisites You have installed a cluster in manual mode with GCP Workload Identity configured . You have access to the Cloud Credential Operator utility ( ccoctl ) and to the associated workload identity pool. 
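If you do not have the pool and provider IDs at hand, you can list them with gcloud before running the ccoctl command in the next step. These calls are a convenience sketch and assume the pools were created in the global location.
# List workload identity pools, then the providers defined in a given pool.
$ gcloud iam workload-identity-pools list --location=global
$ gcloud iam workload-identity-pools providers list \
    --workload-identity-pool=<pool_id> --location=global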
Procedure Create an oadp-credrequest directory by running the following command: USD mkdir -p oadp-credrequest Create a CredentialsRequest.yaml file as following: echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml Use the ccoctl utility to process the CredentialsRequest objects in the oadp-credrequest directory by running the following command: USD ccoctl gcp create-service-accounts \ --name=<name> \ --project=<gcp_project_id> \ --credentials-requests-dir=oadp-credrequest \ --workload-identity-pool=<pool_id> \ --workload-identity-provider=<provider_id> The manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml file is now available to use in the following steps. Create a namespace by running the following command: USD oc create namespace <OPERATOR_INSTALL_NS> Apply the credentials to the namespace by running the following command: USD oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml 4.6.6.4.1. Google workload identity federation known issues Volume Snapshot Location (VSL) backups finish with a PartiallyFailed phase when GCP workload identity federation is configured. Google workload identity federation authentication does not support VSL backups. 4.6.6.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-gcp . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Secret key that contains credentials. For Google workload identity federation cloud authentication use service_account.json . 9 Secret name that contains credentials. If you do not specify this value, the default name, cloud-credentials-gcp , is used. 10 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 11 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 12 Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs. 13 The snapshot location must be in the same region as the PVs. 14 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-gcp , is used. If you specify a custom name, the custom name is used for the backup location. 15 Google workload identity federation supports internal image backup. Set this field to false if you do not want to use image backup. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . 
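If the condition reports an error instead of Reconciled, describing the DPA usually surfaces the validation message that names the misconfigured field. The following sketch uses the sample DPA name from the step above.
# The Status section includes the full Reconciled condition message.
$ oc describe dpa dpa-sample -n openshift-adp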
Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.6.6. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.6.7. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.6.7.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. 
Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.6.7.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.6.7.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.7. Configuring the OpenShift API for Data Protection with Multicloud Object Gateway You install the OpenShift API for Data Protection (OADP) with Multicloud Object Gateway (MCG) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure Multicloud Object Gateway as a backup location. MCG is a component of OpenShift Data Foundation. You configure MCG as a backup location in the DataProtectionApplication custom resource (CR). Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.6.7.1. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials, which you need to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object when you install the Data Protection Application. 4.6.7.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.7.2.1. 
Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.7.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: profile: "default" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Specify the region, following the naming convention of the documentation of your object storage server. 2 Backup location Secret with custom name. 4.6.7.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.7.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. 
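Before and after editing the allocations, you can inspect what the Velero deployment currently requests. This sketch reuses the velero deployment and container names that appear in the velero alias command elsewhere in this section and assumes the velero container is the first container in the pod template.
# Prints the resource requests and limits of the velero container.
$ oc get deployment velero -n openshift-adp \
    -o jsonpath='{.spec.template.spec.containers[0].resources}'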
Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.7.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.7.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. 
To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.7.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: "default" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws . For Azure and GCP object stores, the azure or gcp plugin is required. 3 The openshift plugin is mandatory. 4 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 5 The administrative agent that routes the administrative requests to servers. 6 Set this value to true if you want to enable nodeAgent and perform File System Backup. 7 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 8 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 9 Specify the region, following the naming convention of the documentation of your object storage server. 10 Specify the URL of the S3 endpoint. 11 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 12 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 13 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.7.5. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. 
After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.7.6. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.7.6.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. 
For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.7.6.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.7.6.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Performance tuning guide for Multicloud Object Gateway . Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.8. Configuring the OpenShift API for Data Protection with OpenShift Data Foundation You install the OpenShift API for Data Protection (OADP) with OpenShift Data Foundation by installing the OADP Operator and configuring a backup location and a snapshot location. Then, you install the Data Protection Application. Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You can configure Multicloud Object Gateway or any AWS S3-compatible object storage as a backup location. Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. 
For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.6.8.1. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. Additional resources Creating an Object Bucket Claim using the OpenShift Web Console . 4.6.8.1.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials , unless your backup storage provider has a default plugin, such as aws , azure , or gcp . In that case, the default name is specified in the provider-specific OADP installation procedure. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.8.1.2. 
Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.6.8.2. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.8.2.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.8.2.1.1. Adjusting Ceph CPU and memory requirements based on collected data The following recommendations are based on observations of performance made in the scale and performance lab. The changes are specifically related to Red Hat OpenShift Data Foundation (ODF). If working with ODF, consult the appropriate tuning guides for official recommendations. 4.6.8.2.1.1.1. CPU and memory requirement for configurations Backup and restore operations require large amounts of CephFS PersistentVolumes (PVs). 
To avoid Ceph MDS pods restarting with an out-of-memory (OOM) error, the following configuration is suggested: Configuration types Request Max limit CPU Request changed to 3 Max limit to 3 Memory Request changed to 8 Gi Max limit to 128 Gi 4.6.8.2.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 # ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.8.2.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.8.3. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. 
You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws . For Azure and GCP object stores, the azure or gcp plugin is required. 3 Optional: The kubevirt plugin is used with OpenShift Virtualization. 4 Specify the csi default plugin if you use CSI snapshots to back up PVs. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 10 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 
14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.8.4. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.8.5. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. 
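Before overriding the policy, you might want to see which value the Operator has currently applied. The following is a minimal check, assuming the default velero deployment and the openshift-adp namespace used throughout this section; adjust the names if your installation differs:
$ oc get deployment velero -n openshift-adp \
  -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}{"\n"}'
The value you see reflects either the digest-based default described above or an override already present in the DPA.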
Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.8.5.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.8.5.2. Creating an Object Bucket Claim for disaster recovery on OpenShift Data Foundation If you use cluster storage for your Multicloud Object Gateway (MCG) bucket backupStorageLocation on OpenShift Data Foundation, create an Object Bucket Claim (OBC) using the OpenShift Web Console. Warning Failure to configure an Object Bucket Claim (OBC) might lead to backups not being available. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Procedure Create an Object Bucket Claim (OBC) using the OpenShift web console as described in Creating an Object Bucket Claim using the OpenShift Web Console . 4.6.8.5.3. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.8.5.4. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. 
Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.9. Configuring the OpenShift API for Data Protection with OpenShift Virtualization You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. Then, you can install the Data Protection Application. Back up and restore virtual machines by using the OpenShift API for Data Protection . Note OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options: Container Storage Interface (CSI) backups Container Storage Interface (CSI) backups with DataMover The following storage options are excluded: File system backup and restore Volume snapshot backups and restores For more information, see Backing up applications with File System Backup: Kopia or Restic . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.9.1. Installing and configuring OADP with OpenShift Virtualization As a cluster administrator, you install OADP by installing the OADP Operator. The latest version of the OADP Operator installs Velero 1.14 . Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Install the OADP Operator according to the instructions for your storage provider. Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins. Back up virtual machines by creating a Backup custom resource (CR). Warning Red Hat support is limited to only the following options: CSI backups CSI backups with DataMover. You restore the Backup CR by creating a Restore CR. Additional resources OADP plugins Backup custom resource (CR) Restore CR Using Operator Lifecycle Manager on restricted networks 4.6.9.2. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. 
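If you take the empty-credentials approach described in the preceding note, the steps can look like the following sketch, which assumes the default Secret name cloud-credentials and the openshift-adp namespace:
$ touch credentials-velero   # create an empty credentials file
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero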
Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The kubevirt plugin is mandatory for OpenShift Virtualization. 3 Specify the plugin for the backup provider, for example, gcp , if it exists. 4 The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 10 Specify the nodes on which Kopia are available. By default, Kopia runs on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Warning If you run a backup of a Microsoft Windows virtual machine (VM) immediately after the VM reboots, the backup might fail with a PartiallyFailed error. 
This is because, immediately after a VM boots, the Microsoft Windows Volume Shadow Copy Service (VSS) and Guest Agent (GA) service are not ready. The VSS and GA service being unready causes the backup to fail. In such a case, retry the backup a few minutes after the VM boots. 4.6.9.3. Backing up a single VM If you have a namespace with multiple virtual machines (VMs), and want to back up only one of them, you can use the label selector to filter the VM that needs to be included in the backup. You can filter the VM by using the app: vmname label. Prerequisites You have installed the OADP Operator. You have multiple VMs running in a namespace. You have added the kubevirt plugin in the DataProtectionApplication (DPA) custom resource (CR). You have configured the BackupStorageLocation CR in the DataProtectionApplication CR and BackupStorageLocation is available. Procedure Configure the Backup CR as shown in the following example: Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3 1 Specify the name of the namespace where you have created the VMs. 2 Specify the VM name that needs to be backed up. 3 Specify the name of the BackupStorageLocation CR. To create a Backup CR, run the following command: USD oc apply -f <backup_cr_file_name> 1 1 Specify the name of the Backup CR file. 4.6.9.4. Restoring a single VM After you have backed up a single virtual machine (VM) by using the label selector in the Backup custom resource (CR), you can create a Restore CR and point it to the backup. This restore operation restores a single VM. Prerequisites You have installed the OADP Operator. You have backed up a single VM by using the label selector. Procedure Configure the Restore CR as shown in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true 1 Specifies the name of the backup of a single VM. To restore the single VM, run the following command: USD oc apply -f <restore_cr_file_name> 1 1 Specify the name of the Restore CR file. 4.6.9.5. Restoring a single VM from a backup of multiple VMs If you have a backup containing multiple virtual machines (VMs), and you want to restore only one VM, you can use the LabelSelectors section in the Restore CR to select the VM to restore. To ensure that the persistent volume claim (PVC) attached to the VM is correctly restored, and the restored VM is not stuck in a Provisioning status, use both the app: <vm_name> and the kubevirt.io/created-by labels. To match the kubevirt.io/created-by label, use the UID of DataVolume of the VM. Prerequisites You have installed the OADP Operator. You have labeled the VMs that need to be backed up. You have a backup of multiple VMs. 
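The Restore CR in the following procedure matches the kubevirt.io/created-by label by using the UID of the VM's DataVolume. One way to look up that UID is shown below; the <datavolume_name> and <vm_namespace> placeholders are illustrative and depend on how the VM's storage was created:
$ oc get datavolume -n <vm_namespace>   # list the DataVolumes in the VM namespace
$ oc get datavolume <datavolume_name> -n <vm_namespace> -o jsonpath='{.metadata.uid}{"\n"}'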
Procedure Before you take a backup of many VMs, ensure that the VMs are labeled by running the following command: USD oc label vm <vm_name> app=<vm_name> -n openshift-adp Configure the label selectors in the Restore CR as shown in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2 1 Specify the UID of DataVolume of the VM that you want to restore. For example, b6... 53a-ddd7-4d9d-9407-a0c... e5 . 2 Specify the name of the VM that you want to restore. For example, test-vm . To restore a VM, run the following command: USD oc apply -f <restore_cr_file_name> 1 1 Specify the name of the Restore CR file. 4.6.9.6. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.9.7. Overriding the imagePullPolicy setting in the DPA In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images. In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly: If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent . If the image does not have the digest, the Operator sets imagePullPolicy to Always . You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA). Prerequisites You have installed the OADP Operator. 
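Because the default behavior depends on whether the image reference carries a digest, you can inspect the image currently set on the Velero deployment before deciding whether an override is needed. This is a minimal check that assumes the default velero deployment in the openshift-adp namespace:
$ oc get deployment velero -n openshift-adp \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
An image reference that contains a sha256 or sha512 digest results in the IfNotPresent default described above.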
Procedure Configure the spec.imagePullPolicy field in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1 1 Specify the value for imagePullPolicy . In this example, the imagePullPolicy field is set to Never . 4.6.9.7.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.9.8. About incremental back up support OADP supports incremental backups of block and Filesystem persistent volumes for both containerized, and OpenShift Virtualization workloads. The following table summarizes the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover: Table 4.2. OADP backup support matrix for containerized workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem S [1] , I [2] S [1] , I [2] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Table 4.3. OADP backup support matrix for OpenShift Virtualization workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem N [3] N [3] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Backup supported Incremental backup supported Not supported Note The CSI Data Mover backups use Kopia regardless of uploaderType . Important Red Hat only supports the combination of OADP versions 1.3.0 and later, and OpenShift Virtualization versions 4.14 and later. OADP versions before 1.3.0 are not supported for back up and restore of OpenShift Virtualization. 4.6.10. Configuring the OpenShift API for Data Protection (OADP) with more than one Backup Storage Location You can configure one or more backup storage locations (BSLs) in the Data Protection Application (DPA). You can also select the location to store the backup in when you create the backup. With this configuration, you can store your backups in the following ways: To different regions To a different storage provider OADP supports multiple credentials for configuring more than one BSL, so that you can specify the credentials to use with any BSL. 4.6.10.1. 
Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.10.2. OADP use case for two BSLs In this use case, you configure the DPA with two storage locations by using two cloud credentials. You back up an application with a database by using the default BSL. OADP stores the backup resources in the default BSL. You then backup the application again by using the second BSL. Prerequisites You must install the OADP Operator. You must configure two backup storage locations: AWS S3 and Multicloud Object Gateway (MCG). You must have an application with a database deployed on a Red Hat OpenShift cluster. Procedure Create the first Secret for the AWS S3 storage provider with the default name by running the following command: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1 1 Specify the name of the cloud credentials file for AWS S3. Create the second Secret for MCG with a custom name by running the following command: USD oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1 1 Specify the name of the cloud credentials file for MCG. Note the name of the mcg-secret custom secret. Configure the DPA with the two BSLs as shown in the following example. 
Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: "true" profile: noobaa region: <region_name> 3 s3ForcePathStyle: "true" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws 1 Specify the AWS region for the bucket. 2 Specify the AWS S3 bucket name. 3 Specify the region, following the naming convention of the documentation of MCG. 4 Specify the URL of the S3 endpoint for MCG. 5 Specify the name of the custom secret for MCG storage. 6 Specify the MCG bucket name. Create the DPA by running the following command: USD oc create -f <dpa_file_name> 1 1 Specify the file name of the DPA you configured. Verify that the DPA has reconciled by running the following command: USD oc get dpa -o yaml Verify that the BSLs are available by running the following command: USD oc get bsl Example output NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s Create a backup CR with the default BSL. Note In the following example, the storageLocation field is not specified in the backup CR. Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. Create a backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. Verify that the backup completed with the default BSL by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Create a backup CR by using MCG as the BSL. In the following example, note that the second storageLocation value is specified at the time of backup CR creation. Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. 2 Specify the second storage location. Create a second backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. Verify that the backup completed with the storage location as MCG by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Additional resources Creating profiles for different credentials 4.6.11. Configuring the OpenShift API for Data Protection (OADP) with more than one Volume Snapshot Location You can configure one or more Volume Snapshot Locations (VSLs) to store the snapshots in different cloud provider regions. 4.6.11.1. Configuring the DPA with more than one VSL You configure the DPA with more than one VSL and specify the credentials provided by the cloud provider. Make sure that you configure the snapshot location in the same region as the persistent volumes. See the following example. 
Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #... 1 Specify the region. The snapshot location must be in the same region as the persistent volumes. 2 Specify the custom credential name. 4.7. Uninstalling OADP 4.7.1. Uninstalling the OpenShift API for Data Protection You uninstall the OpenShift API for Data Protection (OADP) by deleting the OADP Operator. See Deleting Operators from a cluster for details. 4.8. OADP backing up 4.8.1. Backing up applications Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if using non-local backups, for example, S3 buckets. Because all taken backup remains until expired, also check the time to live (TTL) setting of the schedule. You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR . The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage. If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots . For more information about CSI volume snapshots, see CSI volume snapshots . Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation . The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage. If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Kopia or Restic. See Backing up applications with File System Backup: Kopia or Restic . PodVolumeRestore fails with a ... /.snapshot: read-only file system error The ... /.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory. Do not give Velero write access to the .snapshot directory, and disable client access to this directory. 
Additional resources Enable or disable client access to Snapshot copy directory by editing a share Prerequisites for backup and restore with FlashBlade Important The OpenShift API for Data Protection (OADP) does not support backing up volume snapshots that were created by other software. 4.8.1.1. Previewing resources before running backup and restore OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations. Prerequisites You have installed the OADP Operator. Procedure To preview the resources included in the backup before running the actual backup, run the following command: USD velero backup create <backup-name> --snapshot-volumes false 1 1 Specify the value of --snapshot-volumes parameter as false . To know more details about the backup resources, run the following command: USD velero describe backup <backup_name> --details 1 1 Specify the name of the backup. To preview the resources included in the restore before running the actual restore, run the following command: USD velero restore create --from-backup <backup-name> 1 1 Specify the name of the backup created to review the backup resources. Important The velero restore create command creates restore resources in the cluster. You must delete the resources created as part of the restore, after you review the resources. To know more details about the restore resources, run the following command: USD velero describe restore <restore_name> --details 1 1 Specify the name of the restore. You can create backup hooks to run commands before or after the backup operation. See Creating backup hooks . You can schedule backups by creating a Schedule CR instead of a Backup CR. See Scheduling backups using Schedule CR . 4.8.1.2. Known issues OpenShift Container Platform 4.15 enforces a pod security admission (PSA) policy that can hinder the readiness of pods during a Restic restore process. This issue has been resolved in the OADP 1.1.6 and OADP 1.2.2 releases, therefore it is recommended that users upgrade to these releases. For more information, see Restic restore partially failing on OCP 4.15 due to changed PSA policy . Additional resources Installing Operators on clusters for administrators Installing Operators in namespaces for non-administrators 4.8.2. Creating a Backup CR You back up Kubernetes images, internal images, and persistent volumes (PVs) by creating a Backup custom resource (CR). Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Backup location prerequisites: You must have S3 object storage configured for Velero. You must have a backup location configured in the DataProtectionApplication CR. Snapshot location prerequisites: Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots. For CSI snapshots, you must create a VolumeSnapshotClass CR to register the CSI driver. You must have a volume location configured in the DataProtectionApplication CR. 
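The prerequisites above include the DataProtectionApplication CR being in a Ready state. One quick way to confirm that it has reconciled, assuming the CR name dpa-sample and the openshift-adp namespace used in the earlier examples, is:
$ oc get dpa dpa-sample -n openshift-adp \
  -o jsonpath='{.status.conditions[?(@.type=="Reconciled")].status}{"\n"}'
The command prints True if the DPA has reconciled successfully.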
Procedure Retrieve the backupStorageLocations CRs by entering the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m Create a Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s 5 labelSelector: 6 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 7 - matchLabels: app: <label_1> app: <label_2> app: <label_3> 1 Specify an array of namespaces to back up. 2 Optional: Specify an array of resources to include in the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. If unspecified, all resources are included. 3 Optional: Specify an array of resources to exclude from the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. 4 Specify the name of the backupStorageLocations CR. 5 The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. But storing this data consumes more space in the target backup locations. An additional storage is consumed with frequent backups, which are created even before other unexpired completed backups might have timed out. 6 Map of {key,value} pairs of backup resources that have all the specified labels. 7 Map of {key,value} pairs of backup resources that have one or more of the specified labels. Verify that the status of the Backup CR is Completed : USD oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}' 4.8.3. Backing up persistent volumes with CSI snapshots You back up persistent volumes with Container Storage Interface (CSI) snapshots by editing the VolumeSnapshotClass custom resource (CR) of the cloud storage before you create the Backup CR, see CSI volume snapshots . For more information, see Creating a Backup CR . Prerequisites The cloud provider must support CSI snapshots. You must enable CSI in the DataProtectionApplication CR. Procedure Add the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR: Example configuration file apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: "true" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3 1 Must be set to true . 2 If you are restoring this volume in another cluster with the same driver, make sure that you set the snapshot.storage.kubernetes.io/is-default-class parameter to false instead of setting it to true . Otherwise, the restore will partially fail. 3 OADP supports the Retain and Delete deletion policy types for CSI and Data Mover backup and restore. steps You can now create a Backup CR. 4.8.4. Backing up applications with File System Backup: Kopia or Restic You can use OADP to back up and restore Kubernetes volumes attached to pods from the file system of the volumes. This process is called File System Backup (FSB) or Pod Volume Backup (PVB). 
It is accomplished by using modules from the open source backup tools Restic or Kopia. If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using FSB. Note Restic is installed by the OADP Operator by default. If you prefer, you can install Kopia instead. FSB integration with OADP provides a solution for backing up and restoring almost any type of Kubernetes volumes. This integration is an additional capability of OADP and is not a replacement for existing functionality. You back up Kubernetes resources, internal images, and persistent volumes with Kopia or Restic by editing the Backup custom resource (CR). You do not need to specify a snapshot location in the DataProtectionApplication CR. Note In OADP version 1.3 and later, you can use either Kopia or Restic for backing up applications. For the Built-in DataMover, you must use Kopia. In OADP version 1.2 and earlier, you can only use Restic for backing up applications. Important FSB does not support backing up hostPath volumes. For more information, see FSB limitations . PodVolumeRestore fails with a ... /.snapshot: read-only file system error The ... /.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory. Do not give Velero write access to the .snapshot directory, and disable client access to this directory. Additional resources Enable or disable client access to Snapshot copy directory by editing a share Prerequisites for backup and restore with FlashBlade Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. You must not disable the default nodeAgent installation by setting spec.configuration.nodeAgent.enable to false in the DataProtectionApplication CR. You must select Kopia or Restic as the uploader by setting spec.configuration.nodeAgent.uploaderType to kopia or restic in the DataProtectionApplication CR. The DataProtectionApplication CR must be in a Ready state. Procedure Create the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1 ... 1 In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true setting within the spec block. In OADP version 1.1, add defaultVolumesToRestic: true . 4.8.5. Creating backup hooks When performing a backup, it is possible to specify one or more commands to execute in a container within a pod, based on the pod being backed up. The commands can be configured to run before any custom action processing ( Pre hooks), or after all custom actions have been completed and any additional items specified by the custom action have been backed up ( Post hooks). You create backup hooks to run commands in a container in a pod by editing the Backup custom resource (CR). Procedure Add a hook to the spec.hooks block of the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11 ...
1 Optional: You can specify namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Optional: You can specify namespaces to which the hook does not apply. 3 Currently, pods are the only supported resource that hooks can apply to. 4 Optional: You can specify resources to which the hook does not apply. 5 Optional: This hook only applies to objects matching the label. If this value is not specified, the hook applies to all objects. 6 Array of hooks to run before the backup. 7 Optional: If the container is not specified, the command runs in the first container in the pod. 8 This is the entry point for the init container being added. 9 Allowed values for error handling are Fail and Continue . The default is Fail . 10 Optional: How long to wait for the commands to run. The default is 30s . 11 This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks. 4.8.6. Scheduling backups using Schedule CR The schedule operation allows you to create a backup of your data at a particular time, specified by a Cron expression. You schedule backups by creating a Schedule custom resource (CR) instead of a Backup CR. Warning Leave enough time in your backup schedule for a backup to finish before another backup is created. For example, if a backup of a namespace typically takes 10 minutes, do not schedule backups more frequently than every 15 minutes. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Procedure Retrieve the backupStorageLocations CRs: USD oc get backupStorageLocations -n openshift-adp Example output NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m Create a Schedule CR, as in the following example: USD cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s 5 EOF Note To schedule a backup at specific intervals, enter the <duration_in_minutes> in the following format: schedule: "*/10 * * * *" Enter the minutes value between quotation marks ( " " ). 1 cron expression to schedule the backup, for example, 0 7 * * * to perform a backup every day at 7:00. 2 Array of namespaces to back up. 3 Name of the backupStorageLocations CR. 4 Optional: In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true key-value pair to your configuration when performing backups of volumes with Restic. In OADP version 1.1, add the defaultVolumesToRestic: true key-value pair when you back up volumes with Restic. 5 The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. But storing this data consumes more space in the target backup locations. An additional storage is consumed with frequent backups, which are created even before other unexpired completed backups might have timed out. Verify that the status of the Schedule CR is Completed after the scheduled backup runs: USD oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}' 4.8.7. 
Deleting backups You can delete a backup by creating the DeleteBackupRequest custom resource (CR) or by running the velero backup delete command as explained in the following procedures. The volume backup artifacts are deleted at different times depending on the backup method: Restic: The artifacts are deleted in the full maintenance cycle, after the backup is deleted. Container Storage Interface (CSI): The artifacts are deleted immediately when the backup is deleted. Kopia: The artifacts are deleted after three full maintenance cycles of the Kopia repository, after the backup is deleted. 4.8.7.1. Deleting a backup by creating a DeleteBackupRequest CR You can delete a backup by creating a DeleteBackupRequest custom resource (CR). Prerequisites You have run a backup of your application. Procedure Create a DeleteBackupRequest CR manifest file: apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1 1 Specify the name of the backup. Apply the DeleteBackupRequest CR to delete the backup: USD oc apply -f <deletebackuprequest_cr_filename> 4.8.7.2. Deleting a backup by using the Velero CLI You can delete a backup by using the Velero CLI. Prerequisites You have run a backup of your application. You downloaded the Velero CLI and can access the Velero binary in your cluster. Procedure To delete the backup, run the following Velero command: USD velero backup delete <backup_name> -n openshift-adp 1 1 Specify the name of the backup. 4.8.7.3. About Kopia repository maintenance There are two types of Kopia repository maintenance: Quick maintenance Runs every hour to keep the number of index blobs (n) low. A high number of indexes negatively affects the performance of Kopia operations. Does not delete any metadata from the repository without ensuring that another copy of the same metadata exists. Full maintenance Runs every 24 hours to perform garbage collection of repository contents that are no longer needed. snapshot-gc , a full maintenance task, finds all files and directory listings that are no longer accessible from snapshot manifests and marks them as deleted. A full maintenance is a resource-costly operation, as it requires scanning all directories in all snapshots that are active in the cluster. 4.8.7.3.1. Kopia maintenance in OADP The repo-maintain-job jobs are executed in the namespace where OADP is installed, as shown in the following example: pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m You can check the logs of the repo-maintain-job for more details about the cleanup and the removal of artifacts in the backup object storage. You can find a note, as shown in the following example, in the repo-maintain-job when the full cycle maintenance is due: not due for full maintenance cycle until 2024-00-00 18:29:4 Important Three successful executions of a full maintenance cycle are required for the objects to be deleted from the backup object storage. This means you can expect up to 72 hours for all the artifacts in the backup object storage to be deleted. 4.8.7.4. Deleting a backup repository After you delete the backup, and after the Kopia repository maintenance cycles to delete the related artifacts are complete, the backup is no longer referenced by any metadata or manifest objects. 
You can then delete the backuprepository custom resource (CR) to complete the backup deletion process. Prerequisites You have deleted the backup of your application. You have waited up to 72 hours after the backup is deleted. This time frame allows Kopia to run the repository maintenance cycles. Procedure To get the name of the backup repository CR for a backup, run the following command: USD oc get backuprepositories.velero.io -n openshift-adp To delete the backup repository CR, run the following command: USD oc delete backuprepository <backup_repository_name> -n openshift-adp 1 1 Specify the name of the backup repository from the earlier step. 4.8.8. About Kopia Kopia is a fast and secure open-source backup and restore tool that allows you to create encrypted snapshots of your data and save the snapshots to remote or cloud storage of your choice. Kopia supports network and local storage locations, and many cloud or remote storage locations, including: Amazon S3 and any cloud storage that is compatible with S3 Azure Blob Storage Google Cloud Storage platform Kopia uses content-addressable storage for snapshots: Snapshots are always incremental; data that is already included in snapshots is not re-uploaded to the repository. A file is only uploaded to the repository again if it is modified. Stored data is deduplicated; if multiple copies of the same file exist, only one of them is stored. If files are moved or renamed, Kopia can recognize that they have the same content and does not upload them again. 4.8.8.1. OADP integration with Kopia OADP 1.3 supports Kopia as the backup mechanism for pod volume backup in addition to Restic. You must choose one or the other at installation by setting the uploaderType field in the DataProtectionApplication custom resource (CR). The possible values are restic or kopia . If you do not specify an uploaderType , OADP 1.3 defaults to using Kopia as the backup mechanism. The data is written to and read from a unified repository. The following example shows a DataProtectionApplication CR configured for using Kopia: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia # ... 4.9. OADP restoring 4.9.1. Restoring applications You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR . You can create restore hooks to run commands in a container in a pod by editing the Restore CR. See Creating restore hooks . 4.9.1.1. Previewing resources before running backup and restore OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations. Prerequisites You have installed the OADP Operator. Procedure To preview the resources included in the backup before running the actual backup, run the following command: USD velero backup create <backup-name> --snapshot-volumes false 1 1 Specify the value of --snapshot-volumes parameter as false . To know more details about the backup resources, run the following command: USD velero describe backup <backup_name> --details 1 1 Specify the name of the backup. 
To preview the resources included in the restore before running the actual restore, run the following command: USD velero restore create --from-backup <backup-name> 1 1 Specify the name of the backup created to review the backup resources. Important The velero restore create command creates restore resources in the cluster. You must delete the resources created as part of the restore, after you review the resources. To know more details about the restore resources, run the following command: USD velero describe restore <restore_name> --details 1 1 Specify the name of the restore. 4.9.1.2. Creating a Restore CR You restore a Backup custom resource (CR) by creating a Restore CR. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. You must have a Velero Backup CR. The persistent volume (PV) capacity must match the requested size at backup time. Adjust the requested size if needed. Procedure Create a Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3 1 Name of the Backup CR. 2 Optional: Specify an array of resources to include in the restore process. Resources might be shortcuts (for example, po for pods ) or fully-qualified. If unspecified, all resources are included. 3 Optional: The restorePVs parameter can be set to false to turn off restore of PersistentVolumes from VolumeSnapshot of Container Storage Interface (CSI) snapshots or from native snapshots when VolumeSnapshotLocation is configured. Verify that the status of the Restore CR is Completed by entering the following command: USD oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}' Verify that the backup resources have been restored by entering the following command: USD oc get all -n <namespace> 1 1 Namespace that you backed up. If you restore DeploymentConfig with volumes or if you use post-restore hooks, run the dc-post-restore.sh cleanup script (previously named dc-restic-post-restore.sh) by entering the following command: USD bash dc-post-restore.sh Note During the restore process, the OADP Velero plug-ins scale down the DeploymentConfig objects and restore the pods as standalone pods. This is done to prevent the cluster from deleting the restored DeploymentConfig pods immediately on restore and to allow the restore and post-restore hooks to complete their actions on the restored pods. The cleanup script shown below removes these disconnected pods and scales any DeploymentConfig objects back up to the appropriate number of replicas. Example 4.1.
dc-restic-post-restore.sh dc-post-restore.sh cleanup script #!/bin/bash set -e # if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD="sha256sum" else CHECKSUM_CMD="shasum -a 256" fi label_name () { if [ "USD{#1}" -le "63" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo "USD{1:0:57}USD{sha:0:6}" } if [[ USD# -ne 1 ]]; then echo "usage: USD{BASH_SOURCE} restore-name" exit 1 fi echo "restore: USD1" label=USD(label_name USD1) echo "label: USDlabel" echo Deleting disconnected restore pods oc delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.metadata.annotations.oadp\.openshift\.io/original-replicas}{","}{.metadata.annotations.oadp\.openshift\.io/original-paused}{"\n"}') do IFS=',' read -ra dc_arr <<< "USDdc" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done 4.9.1.3. Creating restore hooks You create restore hooks to run commands in a container in a pod by editing the Restore custom resource (CR). You can create two types of restore hooks: An init hook adds an init container to a pod to perform setup tasks before the application container starts. If you restore a Restic backup, the restic-wait init container is added before the restore hook init container. An exec hook runs commands or scripts in a container of a restored pod. Procedure Add a hook to the spec.hooks block of the Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - "psql < /backup/backup.sql" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9 1 Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Currently, pods are the only supported resource that hooks can apply to. 3 Optional: This hook only applies to objects matching the label selector. 4 Optional: Timeout specifies the maximum length of time Velero waits for initContainers to complete. 5 Optional: If the container is not specified, the command runs in the first container in the pod. 6 This is the entrypoint for the init container being added. 7 Optional: How long to wait for a container to become ready. This should be long enough for the container to start and for any preceding hooks in the same container to complete. If not set, the restore process waits indefinitely. 8 Optional: How long to wait for the commands to run. The default is 30s . 9 Allowed values for error handling are Fail and Continue : Continue : Only command failures are logged. Fail : No more restore hooks run in any container in any pod. 
The status of the Restore CR will be PartiallyFailed . Important During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB, and the postHook is terminated prematurely. This happens because, during the restore operation, OpenShift controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which velero runs the FSB and the post restore hook. For more information about image stream trigger, see "Triggering updates on image stream changes". The workaround for this behavior is a two-step restore process: First, perform a restore excluding the Deployment resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --exclude-resources=deployment.apps After the first restore is successful, perform a second restore by including these resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --include-resources=deployment.apps Additional resources Triggering updates on image stream changes 4.10. OADP and ROSA 4.10.1. Backing up applications on ROSA clusters using OADP You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to back up and restore application data. ROSA is a fully-managed, turnkey application platform that allows you to deliver value to your customers by building and deploying applications. ROSA provides seamless integration with a wide range of Amazon Web Services (AWS) compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivery of differentiating experiences to your customers. You can subscribe to the service directly from your AWS account. After you create your clusters, you can operate your clusters with the OpenShift Container Platform web console or through Red Hat OpenShift Cluster Manager . You can also use ROSA with OpenShift APIs and command-line interface (CLI) tools. For additional information about ROSA installation, see Installing Red Hat OpenShift Service on AWS (ROSA) interactive walkthrough . Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API. This process is performed in the following two stages: Prepare AWS credentials Install the OADP Operator and give it an IAM role 4.10.1.1. Preparing AWS credentials for OADP An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation. Procedure Create the following environment variables by running the following commands: Important Change the cluster name to match your ROSA cluster, and ensure you are logged into the cluster as an administrator. Ensure that all fields are outputted correctly before continuing. 
USD export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME="USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials" export SCRATCH="/tmp/USD{CLUSTER_NAME}/oadp" mkdir -p USD{SCRATCH} echo "Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}" 1 Replace my-cluster with your ROSA cluster name. On the AWS account, create an IAM policy to allow access to AWS S3: Check to see if the policy exists by running the following command: USD POLICY_ARN=USD(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text) 1 1 Replace RosaOadpVer1 with your policy name. Enter the following command to create the policy JSON file and then create the policy in ROSA: Note If the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the if statement intentionally skips the policy creation. USD if [[ -z "USD{POLICY_ARN}" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts", "ec2:DescribeSnapshots", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name "RosaOadpVer1" \ --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn \ --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp \ --output text) fi 1 SCRATCH is a name for a temporary directory created for the environment variables.
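Optionally, you can confirm that the policy exists and carries the expected tags before you continue. This check is not part of the documented procedure and assumes the POLICY_ARN variable set in the previous step: USD aws iam get-policy --policy-arn USD{POLICY_ARN} The command returns the policy metadata, including its tags, so you can verify that POLICY_ARN points at the policy you intend to attach to the role.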
View the policy ARN by running the following command: USD echo USD{POLICY_ARN} Create an IAM role trust policy for the cluster: Create the trust policy file by running the following command: USD cat <<EOF > USD{SCRATCH}/trust-policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_ENDPOINT}:sub": [ "system:serviceaccount:openshift-adp:openshift-adp-controller-manager", "system:serviceaccount:openshift-adp:velero"] } } }] } EOF Create the role by running the following command: USD ROLE_ARN=USD(aws iam create-role --role-name \ "USD{ROLE_NAME}" \ --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json \ --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp \ --query Role.Arn --output text) View the role ARN by running the following command: USD echo USD{ROLE_ARN} Attach the IAM policy to the IAM role by running the following command: USD aws iam attach-role-policy --role-name "USD{ROLE_NAME}" \ --policy-arn USD{POLICY_ARN} 4.10.1.2. Installing the OADP Operator and providing the IAM role AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. Red Hat OpenShift Service on AWS (ROSA) with STS is the recommended credential mode for ROSA clusters. This document describes how to install OpenShift API for Data Protection (OADP) on ROSA with AWS STS. Important Restic is unsupported. Kopia file system backup (FSB) is supported when backing up file systems that do not have Container Storage Interface (CSI) snapshotting support. Example file systems include the following: Amazon Elastic File System (EFS) Network File System (NFS) emptyDir volumes Local volumes For backing up volumes, OADP on ROSA with AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots. In an Amazon ROSA cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported. The Data Mover feature is not currently supported in ROSA clusters. You can use native AWS S3 tools for moving data. Prerequisites An OpenShift Container Platform ROSA cluster with the required access and tokens. For instructions, see the procedure Preparing AWS credentials for OADP . If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN , for each cluster. Procedure Create an OpenShift Container Platform secret from your AWS token file by entering the following commands: Create the credentials file: USD cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF 1 Replace <aws_region> with the AWS region to use for the STS endpoint. Create a namespace for OADP: USD oc create namespace openshift-adp Create the OpenShift Container Platform secret: USD oc -n openshift-adp create secret generic cloud-credentials \ --from-file=USD{SCRATCH}/credentials Note In OpenShift Container Platform versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO).
In this workflow, you do not need to create the above secret, you only need to supply the role ARN during the installation of OLM-managed operators using the OpenShift Container Platform web console, for more information see Installing from OperatorHub using the web console . The preceding secret is created automatically by CCO. Install the OADP Operator: In the OpenShift Container Platform web console, browse to Operators OperatorHub . Search for the OADP Operator . In the role_ARN field, paste the role_arn that you created previously and click Install . Create AWS cloud storage using your AWS credentials by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF Check your application's storage default storage class by entering the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h Get the storage class by running the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h Note The following storage classes will work: gp3-csi gp2-csi gp3 gp2 If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration. Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored: If you are using only CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF 1 ROSA supports internal image backup. Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The type of uploader. The possible values are restic or kopia . The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field. 
If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: "true" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF 1 ROSA supports internal image backup. Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The credentialsFile field is the mounted location of the bucket credential on the pod. 4 The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket. 5 Use the profile name set in the AWS credentials file. 6 Specify region as your AWS region. This must be the same as the cluster region. You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications . Important The enable parameter of restic is set to false in this configuration, because OADP does not support Restic in ROSA environments. If you use OADP 1.2, replace this configuration: nodeAgent: enable: false uploaderType: restic with the following configuration: restic: enable: false If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration. 4.10.1.3. Updating the IAM role ARN in the OADP Operator subscription While installing the OADP Operator on a ROSA Security Token Service (STS) cluster, if you provide an incorrect IAM role Amazon Resource Name (ARN), the openshift-adp-controller pod gives an error. The credential requests that are generated contain the wrong IAM role ARN. To update the credential requests object with the correct IAM role ARN, you can edit the OADP Operator subscription and patch the IAM role ARN with the correct value. By editing the OADP Operator subscription, you do not have to uninstall and reinstall OADP to update the IAM role ARN. Prerequisites You have a Red Hat OpenShift Service on AWS STS cluster with the required access and tokens. You have installed OADP on the ROSA STS cluster. Procedure To verify that the OADP subscription has the wrong IAM role ARN environment variable set, run the following command: USD oc get sub -o yaml redhat-oadp-operator Example subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: "2025-01-15T07:18:31Z" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: "" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: "77363" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2 1 Verify the value of ROLEARN you want to update. 
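To inspect only the ROLEARN value instead of reviewing the full subscription YAML, you can use a JSONPath filter. This is an optional convenience command that assumes the subscription name and namespace shown in the preceding example: USD oc get subscription redhat-oadp-operator -n openshift-adp -o jsonpath='{.spec.config.env[?(@.name=="ROLEARN")].value}'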
Update the ROLEARN field of the subscription with the correct role ARN by running the following command: USD oc patch subscription redhat-oadp-operator -p '{"spec": {"config": {"env": [{"name": "ROLEARN", "value": "<role_arn>"}]}}}' --type='merge' where: <role_arn> Specifies the IAM role ARN to be updated. For example, arn:aws:iam::160... ..6956:role/oadprosa... ..8wlf . Verify that the secret object is updated with correct role ARN value by running the following command: USD oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d Example output [default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token Configure the DataProtectionApplication custom resource (CR) manifest file as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift 1 Specify the CloudStorage CR. Create the DataProtectionApplication CR by running the following command: USD oc create -f <dpa_manifest_file> Verify that the DataProtectionApplication CR is reconciled and the status is set to "True" by running the following command: USD oc get dpa -n openshift-adp -o yaml Example DataProtectionApplication apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... status: conditions: - lastTransitionTime: "2023-07-31T04:48:12Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled Verify that the BackupStorageLocation CR is in an available state by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example BackupStorageLocation NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true Additional resources Installing from OperatorHub using the web console . Backing up applications 4.10.1.4. Example: Backing up workload on OADP ROSA STS, with an optional cleanup 4.10.1.4.1. Performing a backup with OADP and ROSA STS The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) STS. Either Data Protection Application (DPA) configuration will work. Create a workload to back up by running the following commands: USD oc create namespace hello-world USD oc new-app -n hello-world --image=docker.io/openshift/hello-openshift Expose the route by running the following command: USD oc expose service/hello-openshift -n hello-world Check that the application is working by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! 
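Optionally, before you create the backup, you can review the resources in the namespace that the backup will include by running the following command: USD oc get all -n hello-world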
Back up the workload by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF Wait until the backup is completed and then run the following command: USD watch "oc -n openshift-adp get backup hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:20:44Z", "expiration": "2022-10-07T22:20:22Z", "formatVersion": "1.1.0", "phase": "Completed", "progress": { "itemsBackedUp": 58, "totalItems": 58 }, "startTimestamp": "2022-09-07T22:20:22Z", "version": 1 } Delete the demo workload by running the following command: USD oc delete ns hello-world Restore the workload from the backup by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF Wait for the Restore to finish by running the following command: USD watch "oc -n openshift-adp get restore hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:25:47Z", "phase": "Completed", "progress": { "itemsRestored": 38, "totalItems": 38 }, "startTimestamp": "2022-09-07T22:25:28Z", "warnings": 9 } Check that the workload is restored by running the following command: USD oc -n hello-world get pods Example output NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s Check the JSONPath by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Note For troubleshooting tips, see the OADP team's troubleshooting documentation . 4.10.1.4.2. Cleaning up a cluster after a backup with OADP and ROSA STS If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions. 
Procedure Delete the workload by running the following command: USD oc delete ns hello-world Delete the Data Protection Application (DPA) by running the following command: USD oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa Delete the cloud storage by running the following command: USD oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp Warning If this command hangs, you might need to delete the finalizer by running the following command: USD oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge If the Operator is no longer required, remove it by running the following command: USD oc -n openshift-adp delete subscription oadp-operator Remove the namespace from the Operator: USD oc delete ns openshift-adp If the backup and restore resources are no longer required, remove them from the cluster by running the following command: USD oc delete backups.velero.io hello-world To delete backup, restore and remote objects in AWS S3 run the following command: USD velero backup delete hello-world If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command: USD for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done Delete the AWS S3 bucket by running the following commands: USD aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive USD aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp Detach the policy from the role by running the following command: USD aws iam detach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn "USD{POLICY_ARN}" Delete the role by running the following command: USD aws iam delete-role --role-name "USD{ROLE_NAME}" 4.11. OADP and AWS STS 4.11.1. Backing up applications on AWS STS using OADP You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure AWS for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. You can install OADP on an AWS Security Token Service (STS) (AWS STS) cluster manually. Amazon AWS provides AWS STS as a web service that enables you to request temporary, limited-privilege credentials for users. You use STS to provide trusted users with temporary access to resources via API calls, your AWS console, or the AWS command line interface (CLI). Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API. This process is performed in the following two stages: Prepare AWS credentials. Install the OADP Operator and give it an IAM role. 4.11.1.1. Preparing AWS STS credentials for OADP An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation. Prepare the AWS credentials by using the following procedure. 
Procedure Define the cluster_name environment variable by running the following command: USD export CLUSTER_NAME= <AWS_cluster_name> 1 1 The variable can be set to any value. Retrieve all of the details of the cluster such as the AWS_ACCOUNT_ID, OIDC_ENDPOINT by running the following command: USD export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME="USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials" Create a temporary directory to store all of the files by running the following command: USD export SCRATCH="/tmp/USD{CLUSTER_NAME}/oadp" mkdir -p USD{SCRATCH} Display all of the gathered details by running the following command: USD echo "Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}" On the AWS account, create an IAM policy to allow access to AWS S3: Check to see if the policy exists by running the following commands: USD export POLICY_NAME="OadpVer1" 1 1 The variable can be set to any value. USD POLICY_ARN=USD(aws iam list-policies --query "Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}" --output text) Enter the following command to create the policy JSON file and then create the policy: Note If the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the if statement intentionally skips the policy creation. USD if [[ -z "USD{POLICY_ARN}" ]]; then cat << EOF > USD{SCRATCH}/policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts", "ec2:DescribeSnapshots", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME \ --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn \ --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp \ --output text) 1 fi 1 SCRATCH is a name for a temporary directory created for storing the files. 
View the policy ARN by running the following command: USD echo USD{POLICY_ARN} Create an IAM role trust policy for the cluster: Create the trust policy file by running the following command: USD cat <<EOF > USD{SCRATCH}/trust-policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_ENDPOINT}:sub": [ "system:serviceaccount:openshift-adp:openshift-adp-controller-manager", "system:serviceaccount:openshift-adp:velero"] } } }] } EOF Create an IAM role trust policy for the cluster by running the following command: USD ROLE_ARN=USD(aws iam create-role --role-name \ "USD{ROLE_NAME}" \ --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json \ --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text) View the role ARN by running the following command: USD echo USD{ROLE_ARN} Attach the IAM policy to the IAM role by running the following command: USD aws iam attach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn USD{POLICY_ARN} 4.11.1.1.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: # ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. 4.11.1.2. Installing the OADP Operator and providing the IAM role AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. This document describes how to install OpenShift API for Data Protection (OADP) on an AWS STS cluster manually. Important Restic and Kopia are not supported in the OADP AWS STS environment. Verify that the Restic and Kopia node agent is disabled. For backing up volumes, OADP on AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots. In an AWS cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported. The Data Mover feature is not currently supported in AWS STS clusters. You can use native AWS S3 tools for moving data. Prerequisites An OpenShift Container Platform AWS STS cluster with the required access and tokens. For instructions, see the procedure Preparing AWS credentials for OADP . 
If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN , for each cluster. Procedure Create an OpenShift Container Platform secret from your AWS token file by entering the following commands: Create the credentials file: USD cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF Create a namespace for OADP: USD oc create namespace openshift-adp Create the OpenShift Container Platform secret: USD oc -n openshift-adp create secret generic cloud-credentials \ --from-file=USD{SCRATCH}/credentials Note In OpenShift Container Platform versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO). In this workflow, you do not need to create the above secret, you only need to supply the role ARN during the installation of OLM-managed operators using the OpenShift Container Platform web console, for more information see Installing from OperatorHub using the web console . The preceding secret is created automatically by CCO. Install the OADP Operator: In the OpenShift Container Platform web console, browse to Operators OperatorHub . Search for the OADP Operator . In the role_ARN field, paste the role_arn that you created previously and click Install . Create AWS cloud storage using your AWS credentials by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF Check your application's storage default storage class by entering the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h Get the storage class by running the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h Note The following storage classes will work: gp3-csi gp2-csi gp3 gp2 If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration. 
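To check whether the persistent volumes in use are backed by CSI drivers and have a snapshot class available, you can list the CSI drivers and volume snapshot classes on the cluster. This is an optional check to help you decide whether to include the csi plugin: USD oc get csidrivers USD oc get volumesnapshotclasses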
Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored: If you are using only CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF 1 Set this field to false if you do not want to use image backup. If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: "true" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF 1 Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The credentialsFile field is the mounted location of the bucket credential on the pod. 4 The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket. 5 Use the profile name set in the AWS credentials file. 6 Specify region as your AWS region. This must be the same as the cluster region. You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications . Important If you use OADP 1.2, replace this configuration: nodeAgent: enable: false uploaderType: restic with the following configuration: restic: enable: false If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration. Additional resources Installing from OperatorHub using the web console Backing up applications 4.11.1.3. Backing up workload on OADP AWS STS, with an optional cleanup 4.11.1.3.1. Performing a backup with OADP and AWS STS The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) (AWS STS). Either Data Protection Application (DPA) configuration will work. 
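Before taking the first backup, you can confirm that the DPA created in the previous section reconciled and that its backup storage location reports the Available phase. The following is a minimal sketch, assuming CLUSTER_NAME is still set and the DPA is named as in the examples above:
#!/usr/bin/env bash
# Sketch: check that the DataProtectionApplication reconciled and that the
# backup storage location derived from it reports the Available phase.
set -euo pipefail

oc -n openshift-adp get dpa "${CLUSTER_NAME}-dpa" \
  -o jsonpath='{.status.conditions[?(@.type=="Reconciled")].status}{"\n"}'

oc -n openshift-adp get backupstoragelocations.velero.io \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,DEFAULT:.spec.default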
Create a workload to back up by running the following commands: USD oc create namespace hello-world USD oc new-app -n hello-world --image=docker.io/openshift/hello-openshift Expose the route by running the following command: USD oc expose service/hello-openshift -n hello-world Check that the application is working by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Back up the workload by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF Wait until the backup has completed and then run the following command: USD watch "oc -n openshift-adp get backup hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:20:44Z", "expiration": "2022-10-07T22:20:22Z", "formatVersion": "1.1.0", "phase": "Completed", "progress": { "itemsBackedUp": 58, "totalItems": 58 }, "startTimestamp": "2022-09-07T22:20:22Z", "version": 1 } Delete the demo workload by running the following command: USD oc delete ns hello-world Restore the workload from the backup by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF Wait for the Restore to finish by running the following command: USD watch "oc -n openshift-adp get restore hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:25:47Z", "phase": "Completed", "progress": { "itemsRestored": 38, "totalItems": 38 }, "startTimestamp": "2022-09-07T22:25:28Z", "warnings": 9 } Check that the workload is restored by running the following command: USD oc -n hello-world get pods Example output NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s Check the JSONPath by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Note For troubleshooting tips, see the OADP team's troubleshooting documentation . 4.11.1.3.2. Cleaning up a cluster after a backup with OADP and AWS STS If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions. 
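Before you start the following cleanup, it can be useful to list what the procedure is going to remove. The following is a minimal sketch, assuming CLUSTER_NAME is still set from the setup steps:
#!/usr/bin/env bash
# Sketch: inventory of the OADP and AWS objects that the cleanup procedure
# below removes. Assumes CLUSTER_NAME is still set from the setup steps.

oc -n openshift-adp get dpa,cloudstorage,backups.velero.io,restores.velero.io
oc get crds | grep velero
aws s3 ls "s3://${CLUSTER_NAME}-oadp" --recursive | head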
Procedure Delete the workload by running the following command: USD oc delete ns hello-world Delete the Data Protection Application (DPA) by running the following command: USD oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa Delete the cloud storage by running the following command: USD oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp Important If this command hangs, you might need to delete the finalizer by running the following command: USD oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge If the Operator is no longer required, remove it by running the following command: USD oc -n openshift-adp delete subscription oadp-operator Remove the namespace from the Operator by running the following command: USD oc delete ns openshift-adp If the backup and restore resources are no longer required, remove them from the cluster by running the following command: USD oc delete backups.velero.io hello-world To delete backup, restore and remote objects in AWS S3, run the following command: USD velero backup delete hello-world If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command: USD for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done Delete the AWS S3 bucket by running the following commands: USD aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive USD aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp Detach the policy from the role by running the following command: USD aws iam detach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn "USD{POLICY_ARN}" Delete the role by running the following command: USD aws iam delete-role --role-name "USD{ROLE_NAME}" 4.12. OADP and 3scale 4.12.1. Backing up and restoring 3scale by using OADP With Red Hat 3scale API Management (APIM), you can manage your APIs for internal or external users. Share, secure, distribute, control, and monetize your APIs on an infrastructure platform built with performance, customer control, and future growth in mind. You can deploy 3scale components on-premise, in the cloud, as a managed service, or in any combination based on your requirement. Note In this example, the non-service affecting approach is used to back up and restore 3scale on-cluster storage by using the OpenShift API for Data Protection (OADP) Operator. Additionally, ensure that you are restoring 3scale on the same cluster where it was backed up from. If you want to restore 3scale on a different cluster, ensure that both clusters are using the same custom domain. Prerequisites You installed and configured Red Hat 3scale. For more information, see Red Hat 3scale API Management . 4.12.1.1. Creating the Data Protection Application You can create a Data Protection Application (DPA) custom resource (CR) for 3scale. For more information on DPA, see "Installing the Data Protection Application". 
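Before creating the DPA and taking any backups, you can optionally confirm that the 3scale installation is healthy. The following is a minimal sketch, assuming 3scale is installed in the threescale namespace as in this example:
#!/usr/bin/env bash
# Sketch: confirm the 3scale installation is healthy before backing it up.
# Assumes 3scale is installed in the threescale namespace.

# All 3scale pods should be Running or Completed.
oc -n threescale get pods

# The APIManager custom resource that manages the installation should exist.
oc -n threescale get apimanagers.apps.3scale.net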
Procedure Create a YAML file with the following configuration: Example dpa.yaml file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa_sample namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift - aws - csi resourceTimeout: 10m nodeAgent: enable: true uploaderType: kopia backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 1 prefix: <prefix> 2 config: region: <region> 3 profile: "default" s3ForcePathStyle: "true" s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 1 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 2 Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes. 3 Specify a region for backup storage location. 4 Specify the URL of the object store that you are using to store backups. Create the DPA CR by running the following command: USD oc create -f dpa.yaml steps Back up the 3scale Operator. Additional resources Installing the Data Protection Application 4.12.1.2. Backing up the 3scale Operator You can back up the Operator resources, and Secret and APIManager custom resources (CR). For more information, see "Creating a Backup CR". Prerequisites You created the Data Protection Application (DPA). Procedure Back up the Operator resources, such as operatorgroup , namespaces , and subscriptions , by creating a YAML file with the following configuration: Example backup.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: operator-install-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale 1 includedResources: - operatorgroups - subscriptions - namespaces itemOperationTimeout: 1h0m0s snapshotMoveData: false ttl: 720h0m0s 1 Namespace where the 3scale Operator is installed. Note You can also back up and restore ReplicationControllers , Deployment , and Pod objects to ensure that all manually set environments are backed up and restored. This does not affect the flow of restoration. Create a backup CR by running the following command: USD oc create -f backup.yaml Back up the Secret CR by creating a YAML file with the following configuration: Example backup-secret.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-secrets namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - secrets itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s Create the Secret CR by running the following command: USD oc create -f backup-secret.yaml Back up the APIManager CR by creating a YAML file with the following configuration: Example backup-apimanager.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-apim namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - apimanagers itemOperationTimeout: 1h0m0s snapshotMoveData: false snapshotVolumes: false storageLocation: ts-dpa-1 ttl: 720h0m0s volumeSnapshotLocations: - ts-dpa-1 Create the APIManager CR by running the following command: USD oc create -f backup-apimanager.yaml steps Back up the mysql database. Additional resources Creating a Backup CR 4.12.1.3. 
Backing up the mysql database You can back up the mysql database by creating and attaching a persistent volume claim (PVC) to include the dumped data in the specified path. Prerequisites You have backed up the 3scale operator. Procedure Create a YAML file with the following configuration for adding an additional PVC: Example ts_pvc.yaml file kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-claim namespace: threescale spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3-csi volumeMode: Filesystem Create the additional PVC by running the following command: USD oc create -f ts_pvc.yml Attach the PVC to the system database pod by editing the system database deployment to use the mysql dump: USD oc edit deployment system-mysql -n threescale volumeMounts: - name: example-claim mountPath: /var/lib/mysqldump/data - name: mysql-storage mountPath: /var/lib/mysql/data - name: mysql-extra-conf mountPath: /etc/my-extra.d - name: mysql-main-conf mountPath: /etc/my-extra ... serviceAccount: amp volumes: - name: example-claim persistentVolumeClaim: claimName: example-claim 1 ... 1 The PVC that contains the dumped data. Create a YAML file with following configuration to back up the mysql database: Example mysql.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: mysql-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true hooks: resources: - name: dumpdb pre: - exec: command: - /bin/sh - -c - mysqldump -u USDMYSQL_USER --password=USDMYSQL_PASSWORD system --no-tablespaces > /var/lib/mysqldump/data/dump.sql 1 container: system-mysql onError: Fail timeout: 5m includedNamespaces: 2 - threescale includedResources: - deployment - pods - replicationControllers - persistentvolumeclaims - persistentvolumes itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component_element: mysql snapshotMoveData: false ttl: 720h0m0s 1 A directory where the data is backed up. 2 Resources to back up. Back up the mysql database by running the following command: USD oc create -f mysql.yaml Verification Verify that the mysql backup is completed by running the following command: USD oc get backups.velero.io mysql-backup Example output NAME STATUS CREATED NAMESPACE POD VOLUME UPLOADER TYPE STORAGE LOCATION AGE mysql-backup-4g7qn Completed 30s threescale system-mysql-2-9pr44 example-claim kopia ts-dpa-1 30s mysql-backup-smh85 Completed 23s threescale system-mysql-2-9pr44 mysql-storage kopia ts-dpa-1 30s steps Back up the back-end Redis database. 4.12.1.4. Backing up the back-end Redis database You can back up the Redis database by adding the required annotations and by listing which resources to back up using the includedResources parameter. Prerequisites You backed up the 3scale Operator. You backed up the mysql database. The Redis queues have been drained before performing the backup. 
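You can check these prerequisites with a short script before editing the Redis deployment. The following is a minimal sketch, assuming the backup names and mount path used in the previous sections:
#!/usr/bin/env bash
# Sketch: confirm the earlier backups completed and the mysql dump landed in
# the additional PVC before backing up Redis. Assumes the backup names and
# the /var/lib/mysqldump/data mount path from the previous sections.

for backup in operator-install-backup operator-resources-secrets \
    operator-resources-apim mysql-backup; do
  phase=$(oc -n openshift-adp get backups.velero.io "${backup}" -o jsonpath='{.status.phase}')
  echo "${backup}: ${phase:-<not found>}"
done

# The pre-backup hook should have written the dump into the example-claim PVC.
oc -n threescale exec deploy/system-mysql -- ls -lh /var/lib/mysqldump/data/dump.sql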
Procedure Edit the annotations on the backend-redis deployment by running the following command: USD oc edit deployment backend-redis -n threescale Add the following annotations: annotations: post.hook.backup.velero.io/command: >- ["/bin/bash", "-c", "redis-cli CONFIG SET auto-aof-rewrite-percentage 100"] pre.hook.backup.velero.io/command: >- ["/bin/bash", "-c", "redis-cli CONFIG SET auto-aof-rewrite-percentage 0"] Create a YAML file with the following configuration to back up the Redis database: Example redis-backup.yaml file apiVersion: velero.io/v1 kind: Backup metadata: name: redis-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true includedNamespaces: - threescale includedResources: - deployment - pods - replicationcontrollers - persistentvolumes - persistentvolumeclaims itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component: backend threescale_component_element: redis snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s Back up the Redis database by running the following command: USD oc create -f redis-backup.yaml Verification Verify that the Redis backup is completed by running the following command: USD oc get backups.velero.io steps Restore the Secrets and APIManager CRs. 4.12.1.5. Restoring the secrets and APIManager You can restore the Secrets and APIManager by using the following procedure. Prerequisites You backed up the 3scale Operator. You backed up mysql and Redis databases. You are restoring the database on the same cluster where it was backed up. If it is on a different cluster, install and configure OADP with nodeAgent enabled on the destination cluster as it was on the source cluster. Procedure Delete the 3scale Operator custom resource definitions (CRDs) along with the threescale namespace by running the following command: USD oc delete project threescale Example output "threescale" project deleted successfully Create a YAML file with the following configuration to restore the 3scale Operator: Example restore.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: operator-installation-restore namespace: openshift-adp spec: backupName: operator-install-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s Restore the 3scale Operator by running the following command: USD oc create -f restore.yaml Manually create the s3-credentials Secret object by running the following command: USD oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: s3-credentials namespace: threescale stringData: AWS_ACCESS_KEY_ID: <ID_123456> 1 AWS_SECRET_ACCESS_KEY: <ID_98765544> 2 AWS_BUCKET: <mybucket.example.com> 3 AWS_REGION: <us-east-1> 4 type: Opaque EOF 1 Replace <ID_123456> with your AWS access key ID. 2 Replace <ID_98765544> with your AWS secret access key. 3 Replace <mybucket.example.com> with your target bucket name. 4 Replace <us-east-1> with the AWS region of your bucket.
Scale down the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale Create a YAML file with the following configuration to restore the Secrets: Example restore-secret.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-secrets namespace: openshift-adp spec: backupName: operator-resources-secrets excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s Restore the Secrets by running the following command: USD oc create -f restore-secrets.yaml Create a YAML file with the following configuration to restore APIManager: Example restore-apimanager.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-apim namespace: openshift-adp spec: backupName: operator-resources-apim excludedResources: 1 - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s 1 The resources that you do not want to restore. Restore the APIManager by running the following command: USD oc create -f restore-apimanager.yaml Scale up the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale steps Restore the mysql database. 4.12.1.6. Restoring the mysql database Restoring the mysql database re-creates the following resources: The Pod , ReplicationController , and Deployment objects. The additional persistent volumes (PVs) and associated persistent volume claims (PVCs). The mysql dump, which the example-claim PVC contains. Warning Do not delete the default PV and PVC associated with the database. If you do, your backups are deleted. Prerequisites You restored the Secret and APIManager custom resources (CR). 
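Before scaling anything down, you can confirm that the restores from the previous section completed and that the Operator deployment exists again. The following is a minimal sketch, assuming the restore names used above:
#!/usr/bin/env bash
# Sketch: confirm the Operator, Secret, and APIManager restores completed and
# that the 3scale Operator deployment is back in the threescale namespace.

for restore in operator-installation-restore operator-resources-secrets \
    operator-resources-apim; do
  phase=$(oc -n openshift-adp get restores.velero.io "${restore}" -o jsonpath='{.status.phase}')
  echo "${restore}: ${phase:-<not found>}"
done

oc -n threescale get deployment threescale-operator-controller-manager-v2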
Procedure Scale down the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale Example output: deployment.apps/threescale-operator-controller-manager-v2 scaled Create the following script to scale down the 3scale operator: USD vi ./scaledowndeployment.sh Example output: for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do oc scale deployment/USDdeployment --replicas=0 -n threescale done Scale down all the deployment 3scale components by running the following script: USD ./scaledowndeployment.sh Example output: deployment.apps.openshift.io/apicast-production scaled deployment.apps.openshift.io/apicast-staging scaled deployment.apps.openshift.io/backend-cron scaled deployment.apps.openshift.io/backend-listener scaled deployment.apps.openshift.io/backend-redis scaled deployment.apps.openshift.io/backend-worker scaled deployment.apps.openshift.io/system-app scaled deployment.apps.openshift.io/system-memcache scaled deployment.apps.openshift.io/system-mysql scaled deployment.apps.openshift.io/system-redis scaled deployment.apps.openshift.io/system-searchd scaled deployment.apps.openshift.io/system-sidekiq scaled deployment.apps.openshift.io/zync scaled deployment.apps.openshift.io/zync-database scaled deployment.apps.openshift.io/zync-que scaled Delete the system-mysql Deployment object by running the following command: USD oc delete deployment system-mysql -n threescale Example output: Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io "system-mysql" deleted Create the following YAML file to restore the mysql database: Example restore-mysql.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: restore-mysql namespace: openshift-adp spec: backupName: mysql-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io - resticrepositories.velero.io hooks: resources: - name: restoreDB postHooks: - exec: command: - /bin/sh - '-c' - > sleep 30 mysql -h 127.0.0.1 -D system -u root --password=USDMYSQL_ROOT_PASSWORD < /var/lib/mysqldump/data/dump.sql 1 container: system-mysql execTimeout: 80s onError: Fail waitTimeout: 5m itemOperationTimeout: 1h0m0s restorePVs: true 1 A path where the data is restored from. 
Restore the mysql database by running the following command: USD oc create -f restore-mysql.yaml Verification Verify that the PodVolumeRestore restore is completed by running the following command: USD oc get podvolumerestores.velero.io -n openshift-adp Example output: NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m Verify that the additional PVC has been restored by running the following command: USD oc get pvc -n threescale Example output: NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m steps Restore the back-end Redis database. 4.12.1.7. Restoring the back-end Redis database You can restore the back-end Redis database by deleting the deployment and specifying which resources you do not want to restore. Prerequisites You restored the Secret and APIManager custom resources. You restored the mysql database. Procedure Delete the backend-redis deployment by running the following command: USD oc delete deployment backend-redis -n threescale Example output: Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io "backend-redis" deleted Create a YAML file with the following configuration to restore the Redis database: Example restore-backend.yaml file apiVersion: velero.io/v1 kind: Restore metadata: name: restore-backend namespace: openshift-adp spec: backupName: redis-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 1h0m0s restorePVs: true Restore the Redis database by running the following command: USD oc create -f restore-backend.yaml Verification Verify that the PodVolumeRestore restore is completed by running the following command: USD oc get podvolumerestores.velero.io -n openshift-adp Example output: NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m steps Scale the 3scale Operator and deployment. 4.12.1.8. Scaling up the 3scale Operator and deployment You can scale up the 3scale Operator and any deployment that was manually scaled down. After a few minutes, 3scale installation should be fully functional, and its state should match the backed-up state. Prerequisites Ensure that there are no scaled up deployments or no extra pods running. There might be some system-mysql or backend-redis pods running detached from deployments after restoration, which can be removed after the restoration is successful. 
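The following procedure runs a scaledeployment.sh script that is not shown in this document. A minimal sketch is provided here, mirroring the scaledowndeployment.sh script created earlier and assuming one replica for each deployment; adjust the replica counts to match your installation:
#!/usr/bin/env bash
# Sketch of scaledeployment.sh: scale the 3scale deployments back up, using
# the same deployment list as scaledowndeployment.sh. Assumes one replica per
# deployment; adjust if your installation is sized differently.

for deployment in apicast-production apicast-staging backend-cron \
    backend-listener backend-redis backend-worker system-app system-memcache \
    system-mysql system-redis system-searchd system-sidekiq zync \
    zync-database zync-que; do
  oc scale deployment/"${deployment}" --replicas=1 -n threescale
done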
Procedure Scale up the 3scale Operator by running the following command: USD oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale Ensure that the 3scale Operator was deployed by running the following command: USD oc get deployment -n threescale Scale up the deployments by executing the following script: USD ./scaledeployment.sh Get the 3scale-admin route to log in to the 3scale UI by running the following command: USD oc get routes -n threescale Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD backend backend-3scale.apps.custom-cluster-name.openshift.com backend-listener http edge/Allow None zync-3scale-api-b4l4d api-3scale-apicast-production.apps.custom-cluster-name.openshift.com apicast-production gateway edge/Redirect None zync-3scale-api-b6sns api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com apicast-staging gateway edge/Redirect None zync-3scale-master-7sc4j master.apps.custom-cluster-name.openshift.com system-master http edge/Redirect None zync-3scale-provider-7r2nm 3scale-admin.apps.custom-cluster-name.openshift.com system-provider http edge/Redirect None zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com system-developer http edge/Redirect None In this example, 3scale-admin.apps.custom-cluster-name.openshift.com is the 3scale-admin URL. Use the URL from this output to log in to the 3scale Operator as an administrator. You can verify that the existing data is available before trying to create a backup. 4.13. OADP Data Mover 4.13.1. About the OADP Data Mover OpenShift API for Data Protection (OADP) includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and write to the unified repository. OADP supports CSI snapshots on the following: Red Hat OpenShift Data Foundation Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API 4.13.1.1. Data Mover support The OADP built-in Data Mover, which was introduced in OADP 1.3 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads. Supported The Data Mover backups taken with OADP 1.3 can be restored using OADP 1.3, 1.4, and later. This is supported. Not supported Backups taken with OADP 1.1 or OADP 1.2 using the Data Mover feature cannot be restored using OADP 1.3 and later. Therefore, it is not supported. OADP 1.1 and OADP 1.2 are no longer supported. The DataMover feature in OADP 1.1 or OADP 1.2 was a Technology Preview and was never supported. DataMover backups taken with OADP 1.1 or OADP 1.2 cannot be restored on later versions of OADP. 4.13.1.2. Enabling the built-in Data Mover To enable the built-in Data Mover, you must include the CSI plugin and enable the node agent in the DataProtectionApplication custom resource (CR). The node agent is a Kubernetes daemonset that hosts data movement modules. These include the Data Mover controller, uploader, and the repository. 
Example DataProtectionApplication manifest apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI # ... 1 The flag to enable the node agent. 2 The type of uploader. The possible values are restic or kopia . The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field. 3 The CSI plugin included in the list of default plugins. 4 In OADP 1.3.1 and later, set to true if you use Data Mover only for volumes that opt out of fs-backup . Set to false if you use Data Mover by default for volumes. 4.13.1.3. Built-in Data Mover controller and custom resource definitions (CRDs) The built-in Data Mover feature introduces three new API objects defined as CRDs for managing backup and restore: DataDownload : Represents a data download of a volume snapshot. The CSI plugin creates one DataDownload object per volume to be restored. The DataDownload CR includes information about the target volume, the specified Data Mover, the progress of the current data download, the specified backup repository, and the result of the current data download after the process is complete. DataUpload : Represents a data upload of a volume snapshot. The CSI plugin creates one DataUpload object per CSI snapshot. The DataUpload CR includes information about the specified snapshot, the specified Data Mover, the specified backup repository, the progress of the current data upload, and the result of the current data upload after the process is complete. BackupRepository : Represents and manages the lifecycle of the backup repositories. OADP creates a backup repository per namespace when the first CSI snapshot backup or restore for a namespace is requested. 4.13.1.4. About incremental back up support OADP supports incremental backups of block and Filesystem persistent volumes for both containerized, and OpenShift Virtualization workloads. The following table summarizes the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover: Table 4.4. OADP backup support matrix for containerized workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem S [1] , I [2] S [1] , I [2] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Table 4.5. OADP backup support matrix for OpenShift Virtualization workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem N [3] N [3] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Backup supported Incremental backup supported Not supported Note The CSI Data Mover backups use Kopia regardless of uploaderType . 4.13.2. Backing up and restoring CSI snapshots data movement You can back up and restore persistent volumes by using the OADP 1.3 Data Mover. 4.13.2.1. Backing up persistent volumes with CSI snapshots You can use the OADP Data Mover to back up Container Storage Interface (CSI) volume snapshots to a remote object store. Prerequisites You have access to the cluster with the cluster-admin role. You have installed the OADP Operator. You have included the CSI plugin and enabled the node agent in the DataProtectionApplication custom resource (CR). You have an application with persistent volumes running in a separate namespace. 
You have added the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR. Procedure Create a YAML file for the Backup object, as in the following example: Example Backup CR kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s 3 volumeSnapshotLocations: - dpa-sample-1 # ... 1 Set to true if you use Data Mover only for volumes that opt out of fs-backup . Set to false if you use Data Mover by default for volumes. 2 Set to true to enable movement of CSI snapshots to remote object storage. 3 The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. But storing this data consumes more space in the target backup locations. An additional storage is consumed with frequent backups, which are created even before other unexpired completed backups might have timed out. Note If you format the volume by using XFS filesystem and the volume is at 100% capacity, the backup fails with a no space left on device error. For example: Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ \ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ \ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: \ no space left on device In this scenario, consider resizing the volume or using a different filesystem type, for example, ext4 , so that the backup completes successfully. Apply the manifest: USD oc create -f backup.yaml A DataUpload CR is created after the snapshot creation is complete. Verification Verify that the snapshot data is successfully transferred to the remote object store by monitoring the status.phase field of the DataUpload CR. Possible values are In Progress , Completed , Failed , or Canceled . The object store is configured in the backupLocations stanza of the DataProtectionApplication CR. 
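As an alternative to rerunning the commands below by hand, you can poll until every DataUpload object reaches a terminal phase. The following is a minimal sketch; terminal phases are Completed, Failed, and Canceled:
#!/usr/bin/env bash
# Sketch: poll DataUpload objects in openshift-adp until none remain in a
# non-terminal phase, then stop. Prints the current phases on each pass.

for i in $(seq 1 90); do
  oc -n openshift-adp get datauploads \
    -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
  pending=$(oc -n openshift-adp get datauploads \
    -o jsonpath='{range .items[*]}{.status.phase}{"\n"}{end}' \
    | grep -cvE 'Completed|Failed|Canceled' || true)
  [ "${pending}" -eq 0 ] && break
  sleep 10
done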
Run the following command to get a list of all DataUpload objects: USD oc get datauploads -A Example output NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal Check the value of the status.phase field of the specific DataUpload object by running the following command: USD oc get datauploads <dataupload_name> -o yaml Example output apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: "" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: "2023-11-02T16:57:02Z" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: "2023-11-02T16:56:22Z" 1 Indicates that snapshot data is successfully transferred to the remote object store. 4.13.2.2. Restoring CSI volume snapshots You can restore a volume snapshot by creating a Restore CR. Note You cannot restore Volsync backups from OADP 1.2 with the OAPD 1.3 built-in Data Mover. It is recommended to do a file system backup of all of your workloads with Restic prior to upgrading to OADP 1.3. Prerequisites You have access to the cluster with the cluster-admin role. You have an OADP Backup CR from which to restore the data. Procedure Create a YAML file for the Restore CR, as in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup> # ... Apply the manifest: USD oc create -f restore.yaml A DataDownload CR is created when the restore starts. Verification You can monitor the status of the restore process by checking the status.phase field of the DataDownload CR. Possible values are In Progress , Completed , Failed , or Canceled . To get a list of all DataDownload objects, run the following command: USD oc get datadownloads -A Example output NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal Enter the following command to check the value of the status.phase field of the specific DataDownload object: USD oc get datadownloads <datadownload_name> -o yaml Example output apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: "" pvc: mysql status: completionTimestamp: "2023-11-02T17:01:24Z" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: "2023-11-02T17:00:52Z" 1 Indicates that the CSI snapshot data is successfully restored. 4.13.2.3. 
Deletion policy for OADP 1.3 The deletion policy determines rules for removing data from a system, specifying when and how deletion occurs based on factors such as retention periods, data sensitivity, and compliance requirements. It manages data removal effectively while meeting regulations and preserving valuable information. 4.13.2.3.1. Deletion policy guidelines for OADP 1.3 Review the following deletion policy guidelines for the OADP 1.3: In OADP 1.3.x, when using any type of backup and restore methods, you can set the deletionPolicy field to Retain or Delete in the VolumeSnapshotClass custom resource (CR). 4.13.3. Overriding Kopia hashing, encryption, and splitter algorithms You can override the default values of Kopia hashing, encryption, and splitter algorithms by using specific environment variables in the Data Protection Application (DPA). 4.13.3.1. Configuring the DPA to override Kopia hashing, encryption, and splitter algorithms You can use an OpenShift API for Data Protection (OADP) option to override the default Kopia algorithms for hashing, encryption, and splitter to improve Kopia performance or to compare performance metrics. You can set the following environment variables in the spec.configuration.velero.podConfig.env section of the DPA: KOPIA_HASHING_ALGORITHM KOPIA_ENCRYPTION_ALGORITHM KOPIA_SPLITTER_ALGORITHM Prerequisites You have installed the OADP Operator. You have created the secret by using the credentials provided by the cloud provider. Note The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) apply only during the initial Kopia repository creation, and cannot be changed later. To use different Kopia algorithms, ensure that the object storage does not contain any Kopia repositories of backups. Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration. Procedure Configure the DPA with the environment variables for hashing, encryption, and splitter as shown in the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6 1 Enable the nodeAgent . 2 Specify the uploaderType as kopia . 3 Include the csi plugin. 4 Specify a hashing algorithm. For example, BLAKE3-256 . 5 Specify an encryption algorithm. For example, CHACHA20-POLY1305-HMAC-SHA256 . 6 Specify a splitter algorithm. For example, DYNAMIC-8M-RABINKARP . 4.13.3.2. Use case for overriding Kopia hashing, encryption, and splitter algorithms The use case example demonstrates taking a backup of an application by using Kopia environment variables for hashing, encryption, and splitter. You store the backup in an AWS S3 bucket. You then verify the environment variables by connecting to the Kopia repository. Prerequisites You have installed the OADP Operator. You have an AWS S3 bucket configured as the backup storage location. You have created the secret by using the credentials provided by the cloud provider. You have installed the Kopia client. You have an application with persistent volumes running in a separate namespace. 
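Because the Kopia algorithms apply only when the repository is first created, it is worth confirming that the configured object storage prefix does not already contain a Kopia repository before you configure the DPA. The following is a minimal sketch, assuming an AWS S3 bucket; replace <bucket_name> and the prefix with the values from your backup storage location:
#!/usr/bin/env bash
# Sketch: list objects under the velero/kopia prefix of the bucket.
# No output means no Kopia repository exists there yet, so the algorithms
# configured in the DPA will take effect. <bucket_name> is a placeholder.

aws s3 ls "s3://<bucket_name>/velero/kopia/" --recursive | head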
Procedure Configure the Data Protection Application (DPA) as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8 1 Specify a name for the DPA. 2 Specify the region for the backup storage location. 3 Specify the name of the default Secret object. 4 Specify the AWS S3 bucket name. 5 Include the csi plugin. 6 Specify the hashing algorithm as BLAKE3-256 . 7 Specify the encryption algorithm as CHACHA20-POLY1305-HMAC-SHA256 . 8 Specify the splitter algorithm as DYNAMIC-8M-RABINKARP . Create the DPA by running the following command: USD oc create -f <dpa_file_name> 1 1 Specify the file name of the DPA you configured. Verify that the DPA has reconciled by running the following command: USD oc get dpa -o yaml Create a backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. Create a backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. Verify that the backup completed by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Verification Connect to the Kopia repository by running the following command: USD kopia repository connect s3 \ --bucket=<bucket_name> \ 1 --prefix=velero/kopia/<application_namespace> \ 2 --password=static-passw0rd \ 3 --access-key="<aws_s3_access_key>" \ 4 --secret-access-key="<aws_s3_secret_access_key>" \ 5 1 Specify the AWS S3 bucket name. 2 Specify the namespace for the application. 3 This is the Kopia password to connect to the repository. 4 Specify the AWS S3 access key. 5 Specify the AWS S3 storage provider secret access key. Note If you are using a storage provider other than AWS S3, you will need to add --endpoint , the bucket endpoint URL parameter, to the command. Verify that Kopia uses the environment variables that are configured in the DPA for the backup by running the following command: USD kopia repository status Example output Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> # ... Storage type: s3 Storage capacity: unbounded Storage config: { "bucket": <bucket_name>, "prefix": "velero/kopia/<application_namespace>/", "endpoint": "s3.amazonaws.com", "accessKeyID": <access_key>, "secretAccessKey": "****************************************", "sessionToken": "" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3 # ... 4.13.3.3. Benchmarking Kopia hashing, encryption, and splitter algorithms You can run Kopia commands to benchmark the hashing, encryption, and splitter algorithms. 
Based on the benchmarking results, you can select the most suitable algorithm for your workload. In this procedure, you run the Kopia benchmarking commands from a pod on the cluster. The benchmarking results can vary depending on CPU speed, available RAM, disk speed, current I/O load, and so on. Prerequisites You have installed the OADP Operator. You have an application with persistent volumes running in a separate namespace. You have run a backup of the application with Container Storage Interface (CSI) snapshots. Note The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) apply only during the initial Kopia repository creation, and cannot be changed later. To use different Kopia algorithms, ensure that the object storage does not contain any Kopia repositories of backups. Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration. Procedure Configure the must-gather pod as shown in the following example. Make sure you are using the oadp-mustgather image for OADP version 1.3 and later. Example pod configuration apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: ["sleep"] args: ["infinity"] Note The Kopia client is available in the oadp-mustgather image. Create the pod by running the following command: USD oc apply -f <pod_config_file_name> 1 1 Specify the name of the YAML file for the pod configuration. Verify that the Security Context Constraints (SCC) on the pod is anyuid , so that Kopia can connect to the repository. USD oc describe pod/oadp-mustgather-pod | grep scc Example output openshift.io/scc: anyuid Connect to the pod via SSH by running the following command: USD oc -n openshift-adp rsh pod/oadp-mustgather-pod Connect to the Kopia repository by running the following command: sh-5.1# kopia repository connect s3 \ --bucket=<bucket_name> \ 1 --prefix=velero/kopia/<application_namespace> \ 2 --password=static-passw0rd \ 3 --access-key="<access_key>" \ 4 --secret-access-key="<secret_access_key>" \ 5 --endpoint=<bucket_endpoint> \ 6 1 Specify the object storage provider bucket name. 2 Specify the namespace for the application. 3 This is the Kopia password to connect to the repository. 4 Specify the object storage provider access key. 5 Specify the object storage provider secret access key. 6 Specify the bucket endpoint. You do not need to specify the bucket endpoint, if you are using AWS S3 as the storage provider. Note This is an example command. The command can vary based on the object storage provider. 
To benchmark the hashing algorithm, run the following command: sh-5.1# kopia benchmark hashing Example output Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256 To benchmark the encryption algorithm, run the following command: sh-5.1# kopia benchmark encryption Example output Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256 To benchmark the splitter algorithm, run the following command: sh-5.1# kopia benchmark splitter Example output splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 # ... FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # ... 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 4.14. Troubleshooting You can debug Velero custom resources (CRs) by using the OpenShift CLI tool or the Velero CLI tool . The Velero CLI tool provides more detailed logs and information. You can check installation issues , backup and restore CR issues , and Restic issues . 
You can collect logs and CR information by using the must-gather tool . You can obtain the Velero CLI tool by: Downloading the Velero CLI tool Accessing the Velero binary in the Velero deployment in the cluster 4.14.1. Downloading the Velero CLI tool You can download and install the Velero CLI tool by following the instructions on the Velero documentation page . The page includes instructions for: macOS by using Homebrew GitHub Windows by using Chocolatey Prerequisites You have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. You have installed kubectl locally. Procedure Open a browser and navigate to "Install the CLI" on the Velero website . Follow the appropriate procedure for macOS, GitHub, or Windows. Download the Velero version appropriate for your version of OADP and OpenShift Container Platform. 4.14.1.1. OADP-Velero-OpenShift Container Platform version relationship OADP version Velero version OpenShift Container Platform version 1.3.0 1.12 4.12-4.15 1.3.1 1.12 4.12-4.15 1.3.2 1.12 4.12-4.15 1.3.3 1.12 4.12-4.15 1.3.4 1.12 4.12-4.15 1.3.5 1.12 4.12-4.15 1.4.0 1.14 4.14-4.18 1.4.1 1.14 4.14-4.18 1.4.2 1.14 4.14-4.18 1.4.3 1.14 4.14-4.18 4.14.2. Accessing the Velero binary in the Velero deployment in the cluster You can use a shell command to access the Velero binary in the Velero deployment in the cluster. Prerequisites Your DataProtectionApplication custom resource has a status of Reconcile complete . Procedure Enter the following command to set the needed alias: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' 4.14.3. Debugging Velero resources with the OpenShift CLI tool You can debug a failed backup or restore by checking Velero custom resources (CRs) and the Velero pod log with the OpenShift CLI tool. Velero CRs Use the oc describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc describe <velero_cr> <cr_name> Velero pod logs Use the oc logs command to retrieve the Velero pod logs: USD oc logs pod/<velero> Velero pod debug logs You can specify the Velero log level in the DataProtectionApplication resource as shown in the following example. Note This option is available starting from OADP 1.0.3. apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning The following logLevel values are available: trace debug info warning error fatal panic It is recommended to use the info logLevel value for most logs. 4.14.4. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. 
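A few commonly used checks with the in-cluster Velero CLI are sketched below; the Syntax section that follows shows the underlying oc exec form. A shell function is used here instead of the alias from the earlier section because aliases are not expanded in scripts:
#!/usr/bin/env bash
# Sketch: wrapper around the Velero binary running in the Velero deployment,
# plus a few common read-only checks.
velero() {
  oc -n openshift-adp exec deployment/velero -c velero -- ./velero "$@"
}

velero version              # client and server versions
velero backup get           # list backups and their phases
velero backup-location get  # confirm the backup storage location is Available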
Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql The following types of restore errors and warnings are shown in the output of a velero describe request: Velero : A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on Cluster : A list of messages related to backing up or restoring cluster-scoped resources Namespaces : A list of list of messages related to backing up or restoring resources stored in namespaces One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed . Warnings do not lead to a change in the completion status. Important For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster. If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads . If none of these are failed or still running, then volume data might have been fully restored. Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 4.14.5. Pods crash or restart due to lack of memory or CPU If a Velero or Restic pod crashes due to a lack of memory or CPU, you can set specific resource requests for either of those resources. Additional resources CPU and memory requirements 4.14.5.1. Setting resource requests for a Velero pod You can use the configuration.velero.podConfig.resourceAllocations specification field in the oadp_v1alpha1_dpa.yaml file to set specific resource requests for a Velero pod. Procedure Set the cpu and memory resource requests in the YAML file: Example Velero file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi 1 The resourceAllocations listed are for average usage. 4.14.5.2. 
Setting resource requests for a Restic pod You can use the configuration.restic.podConfig.resourceAllocations specification field to set specific resource requests for a Restic pod. Procedure Set the cpu and memory resource requests in the YAML file: Example Restic file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi 1 The resourceAllocations listed are for average usage. Important The values for the resource request fields must follow the same format as Kubernetes resource requirements. Also, if you do not specify configuration.velero.podConfig.resourceAllocations or configuration.restic.podConfig.resourceAllocations , the default resources specification for a Velero pod or a Restic pod is as follows: requests: cpu: 500m memory: 128Mi 4.14.6. PodVolumeRestore fails to complete when StorageClass is NFS The restore operation fails when there is more than one volume during a NFS restore by using Restic or Kopia . PodVolumeRestore either fails with the following error or keeps trying to restore before finally failing. Error message Velero: pod volume restore failed: data path restore failed: \ Failed to run kopia restore: Failed to copy snapshot data to the target: \ restore error: copy file: error creating file: \ open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: \ no such file or directory Cause The NFS mount path is not unique for the two volumes to restore. As a result, the velero lock files use the same file on the NFS server during the restore, causing the PodVolumeRestore to fail. Solution You can resolve this issue by setting up a unique pathPattern for each volume, while defining the StorageClass for nfs-subdir-external-provisioner in the deploy/class.yaml file. Use the following nfs-subdir-external-provisioner StorageClass example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: "USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}" 1 onDelete: delete 1 Specifies a template for creating a directory path by using PVC metadata such as labels, annotations, name, or namespace. To specify metadata, use USD{.PVC.<metadata>} . For example, to name a folder: <pvc-namespace>-<pvc-name> , use USD{.PVC.namespace}-USD{.PVC.name} as pathPattern . 4.14.7. Issues with Velero and admission webhooks Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plugin or make changes to how you restore the workload. Typically, workloads with admission webhooks require you to create a resource of a specific kind first. This is especially true if your workload has child resources because admission webhooks typically block child resources. For example, creating or restoring a top-level object such as service.serving.knative.dev typically creates child resources automatically. If you do this first, you will not need to use Velero to create and restore these resources. This avoids the problem of child resources being blocked by an admission webhook that Velero might use. 4.14.7.1. Restoring workarounds for Velero backups that use admission webhooks This section describes the additional steps required to restore resources for several types of Velero backups that use admission webhooks. 4.14.7.1.1. 
Restoring Knative resources You might encounter problems using Velero to back up Knative resources that use admission webhooks. You can avoid such problems by restoring the top level Service resource first whenever you back up and restore Knative resources that use admission webhooks. Procedure Restore the top level service.serving.knative.dev Service resource: USD velero restore <restore_name> \ --from-backup=<backup_name> --include-resources \ service.serving.knative.dev 4.14.7.1.2. Restoring IBM AppConnect resources If you experience issues when you use Velero to restore an IBM(R) AppConnect resource that has an admission webhook, you can run the checks in this procedure. Procedure Check if you have any mutating admission plugins of kind: MutatingWebhookConfiguration in the cluster: USD oc get mutatingwebhookconfigurations Examine the YAML file of each kind: MutatingWebhookConfiguration to ensure that none of its rules block creation of the objects that are experiencing issues. For more information, see the official Kubernetes documentation . Check that any spec.version in type: Configuration.appconnect.ibm.com/v1beta1 used at backup time is supported by the installed Operator. 4.14.7.2. OADP plugins known issues The following section describes known issues in OpenShift API for Data Protection (OADP) plugins: 4.14.7.2.1. Velero plugin panics during imagestream backups due to a missing secret When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation, does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret . When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error: 2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94... 4.14.7.2.1.1. Workaround to avoid the panic error To avoid the Velero plugin panic error, perform the following steps: Label the custom BSL with the relevant label: USD oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl After the BSL is labeled, wait until the DPA reconciles. Note You can force the reconciliation by making any minor change to the DPA itself. When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it: USD oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data' 4.14.7.2.2. OpenShift ADP Controller segmentation fault If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault. You can have either velero or cloudstorage defined, because they are mutually exclusive fields. If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails. If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails. For more information about this issue, see OADP-1054 . 4.14.7.2.2.1.
OpenShift ADP Controller segmentation fault workaround You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault. 4.14.7.3. Velero plugins returning "received EOF, stopping recv loop" message Note Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred. Additional resources Admission plugins Webhook admission plugins Types of webhook admission plugins 4.14.8. Installation issues You might encounter issues caused by using invalid directories or incorrect credentials when you install the Data Protection Application. 4.14.8.1. Backup storage contains invalid directories The Velero pod log displays the error message, Backup storage contains invalid top-level directories . Cause The object storage contains top-level directories that are not Velero directories. Solution If the object storage is not dedicated to Velero, you must specify a prefix for the bucket by setting the spec.backupLocations.velero.objectStorage.prefix parameter in the DataProtectionApplication manifest. 4.14.8.2. Incorrect AWS credentials The oadp-aws-registry pod log displays the error message, InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records. The Velero pod log displays the error message, NoCredentialProviders: no valid providers in chain . Cause The credentials-velero file used to create the Secret object is incorrectly formatted. Solution Ensure that the credentials-velero file is correctly formatted, as in the following example: Example credentials-velero file 1 AWS default profile. 2 Do not enclose the values with quotation marks ( " , ' ). 4.14.9. OADP Operator issues The OpenShift API for Data Protection (OADP) Operator might encounter issues caused by problems it is not able to resolve. 4.14.9.1. OADP Operator fails silently The S3 buckets of an OADP Operator might be empty, but when you run the command oc get po -n <OADP_Operator_namespace> , you see that the Operator has a status of Running . In such a case, the Operator is said to have failed silently because it incorrectly reports that it is running. Cause The problem is caused when cloud credentials provide insufficient permissions. Solution Retrieve a list of backup storage locations (BSLs) and check the manifest of each BSL for credential issues. Procedure Run one of the following commands to retrieve a list of BSLs: Using the OpenShift CLI: USD oc get backupstoragelocations.velero.io -A Using the Velero CLI: USD velero backup-location get -n <OADP_Operator_namespace> Using the list of BSLs, run the following command to display the manifest of each BSL, and examine each manifest for an error. 
USD oc get backupstoragelocations.velero.io -n <namespace> -o yaml Example result apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: "2023-11-03T19:49:04Z" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: "24273698" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: "true" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: "2023-11-10T22:06:46Z" message: "BackupStorageLocation \"example-dpa-1\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54" phase: Unavailable kind: List metadata: resourceVersion: "" 4.14.10. OADP timeouts Extending a timeout allows complex or resource-intensive processes to complete successfully without premature termination. This configuration can reduce the likelihood of errors, retries, or failures. Ensure that you balance timeout extensions in a logical manner so that you do not configure excessively long timeouts that might hide underlying issues in the process. Carefully consider and monitor an appropriate timeout value that meets the needs of the process and the overall system performance. The following are various OADP timeouts, with instructions of how and when to implement these parameters: 4.14.10.1. Restic timeout The spec.configuration.nodeAgent.timeout parameter defines the Restic timeout. The default value is 1h . Use the Restic timeout parameter in the nodeAgent section for the following scenarios: For Restic backups with total PV data usage that is greater than 500GB. If backups are timing out with the following error: level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" Procedure Edit the values in the spec.configuration.nodeAgent.timeout block of the DataProtectionApplication custom resource (CR) manifest, as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h # ... 4.14.10.2. Velero resource timeout resourceTimeout defines how long to wait for several Velero resources before timeout occurs, such as Velero custom resource definition (CRD) availability, volumeSnapshot deletion, and repository availability. The default is 10m . Use the resourceTimeout for the following scenarios: For backups with total PV data usage that is greater than 1TB. This parameter is used as a timeout value when Velero tries to clean up or delete the Container Storage Interface (CSI) snapshots, before marking the backup as complete. A sub-task of this cleanup tries to patch VSC and this timeout can be used for that task. To create or ensure a backup repository is ready for filesystem based backups for Restic or Kopia. To check if the Velero CRD is available in the cluster before restoring the custom resource (CR) or resource from the backup. 
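Before you increase this value, you can check whether a resourceTimeout is already set on your DPA. The following check is a sketch; <dpa_name> is a placeholder, and the DPA is assumed to be in the openshift-adp namespace. If the field is not set, the command returns empty output: USD oc get dpa <dpa_name> -n openshift-adp -o jsonpath='{.spec.configuration.velero.resourceTimeout}'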
Procedure Edit the values in the spec.configuration.velero.resourceTimeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m # ... 4.14.10.3. Data Mover timeout timeout is a user-supplied timeout to complete VolumeSnapshotBackup and VolumeSnapshotRestore . The default value is 10m . Use the Data Mover timeout for the following scenarios: If creation of VolumeSnapshotBackups (VSBs) and VolumeSnapshotRestores (VSRs), times out after 10 minutes. For large scale environments with total PV data usage that is greater than 500GB. Set the timeout for 1h . With the VolumeSnapshotMover (VSM) plugin. Only with OADP 1.1.x. Procedure Edit the values in the spec.features.dataMover.timeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m # ... 4.14.10.4. CSI snapshot timeout CSISnapshotTimeout specifies the time during creation to wait until the CSI VolumeSnapshot status becomes ReadyToUse , before returning error as timeout. The default value is 10m . Use the CSISnapshotTimeout for the following scenarios: With the CSI plugin. For very large storage volumes that may take longer than 10 minutes to snapshot. Adjust this timeout if timeouts are found in the logs. Note Typically, the default value for CSISnapshotTimeout does not require adjustment, because the default setting can accommodate large storage volumes. Procedure Edit the values in the spec.csiSnapshotTimeout block of the Backup CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m # ... 4.14.10.5. Velero default item operation timeout defaultItemOperationTimeout defines how long to wait on asynchronous BackupItemActions and RestoreItemActions to complete before timing out. The default value is 1h . Use the defaultItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. To specify the amount of time a particular backup or restore should wait for the Asynchronous actions to complete. In the context of OADP features, this value is used for the Asynchronous actions involved in the Container Storage Interface (CSI) Data Mover feature. When defaultItemOperationTimeout is defined in the Data Protection Application (DPA) using the defaultItemOperationTimeout , it applies to both backup and restore operations. You can use itemOperationTimeout to define only the backup or only the restore of those CRs, as described in the following "Item operation timeout - restore", and "Item operation timeout - backup" sections. Procedure Edit the values in the spec.configuration.velero.defaultItemOperationTimeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h # ... 4.14.10.6. Item operation timeout - restore ItemOperationTimeout specifies the time that is used to wait for RestoreItemAction operations. The default value is 1h . Use the restore ItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. For Data Mover uploads and downloads to or from the BackupStorageLocation . 
If the restore action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Restore.spec.itemOperationTimeout block of the Restore CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h # ... 4.14.10.7. Item operation timeout - backup ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations. The default value is 1h . Use the backup ItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. For Data Mover uploads and downloads to or from the BackupStorageLocation . If the backup action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Backup.spec.itemOperationTimeout block of the Backup CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h # ... 4.14.11. Backup and Restore CR issues You might encounter these common issues with Backup and Restore custom resources (CRs). 4.14.11.1. Backup CR cannot retrieve volume The Backup CR displays the error message, InvalidVolume.NotFound: The volume 'vol-xxxx' does not exist . Cause The persistent volume (PV) and the snapshot locations are in different regions. Solution Edit the value of the spec.snapshotLocations.velero.config.region key in the DataProtectionApplication manifest so that the snapshot location is in the same region as the PV. Create a new Backup CR. 4.14.11.2. Backup CR status remains in progress The status of a Backup CR remains in the InProgress phase and does not complete. Cause If a backup is interrupted, it cannot be resumed. Solution Retrieve the details of the Backup CR: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ backup describe <backup> Delete the Backup CR: USD oc delete backups.velero.io <backup> -n openshift-adp You do not need to clean up the backup location because a Backup CR in progress has not uploaded files to object storage. Create a new Backup CR. View the Velero backup details USD velero backup describe <backup-name> --details 4.14.11.3. Backup CR status remains in PartiallyFailed The status of a Backup CR without Restic in use remains in the PartiallyFailed phase and does not complete. A snapshot of the affiliated PVC is not created. Cause If the backup is created based on the CSI snapshot class, but the label is missing, CSI snapshot plugin fails to create a snapshot. 
As a result, the Velero pod logs an error similar to the following: time="2023-02-17T16:33:13Z" level=error msg="Error backing up item" backup=openshift-adp/user1-backup-check5 error="error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label" logSource="/remote-source/velero/app/pkg/backup/backup.go:417" name=busybox-79799557b5-vprq Solution Delete the Backup CR: USD oc delete backups.velero.io <backup> -n openshift-adp If required, clean up the stored data on the BackupStorageLocation to free up space. Apply label velero.io/csi-volumesnapshot-class=true to the VolumeSnapshotClass object: USD oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true Create a new Backup CR. 4.14.12. Restic issues You might encounter these issues when you back up applications with Restic. 4.14.12.1. Restic permission error for NFS data volumes with root_squash enabled The Restic pod log displays the error message: controller=pod-volume-backup error="fork/exec/usr/bin/restic: permission denied" . Cause If your NFS data volumes have root_squash enabled, Restic maps to nfsnobody and does not have permission to create backups. Solution You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the DataProtectionApplication manifest: Create a supplemental group for Restic on the NFS data volume. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the spec.configuration.nodeAgent.supplementalGroups parameter and the group ID to the DataProtectionApplication manifest, as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # ... spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1 # ... 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 4.14.12.2. Restic Backup CR cannot be recreated after bucket is emptied If you create a Restic Backup CR for a namespace, empty the object storage bucket, and then recreate the Backup CR for the same namespace, the recreated Backup CR fails. The velero pod log displays the following error message: stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location? . Cause Velero does not recreate or update the Restic repository from the ResticRepository manifest if the Restic directories are deleted from object storage. See Velero issue 4421 for more information. Solution Remove the related Restic repository from the namespace by running the following command: USD oc delete resticrepository openshift-adp <name_of_the_restic_repository> In the following error log, mysql-persistent is the problematic Restic repository. The name of the repository appears in italics for clarity. 
time="2021-12-29T18:29:14Z" level=info msg="1 errors encountered backup up item" backup=velero/backup65 logSource="pkg/backup/backup.go:431" name=mysql-7d99fc949-qbkds time="2021-12-29T18:29:14Z" level=error msg="Error backing up item" backup=velero/backup65 error="pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location?\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \n: exit status 1" error.file="/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184" error.function="github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes" logSource="pkg/backup/backup.go:435" name=mysql-7d99fc949-qbkds 4.14.12.3. Restic restore partially failing on OCP 4.14 due to changed PSA policy OpenShift Container Platform 4.14 enforces a Pod Security Admission (PSA) policy that can hinder the readiness of pods during a Restic restore process. If a SecurityContextConstraints (SCC) resource is not found when a pod is created, and the PSA policy on the pod is not set up to meet the required standards, pod admission is denied. This issue arises due to the resource restore order of Velero. Sample error \"level=error\" in line#2273: time=\"2023-06-12T06:50:04Z\" level=error msg=\"error restoring mysql-869f9f44f6-tp5lv: pods\\\ "mysql-869f9f44f6-tp5lv\\\" is forbidden: violates PodSecurity\\\ "restricted:v1.24\\\": privil eged (container \\\"mysql\\\ " must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), seccompProfile (pod or containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\ "RuntimeDefault\\\" or \\\"Localhost\\\")\" logSource=\"/remote-source/velero/app/pkg/restore/restore.go:1388\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\n velero container contains \"level=error\" in line#2447: time=\"2023-06-12T06:50:05Z\" level=error msg=\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\ "mysql-869f9f44f6-tp5lv\\\" is forbidden: violates PodSecurity \\\"restricted:v1.24\\\": privileged (container \\\ "mysql\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\ "restic-wait\\\",\\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), seccompProfile (pod or containers \\\ "restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\ "RuntimeDefault\\\" or \\\"Localhost\\\")\" logSource=\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\n]", Solution In your DPA custom resource (CR), check or set the restore-resource-priorities field on the Velero server to ensure that securitycontextconstraints is listed in order before pods in the list of resources: USD oc get dpa -o yaml Example DPA CR # ... 
configuration: restic: enable: true velero: args: restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' 1 defaultPlugins: - gcp - openshift 1 If you have an existing restore resource priority list, ensure you combine that existing list with the complete list. Ensure that the security standards for the application pods are aligned, as provided in Fixing PodSecurity Admission warnings for deployments , to prevent deployment warnings. If the application is not aligned with security standards, an error can occur regardless of the SCC. Note This solution is temporary, and ongoing discussions are in progress to address it. Additional resources Fixing PodSecurity Admission warnings for deployments 4.14.13. Using the must-gather tool You can collect logs, metrics, and information about OADP custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can run the must-gather tool with the following data collection options: Full must-gather data collection collects Prometheus metrics, pod logs, and Velero CR information for all namespaces where the OADP Operator is installed. Essential must-gather data collection collects pod logs and Velero CR information for a specific duration of time, for example, one hour or 24 hours. Prometheus metrics and duplicate logs are not included. must-gather data collection with timeout. Data collection can take a long time if there are many failed Backup CRs. You can improve performance by setting a timeout value. Prometheus metrics data dump downloads an archive file containing the metrics data collected by Prometheus. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. You must use Red Hat Enterprise Linux (RHEL) 9 with OADP 1.3 and OADP 1.4. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: Full must-gather data collection, including Prometheus metrics: For OADP 1.3, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 For OADP 1.4, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 The data is saved as must-gather/must-gather.tar.gz . You can upload this file to a support case on the Red Hat Customer Portal . Essential must-gather data collection, without Prometheus metrics, for a specific time duration: For OADP 1.3, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 \ -- /usr/bin/gather_<time>_essential 1 1 Specify the time in hours. Allowed values are 1h , 6h , 24h , 72h , or all . For example, gather_1h_essential or gather_all_essential . For OADP 1.4, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 \ -- /usr/bin/gather_<time>_essential 1 1 Specify the time in hours. 
Allowed values are 1h , 6h , 24h , 72h , or all , for example, gather_1h_essential or gather_all_essential . must-gather data collection with timeout: For OADP 1.3, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 \ -- /usr/bin/gather_with_timeout <timeout> 1 1 Specify a timeout value in seconds. For OADP 1.4, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 \ -- /usr/bin/gather_with_timeout <timeout> 1 1 Specify a timeout value in seconds. Prometheus metrics data dump: For OADP 1.3, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_metrics_dump For OADP 1.4, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_metrics_dump This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz . Additional resources Gathering cluster data 4.14.13.1. Using must-gather with insecure TLS connections If a custom CA certificate is used, the must-gather pod fails to grab the output for velero logs/describe . To use the must-gather tool with insecure TLS connections, you can pass the gather_without_tls flag to the must-gather command. Procedure Pass the gather_without_tls flag, with value set to true , to the must-gather tool by using the following command: For OADP 1.3, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls <true/false> For OADP 1.4, run the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls <true/false> By default, the flag value is set to false . Set the value to true to allow insecure TLS connections. 4.14.13.2. Combining options when using the must-gather tool Currently, it is not possible to combine must-gather scripts, for example specifying a timeout threshold while permitting insecure TLS connections. In some situations, you can get around this limitation by setting up internal variables on the must-gather command line, such as the following examples: For OADP 1.3: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds> For OADP 1.4: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds> In these examples, set the skip_tls variable before running the gather_with_timeout script. The result is a combination of gather_with_timeout and gather_without_tls . The only other variables that you can specify this way are the following: logs_since , with a default value of 72h request_timeout , with a default value of 0s If DataProtectionApplication custom resource (CR) is configured with s3Url and insecureSkipTLS: true , the CR does not collect the necessary logs because of a missing CA certificate. To collect those logs, run the must-gather command with the following option: For OADP 1.3: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls true For OADP 1.4: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls true 4.14.14. 
OADP Monitoring The OpenShift Container Platform provides a monitoring stack that allows users and administrators to effectively monitor and manage their clusters, as well as monitor and analyze the workload performance of user applications and services running on the clusters, including receiving alerts if an event occurs. Additional resources About OpenShift Container Platform monitoring 4.14.14.1. OADP monitoring setup The OADP Operator leverages an OpenShift User Workload Monitoring provided by the OpenShift Monitoring Stack for retrieving metrics from the Velero service endpoint. The monitoring stack allows creating user-defined Alerting Rules or querying metrics by using the OpenShift Metrics query front end. With enabled User Workload Monitoring, it is possible to configure and use any Prometheus-compatible third-party UI, such as Grafana, to visualize Velero metrics. Monitoring metrics requires enabling monitoring for the user-defined projects and creating a ServiceMonitor resource to scrape those metrics from the already enabled OADP service endpoint that resides in the openshift-adp namespace. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have created a cluster monitoring config map. Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring Add or enable the enableUserWorkload option in the data section's config.yaml field: apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata: # ... 1 Add this option or set to true Wait a short period of time to verify the User Workload Monitoring Setup by checking if the following components are up and running in the openshift-user-workload-monitoring namespace: USD oc get pods -n openshift-user-workload-monitoring Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s Verify the existence of the user-workload-monitoring-config ConfigMap in the openshift-user-workload-monitoring . If it exists, skip the remaining steps in this procedure. USD oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring Example output Error from server (NotFound): configmaps "user-workload-monitoring-config" not found Create a user-workload-monitoring-config ConfigMap object for the User Workload Monitoring, and save it under the 2_configure_user_workload_monitoring.yaml file name: Example output apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | Apply the 2_configure_user_workload_monitoring.yaml file: USD oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created 4.14.14.2. Creating OADP service monitor OADP provides an openshift-adp-velero-metrics-svc service which is created when the DPA is configured. The service monitor used by the user workload monitoring must point to the defined service. Get details about the service by running the following commands: Procedure Ensure the openshift-adp-velero-metrics-svc service exists. It should contain app.kubernetes.io/name=velero label, which will be used as selector for the ServiceMonitor object. 
USD oc get svc -n openshift-adp -l app.kubernetes.io/name=velero Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h Create a ServiceMonitor YAML file that matches the existing service label, and save the file as 3_create_oadp_service_monitor.yaml . The service monitor is created in the openshift-adp namespace where the openshift-adp-velero-metrics-svc service resides. Example ServiceMonitor object apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: "velero" Apply the 3_create_oadp_service_monitor.yaml file: USD oc apply -f 3_create_oadp_service_monitor.yaml Example output servicemonitor.monitoring.coreos.com/oadp-service-monitor created Verification Confirm that the new service monitor is in an Up state by using the Administrator perspective of the OpenShift Container Platform web console: Navigate to the Observe Targets page. Ensure the Filter is unselected or that the User source is selected and type openshift-adp in the Text search field. Verify that the status for the Status for the service monitor is Up . Figure 4.1. OADP metrics targets 4.14.14.3. Creating an alerting rule The OpenShift Container Platform monitoring stack allows to receive Alerts configured using Alerting Rules. To create an Alerting rule for the OADP project, use one of the Metrics which are scraped with the user workload monitoring. Procedure Create a PrometheusRule YAML file with the sample OADPBackupFailing alert and save it as 4_create_oadp_alert_rule.yaml . Sample OADPBackupFailing alert apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"}[2h]) > 0 for: 5m labels: severity: warning In this sample, the Alert displays under the following conditions: There is an increase of new failing backups during the 2 last hours that is greater than 0 and the state persists for at least 5 minutes. If the time of the first increase is less than 5 minutes, the Alert will be in a Pending state, after which it will turn into a Firing state. Apply the 4_create_oadp_alert_rule.yaml file, which creates the PrometheusRule object in the openshift-adp namespace: USD oc apply -f 4_create_oadp_alert_rule.yaml Example output prometheusrule.monitoring.coreos.com/sample-oadp-alert created Verification After the Alert is triggered, you can view it in the following ways: In the Developer perspective, select the Observe menu. In the Administrator perspective under the Observe Alerting menu, select User in the Filter box. Otherwise, by default only the Platform Alerts are displayed. Figure 4.2. OADP backup failing alert Additional resources Managing alerts as an Administrator 4.14.14.4. List of available metrics These are the list of metrics provided by the OADP together with their Types . 
Metric name Description Type kopia_content_cache_hit_bytes Number of bytes retrieved from the cache Counter kopia_content_cache_hit_count Number of times content was retrieved from the cache Counter kopia_content_cache_malformed Number of times malformed content was read from the cache Counter kopia_content_cache_miss_count Number of times content was not found in the cache and fetched Counter kopia_content_cache_missed_bytes Number of bytes retrieved from the underlying storage Counter kopia_content_cache_miss_error_count Number of times content could not be found in the underlying storage Counter kopia_content_cache_store_error_count Number of times content could not be saved in the cache Counter kopia_content_get_bytes Number of bytes retrieved using GetContent() Counter kopia_content_get_count Number of times GetContent() was called Counter kopia_content_get_error_count Number of times GetContent() was called and the result was an error Counter kopia_content_get_not_found_count Number of times GetContent() was called and the result was not found Counter kopia_content_write_bytes Number of bytes passed to WriteContent() Counter kopia_content_write_count Number of times WriteContent() was called Counter velero_backup_attempt_total Total number of attempted backups Counter velero_backup_deletion_attempt_total Total number of attempted backup deletions Counter velero_backup_deletion_failure_total Total number of failed backup deletions Counter velero_backup_deletion_success_total Total number of successful backup deletions Counter velero_backup_duration_seconds Time taken to complete backup, in seconds Histogram velero_backup_failure_total Total number of failed backups Counter velero_backup_items_errors Total number of errors encountered during backup Gauge velero_backup_items_total Total number of items backed up Gauge velero_backup_last_status Last status of the backup. A value of 1 is success, 0. Gauge velero_backup_last_successful_timestamp Last time a backup ran successfully, Unix timestamp in seconds Gauge velero_backup_partial_failure_total Total number of partially failed backups Counter velero_backup_success_total Total number of successful backups Counter velero_backup_tarball_size_bytes Size, in bytes, of a backup Gauge velero_backup_total Current number of existent backups Gauge velero_backup_validation_failure_total Total number of validation failed backups Counter velero_backup_warning_total Total number of warned backups Counter velero_csi_snapshot_attempt_total Total number of CSI attempted volume snapshots Counter velero_csi_snapshot_failure_total Total number of CSI failed volume snapshots Counter velero_csi_snapshot_success_total Total number of CSI successful volume snapshots Counter velero_restore_attempt_total Total number of attempted restores Counter velero_restore_failed_total Total number of failed restores Counter velero_restore_partial_failure_total Total number of partially failed restores Counter velero_restore_success_total Total number of successful restores Counter velero_restore_total Current number of existent restores Gauge velero_restore_validation_failed_total Total number of failed restores failing validations Counter velero_volume_snapshot_attempt_total Total number of attempted volume snapshots Counter velero_volume_snapshot_failure_total Total number of failed volume snapshots Counter velero_volume_snapshot_success_total Total number of successful volume snapshots Counter 4.14.14.5. 
Viewing metrics using the Observe UI You can view metrics in the OpenShift Container Platform web console from the Administrator or Developer perspective, which must have access to the openshift-adp project. Procedure Navigate to the Observe Metrics page: If you are using the Developer perspective, follow these steps: Select Custom query , or click on the Show PromQL link. Type the query and click Enter . If you are using the Administrator perspective, type the expression in the text field and select Run Queries . Figure 4.3. OADP metrics query 4.15. APIs used with OADP The document provides information about the following APIs that you can use with OADP: Velero API OADP API 4.15.1. Velero API Velero API documentation is maintained by Velero, not by Red Hat. It can be found at Velero API types . 4.15.2. OADP API The following tables provide the structure of the OADP API: Table 4.6. DataProtectionApplicationSpec Property Type Description backupLocations [] BackupLocation Defines the list of configurations to use for BackupStorageLocations . snapshotLocations [] SnapshotLocation Defines the list of configurations to use for VolumeSnapshotLocations . unsupportedOverrides map [ UnsupportedImageKey ] string Can be used to override the deployed dependent images for development. Options are veleroImageFqin , awsPluginImageFqin , openshiftPluginImageFqin , azurePluginImageFqin , gcpPluginImageFqin , csiPluginImageFqin , dataMoverImageFqin , resticRestoreImageFqin , kubevirtPluginImageFqin , and operator-type . podAnnotations map [ string ] string Used to add annotations to pods deployed by Operators. podDnsPolicy DNSPolicy Defines the configuration of the DNS of a pod. podDnsConfig PodDNSConfig Defines the DNS parameters of a pod in addition to those generated from DNSPolicy . backupImages * bool Used to specify whether or not you want to deploy a registry for enabling backup and restore of images. configuration * ApplicationConfig Used to define the data protection application's server configuration. features * Features Defines the configuration for the DPA to enable the Technology Preview features. Complete schema definitions for the OADP API . Table 4.7. BackupLocation Property Type Description velero * velero.BackupStorageLocationSpec Location to store volume snapshots, as described in Backup Storage Location . bucket * CloudStorageLocation [Technology Preview] Automates creation of a bucket at some cloud storage providers for use as a backup storage location. Important The bucket parameter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Complete schema definitions for the type BackupLocation . Table 4.8. SnapshotLocation Property Type Description velero * VolumeSnapshotLocationSpec Location to store volume snapshots, as described in Volume Snapshot Location . Complete schema definitions for the type SnapshotLocation . Table 4.9. ApplicationConfig Property Type Description velero * VeleroConfig Defines the configuration for the Velero server. restic * ResticConfig Defines the configuration for the Restic server. 
Complete schema definitions for the type ApplicationConfig . Table 4.10. VeleroConfig Property Type Description featureFlags [] string Defines the list of features to enable for the Velero instance. defaultPlugins [] string The following types of default Velero plugins can be installed: aws , azure , csi , gcp , kubevirt , and openshift . customPlugins [] CustomPlugin Used for installation of custom Velero plugins. Default and custom plugins are described in OADP plugins restoreResourcesVersionPriority string Represents a config map that is created if defined for use in conjunction with the EnableAPIGroupVersions feature flag. Defining this field automatically adds EnableAPIGroupVersions to the Velero server feature flag. noDefaultBackupLocation bool To install Velero without a default backup storage location, you must set the noDefaultBackupLocation flag in order to confirm installation. podConfig * PodConfig Defines the configuration of the Velero pod. logLevel string Velero server's log level (use debug for the most granular logging, leave unset for Velero default). Valid options are trace , debug , info , warning , error , fatal , and panic . Complete schema definitions for the type VeleroConfig . Table 4.11. CustomPlugin Property Type Description name string Name of custom plugin. image string Image of custom plugin. Complete schema definitions for the type CustomPlugin . Table 4.12. ResticConfig Property Type Description enable * bool If set to true , enables backup and restore using Restic. If set to false , snapshots are needed. supplementalGroups [] int64 Defines the Linux groups to be applied to the Restic pod. timeout string A user-supplied duration string that defines the Restic timeout. Default value is 1hr (1 hour). A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h` or 2h45m . Valid time units are ns , us (or ms ), ms , s , m , and h . podConfig * PodConfig Defines the configuration of the Restic pod. Complete schema definitions for the type ResticConfig . Table 4.13. PodConfig Property Type Description nodeSelector map [ string ] string Defines the nodeSelector to be supplied to a Velero podSpec or a Restic podSpec . For more details, see Configuring node agents and node labels . tolerations [] Toleration Defines the list of tolerations to be applied to a Velero deployment or a Restic daemonset . resourceAllocations ResourceRequirements Set specific resource limits and requests for a Velero pod or a Restic pod as described in Setting Velero CPU and memory resource allocations . labels map [ string ] string Labels to add to pods. 4.15.2.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. 
For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" Complete schema definitions for the type PodConfig . Table 4.14. Features Property Type Description dataMover * DataMover Defines the configuration of the Data Mover. Complete schema definitions for the type Features . Table 4.15. DataMover Property Type Description enable bool If set to true , deploys the volume snapshot mover controller and a modified CSI Data Mover plugin. If set to false , these are not deployed. credentialName string User-supplied Restic Secret name for Data Mover. timeout string A user-supplied duration string for VolumeSnapshotBackup and VolumeSnapshotRestore to complete. Default is 10m (10 minutes). A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h` or 2h45m . Valid time units are ns , us (or ms ), ms , s , m , and h . The OADP API is more fully detailed in OADP Operator . 4.16. Advanced OADP features and functionalities This document provides information about advanced features and functionalities of OpenShift API for Data Protection (OADP). 4.16.1. Working with different Kubernetes API versions on the same cluster 4.16.1.1. Listing the Kubernetes API group versions on a cluster A source cluster might offer multiple versions of an API, where one of these versions is the preferred API version. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups. If you use Velero to back up and restore such a source cluster, Velero backs up only the version of that resource that uses the preferred version of its Kubernetes API. To return to the above example, if example.com/v1 is the preferred API, then Velero only backs up the version of a resource that uses example.com/v1 . Moreover, the target cluster needs to have example.com/v1 registered in its set of available API resources in order for Velero to restore the resource on the target cluster. Therefore, you need to generate a list of the Kubernetes API group versions on your target cluster to be sure the preferred API version is registered in its set of available API resources. Procedure Enter the following command: USD oc api-resources 4.16.1.2. About Enable API Group Versions By default, Velero only backs up resources that use the preferred version of the Kubernetes API. However, Velero also includes a feature, Enable API Group Versions , that overcomes this limitation. When enabled on the source cluster, this feature causes Velero to back up all Kubernetes API group versions that are supported on the cluster, not only the preferred one. After the versions are stored in the backup .tar file, they are available to be restored on the destination cluster. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups, with example.com/v1 being the preferred API. Without the Enable API Group Versions feature enabled, Velero backs up only the preferred API group version for Example , which is example.com/v1 . 
With the feature enabled, Velero also backs up example.com/v1beta2 . When the Enable API Group Versions feature is enabled on the destination cluster, Velero selects the version to restore on the basis of the order of priority of API group versions. Note Enable API Group Versions is still in beta. Velero uses the following algorithm to assign priorities to API versions, with 1 as the top priority: Preferred version of the destination cluster Preferred version of the source cluster Common non-preferred supported version with the highest Kubernetes version priority Additional resources Enable API Group Versions Feature 4.16.1.3. Using Enable API Group Versions You can use Velero's Enable API Group Versions feature to back up all Kubernetes API group versions that are supported on a cluster, not only the preferred one. Note Enable API Group Versions is still in beta. Procedure Configure the EnableAPIGroupVersions feature flag: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: featureFlags: - EnableAPIGroupVersions Additional resources Enable API Group Versions Feature 4.16.2. Backing up data from one cluster and restoring it to another cluster 4.16.2.1. About backing up data from one cluster and restoring it on another cluster OpenShift API for Data Protection (OADP) is designed to back up and restore application data in the same OpenShift Container Platform cluster. Migration Toolkit for Containers (MTC) is designed to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster. You can use OADP to back up application data from one OpenShift Container Platform cluster and restore it on another cluster. However, doing so is more complicated than using MTC or using OADP to back up and restore on the same cluster. To successfully use OADP to back up data from one cluster and restore it to another cluster, you must take into account the following factors, in addition to the prerequisites and procedures that apply to using OADP to back up and restore data on the same cluster: Operators Use of Velero UID and GID ranges 4.16.2.1.1. Operators You must exclude Operators from the backup of an application for backup and restore to succeed. 4.16.2.1.2. Use of Velero Velero, which OADP is built upon, does not natively support migrating persistent volume snapshots across cloud providers. To migrate volume snapshot data between cloud platforms, you must either enable the Velero Restic file system backup option, which backs up volume contents at the file system level, or use the OADP Data Mover for CSI snapshots. Note In OADP 1.1 and earlier, the Velero Restic file system backup option is called restic . In OADP 1.2 and later, the Velero Restic file system backup option is called file-system-backup . You must also use Velero's File System Backup to migrate data between AWS regions or between Microsoft Azure regions. Velero does not support restoring data to a cluster with an earlier Kubernetes version than the source cluster. It is theoretically possible to migrate workloads to a destination with a later Kubernetes version than the source, but you must consider the compatibility of API groups between clusters for each custom resource. If a Kubernetes version upgrade breaks the compatibility of core or native API groups, you must first update the impacted custom resources. 4.16.2.2.
4.16.2.2. About determining which pod volumes to back up Before you start a backup operation by using File System Backup (FSB), you must specify which pods contain a volume that you want to back up. Velero refers to this process as "discovering" the appropriate pod volumes. Velero supports two approaches for determining pod volumes. Use the opt-in or the opt-out approach to allow Velero to decide between an FSB, a volume snapshot, or a Data Mover backup. Opt-in approach : With the opt-in approach, volumes are backed up using snapshot or Data Mover by default. FSB is used on specific volumes that are opted-in by annotations. Opt-out approach : With the opt-out approach, volumes are backed up using FSB by default. Snapshots or Data Mover is used on specific volumes that are opted-out by annotations. 4.16.2.2.1. Limitations FSB does not support backing up and restoring hostPath volumes. However, FSB does support backing up and restoring local volumes. Velero uses a static, common encryption key for all backup repositories it creates. This static key means that anyone who can access your backup storage can also decrypt your backup data . It is essential that you limit access to backup storage. For PVCs, every incremental backup chain is maintained across pod reschedules. For pod volumes that are not PVCs, such as emptyDir volumes, if a pod is deleted or recreated, for example, by a ReplicaSet or a deployment, the backup of those volumes will be a full backup and not an incremental backup. It is assumed that the lifecycle of a pod volume is defined by its pod. Even though backup data can be kept incrementally, backing up large files, such as a database, can take a long time. This is because FSB uses deduplication to find the difference that needs to be backed up. FSB reads and writes data from volumes by accessing the file system of the node on which the pod is running. For this reason, FSB can only back up volumes that are mounted from a pod and not directly from a PVC. Some Velero users have overcome this limitation by running a staging pod, such as a BusyBox or Alpine container with an infinite sleep, to mount these PVC and PV pairs before performing a Velero backup. FSB expects volumes to be mounted under <hostPath>/<pod UID> , with <hostPath> being configurable. Some Kubernetes systems, for example, vCluster, do not mount volumes under the <pod UID> subdirectory, and FSB does not work with them as expected. 4.16.2.2.2. Backing up pod volumes by using the opt-in method You can use the opt-in method to specify which volumes need to be backed up by File System Backup (FSB). You can do this by using the backup.velero.io/backup-volumes annotation. Procedure On each pod that contains one or more volumes that you want to back up, enter the following command: USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n> where: <your_volume_name_x> specifies the name of the xth volume in the pod specification. 4.16.2.2.3. Backing up pod volumes by using the opt-out method When using the opt-out approach, all pod volumes are backed up by using File System Backup (FSB), although there are some exceptions: Volumes that mount the default service account token, secrets, and configuration maps. hostPath volumes You can use the opt-out method to specify which volumes not to back up. You can do this by using the backup.velero.io/backup-volumes-excludes annotation.
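For example, a minimal sketch with hypothetical names: to keep a scratch volume named cache on a pod named mysql-0 in the database namespace out of the FSB backup, you might annotate the pod as follows; the generic form of the command is shown in the procedure that follows:

USD oc -n database annotate pod/mysql-0 backup.velero.io/backup-volumes-excludes=cache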
Procedure On each pod that contains one or more volumes that you do not want to back up, run the following command: USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n> where: <your_volume_name_x> specifies the name of the xth volume in the pod specification. Note You can enable this behavior for all Velero backups by running the velero install command with the --default-volumes-to-fs-backup flag. 4.16.2.3. UID and GID ranges If you back up data from one cluster and restore it to another cluster, problems might occur with UID (User ID) and GID (Group ID) ranges. The following section explains these potential issues and mitigations: Summary of the issues The namespace UID and GID ranges might change depending on the destination cluster. OADP does not back up and restore OpenShift UID range metadata. If the backed up application requires a specific UID, ensure the range is available upon restore. For more information about OpenShift's UID and GID ranges, see A Guide to OpenShift and UIDs . Detailed description of the issues When you create a namespace in OpenShift Container Platform by using the shell command oc create namespace , OpenShift Container Platform assigns the namespace a unique User ID (UID) range from its available pool of UIDs, a Supplemental Group (GID) range, and unique SELinux MCS labels. This information is stored in the metadata.annotations field of the namespace. This information is part of the Security Context Constraints (SCC) annotations, which comprise the following components: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range When you use OADP to restore the namespace, it automatically uses the information in metadata.annotations without resetting it for the destination cluster. As a result, the workload might not have access to the backed up data if any of the following is true: There is an existing namespace with other SCC annotations, for example, on another cluster. In this case, OADP uses the existing namespace during the backup instead of the namespace you want to restore. A label selector was used during the backup, but the namespace in which the workloads are executed does not have the label. In this case, OADP does not back up the namespace, but creates a new namespace during the restore that does not contain the annotations of the backed up namespace. This results in a new UID range being assigned to the namespace. This can be an issue for customer workloads if OpenShift Container Platform assigns a securityContext UID to a pod based on namespace annotations that have changed since the persistent volume data was backed up. The UID of the container no longer matches the UID of the file owner. An error occurs because OpenShift Container Platform has not changed the UID range of the destination cluster to match the backup cluster data. As a result, the backup cluster has a different UID than the destination cluster, which means that the application cannot read or write data on the destination cluster. Mitigations You can use one or more of the following mitigations to resolve the UID and GID range issues: Simple mitigations: If you use a label selector in the Backup CR to filter the objects to include in the backup, be sure to add this label selector to the namespace that contains the workload.
Remove any pre-existing version of a namespace on the destination cluster before attempting to restore a namespace with the same name. Advanced mitigations: Fix UID ranges after migration by Resolving overlapping UID ranges in OpenShift namespaces after migration . Step 1 is optional. For an in-depth discussion of UID and GID ranges in OpenShift Container Platform with an emphasis on overcoming issues in backing up data on one cluster and restoring it on another, see A Guide to OpenShift and UIDs . 4.16.2.4. Backing up data from one cluster and restoring it to another cluster In general, you back up data from one OpenShift Container Platform cluster and restore it on another OpenShift Container Platform cluster in the same way that you back up and restore data to the same cluster. However, there are some additional prerequisites and differences in the procedure when backing up data from one OpenShift Container Platform cluster and restoring it on another. Prerequisites All relevant prerequisites for backing up and restoring on your platform (for example, AWS, Microsoft Azure, GCP, and so on), especially the prerequisites for the Data Protection Application (DPA), are described in the relevant sections of this guide. Procedure Make the following additions to the procedures given for your platform: Ensure that the backup storage location (BSL) and volume snapshot location have the same names and paths to restore resources to another cluster. Share the same object storage location credentials across the clusters. For best results, use OADP to create the namespace on the destination cluster. If you use the Velero file-system-backup option, enable the --default-volumes-to-fs-backup flag for use during backup by running the following command: USD velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options> Note In OADP 1.2 and later, the Velero Restic option is called file-system-backup . Important Before restoring a CSI backup, edit the VolumeSnapshotClass custom resource (CR), and set the snapshot.storage.kubernetes.io/is-default-class parameter to false. Otherwise, the restore will partially fail because the target cluster already has a default VolumeSnapshotClass for the same driver. 4.16.3. OADP storage class mapping 4.16.3.1. Storage class mapping Storage class mapping allows you to define rules or policies specifying which storage class should be applied to different types of data. This feature automates the process of determining storage classes based on access frequency, data importance, and cost considerations. It optimizes storage efficiency and cost-effectiveness by ensuring that data is stored in the most suitable storage class for its characteristics and usage patterns. You can use the change-storage-class-config field to change the storage class of your data objects, which lets you optimize costs and performance by moving data between different storage tiers, such as from standard to archival storage, based on your needs and access patterns. 4.16.3.1.1. Storage class mapping with Migration Toolkit for Containers You can use the Migration Toolkit for Containers (MTC) to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster and for storage class mapping and conversion. You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the MTC web console. 4.16.3.1.2.
Mapping storage classes with OADP You can use OpenShift API for Data Protection (OADP) with the Velero plugin v1.1.0 and later to change the storage class of a persistent volume (PV) during restores, by configuring a storage class mapping in the config map in the Velero namespace. To deploy ConfigMap with OADP, use the change-storage-class-config field. You must change the storage class mapping based on your cloud provider. Procedure Change the storage class mapping by running the following command: USD cat change-storageclass.yaml Create a config map in the Velero namespace as shown in the following example: Example apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: "" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi Save your storage class mapping preferences by running the following command: USD oc create -f change-storage-class-config 4.16.4. Additional resources Working with different Kubernetes API versions on the same cluster . Backing up applications with File System Backup: Kopia or Restic . Migration converting storage classes . | [
"Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key.",
"found a podvolumebackup with status \"InProgress\" during the server starting, mark it as \"Failed\".",
"data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found",
"The generated label name is too long.",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"spec: configuration: nodeAgent: enable: true uploaderType: restic",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"spec: configuration: features: dataMover: enable: true credentialName: dm-credentials velero: defaultPlugins: - vsm - csi - openshift",
"spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - csi - openshift",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: example-backup namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - mysql-persistent storageLocation: dpa-sample-1 ttl: 720h0m0s",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name> 1",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"oc get route s3 -n openshift-storage",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3",
"oc apply -f <restore_cr_filename>",
"oc describe restores.velero.io <restore_name> -n openshift-adp",
"oc project test-restore-application",
"oc get pvc,svc,deployment,secret,configmap",
"NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name>",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\\.crt}' | base64 -w0; echo",
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0 ....gpwOHMwaG9CRmk5a3....FLS0tLS0K",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"false\" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - legacy-aws 1 - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backups.velero.io test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"resources: mds: limits: cpu: \"3\" memory: 128Gi requests: cpu: \"3\" memory: 8Gi",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: \"true\" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: \"50..c-4da1-419f-a16e-ei...49f\" 12 customerKeyEncryptionFile: \"/credentials/customer-key\" 13 signatureVersion: \"1\" 14 profile: \"default\" 15 insecureSkipTLSVerify: \"true\" 16 enableSharedConfig: \"true\" 17 tagging: \"\" 18 checksumAlgorithm: \"CRC32\" 19",
"snapshotLocations: - velero: config: profile: default region: <region> provider: aws",
"dd if=/dev/urandom bs=1 count=32 > sse.key",
"cat sse.key | base64 > sse_encoded.key",
"ln -s sse_encoded.key customer-key",
"oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key",
"apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret",
"spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default",
"echo \"encrypt me please\" > test.txt",
"aws s3api put-object --bucket <bucket> --key test.txt --body test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256",
"s3cmd get s3://<bucket>/test.txt test.txt",
"aws s3api get-object --bucket <bucket> --key test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 downloaded.txt",
"cat downloaded.txt",
"encrypt me please",
"aws s3api get-object --bucket <bucket> --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 --debug velero_download.tar.gz",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: \"default\" s3ForcePathStyle: \"true\" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: \"default\" credential: key: cloud name: cloud-credentials 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: \"\" 1 insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"ibmcloud plugin install cos -f",
"BUCKET=<bucket_name>",
"REGION=<bucket_region> 1",
"ibmcloud resource group-create <resource_group_name>",
"ibmcloud target -g <resource_group_name>",
"ibmcloud target",
"API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default",
"RESOURCE_GROUP=<resource_group> 1",
"ibmcloud resource service-instance-create <service_instance_name> \\ 1 <service_name> \\ 2 <service_plan> \\ 3 <region_name> 4",
"ibmcloud resource service-instance-create test-service-instance cloud-object-storage \\ 1 standard global -d premium-global-deployment 2",
"SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')",
"ibmcloud cos bucket-create \\// --bucket USDBUCKET \\// --ibm-service-instance-id USDSERVICE_INSTANCE_ID \\// --region USDREGION",
"ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\\\"HMAC\\\":true}",
"cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" provider: azure",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"mkdir -p oadp-credrequest",
"echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml",
"ccoctl gcp create-service-accounts --name=<name> --project=<gcp_project_id> --credentials-requests-dir=oadp-credrequest --workload-identity-pool=<pool_id> --workload-identity-provider=<provider_id>",
"oc create namespace <OPERATOR_INSTALL_NS>",
"oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: profile: \"default\" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: \"default\" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3",
"oc apply -f <backup_cr_file_name> 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true",
"oc apply -f <restore_cr_file_name> 1",
"oc label vm <vm_name> app=<vm_name> -n openshift-adp",
"apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2",
"oc apply -f <restore_cr_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1",
"oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: \"true\" profile: noobaa region: <region_name> 3 s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"oc get bsl",
"NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s 5 labelSelector: 6 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 7 - matchLabels: app: <label_1> app: <label_2> app: <label_3>",
"oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11",
"oc get backupStorageLocations -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s 5 EOF",
"schedule: \"*/10 * * * *\"",
"oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'",
"apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1",
"oc apply -f <deletebackuprequest_cr_filename>",
"velero backup delete <backup_name> -n openshift-adp 1",
"pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m",
"not due for full maintenance cycle until 2024-00-00 18:29:4",
"oc get backuprepositories.velero.io -n openshift-adp",
"oc delete backuprepository <backup_repository_name> -n openshift-adp 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3",
"oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'",
"oc get all -n <namespace> 1",
"bash dc-restic-post-restore.sh -> dc-post-restore.sh",
"#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo \"restore: USD1\" label=USD(label_name USD1) echo \"label: USDlabel\" echo Deleting disconnected restore pods delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\" export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH} echo \"Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}\" --output text) 1",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUploads\", \"s3:ListMultipartUploadParts\", \"s3:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name \"RosaOadpVer1\" --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp --output text) fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\":2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc get sub -o yaml redhat-oadp-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: \"2025-01-15T07:18:31Z\" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: \"\" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: \"77363\" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2",
"oc patch subscription redhat-oadp-operator -p '{\"spec\": {\"config\": {\"env\": [{\"name\": \"ROLEARN\", \"value\": \"<role_arn>\"}]}}}' --type='merge'",
"oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift",
"oc create -f <dpa_manifest_file>",
"oc get dpa -n openshift-adp -o yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication status: conditions: - lastTransitionTime: \"2023-07-31T04:48:12Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"export CLUSTER_NAME= <AWS_cluster_name> 1",
"export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{\"\\n\"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\"",
"export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH}",
"echo \"Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"export POLICY_NAME=\"OadpVer1\" 1",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}\" --output text)",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --output text) 1 fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa_sample namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift - aws - csi resourceTimeout: 10m nodeAgent: enable: true uploaderType: kopia backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 1 prefix: <prefix> 2 config: region: <region> 3 profile: \"default\" s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials",
"oc create -f dpa.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-install-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale 1 includedResources: - operatorgroups - subscriptions - namespaces itemOperationTimeout: 1h0m0s snapshotMoveData: false ttl: 720h0m0s",
"oc create -f backup.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-secrets namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - secrets itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc create -f backup-secret.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-apim namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - apimanagers itemOperationTimeout: 1h0m0s snapshotMoveData: false snapshotVolumes: false storageLocation: ts-dpa-1 ttl: 720h0m0s volumeSnapshotLocations: - ts-dpa-1",
"oc create -f backup-apimanager.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-claim namespace: threescale spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3-csi volumeMode: Filesystem",
"oc create -f ts_pvc.yml",
"oc edit deployment system-mysql -n threescale",
"volumeMounts: - name: example-claim mountPath: /var/lib/mysqldump/data - name: mysql-storage mountPath: /var/lib/mysql/data - name: mysql-extra-conf mountPath: /etc/my-extra.d - name: mysql-main-conf mountPath: /etc/my-extra serviceAccount: amp volumes: - name: example-claim persistentVolumeClaim: claimName: example-claim 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: mysql-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true hooks: resources: - name: dumpdb pre: - exec: command: - /bin/sh - -c - mysqldump -u USDMYSQL_USER --password=USDMYSQL_PASSWORD system --no-tablespaces > /var/lib/mysqldump/data/dump.sql 1 container: system-mysql onError: Fail timeout: 5m includedNamespaces: 2 - threescale includedResources: - deployment - pods - replicationControllers - persistentvolumeclaims - persistentvolumes itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component_element: mysql snapshotMoveData: false ttl: 720h0m0s",
"oc create -f mysql.yaml",
"oc get backups.velero.io mysql-backup",
"NAME STATUS CREATED NAMESPACE POD VOLUME UPLOADER TYPE STORAGE LOCATION AGE mysql-backup-4g7qn Completed 30s threescale system-mysql-2-9pr44 example-claim kopia ts-dpa-1 30s mysql-backup-smh85 Completed 23s threescale system-mysql-2-9pr44 mysql-storage kopia ts-dpa-1 30s",
"oc edit deployment backend-redis -n threescale",
"annotations: post.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 100\"] pre.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 0\"]",
"apiVersion: velero.io/v1 kind: Backup metadata: name: redis-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true includedNamespaces: - threescale includedResources: - deployment - pods - replicationcontrollers - persistentvolumes - persistentvolumeclaims itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component: backend threescale_component_element: redis snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc get backups.velero.io redis-backup -o yaml",
"oc get backups.velero.io",
"oc delete project threescale",
"\"threescale\" project deleted successfully",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-installation-restore namespace: openshift-adp spec: backupName: operator-install-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore.yaml",
"oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: s3-credentials namespace: threescale stringData: AWS_ACCESS_KEY_ID: <ID_123456> 1 AWS_SECRET_ACCESS_KEY: <ID_98765544> 2 AWS_BUCKET: <mybucket.example.com> 3 AWS_REGION: <us-east-1> 4 type: Opaque EOF",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-secrets namespace: openshift-adp spec: backupName: operator-resources-secrets excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-secrets.yaml",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-apim namespace: openshift-adp spec: backupName: operator-resources-apim excludedResources: 1 - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-apimanager.yaml",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"deployment.apps/threescale-operator-controller-manager-v2 scaled",
"vi ./scaledowndeployment.sh",
"for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do oc scale deployment/USDdeployment --replicas=0 -n threescale done",
"./scaledowndeployment.sh",
"deployment.apps.openshift.io/apicast-production scaled deployment.apps.openshift.io/apicast-staging scaled deployment.apps.openshift.io/backend-cron scaled deployment.apps.openshift.io/backend-listener scaled deployment.apps.openshift.io/backend-redis scaled deployment.apps.openshift.io/backend-worker scaled deployment.apps.openshift.io/system-app scaled deployment.apps.openshift.io/system-memcache scaled deployment.apps.openshift.io/system-mysql scaled deployment.apps.openshift.io/system-redis scaled deployment.apps.openshift.io/system-searchd scaled deployment.apps.openshift.io/system-sidekiq scaled deployment.apps.openshift.io/zync scaled deployment.apps.openshift.io/zync-database scaled deployment.apps.openshift.io/zync-que scaled",
"oc delete deployment system-mysql -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"system-mysql\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-mysql namespace: openshift-adp spec: backupName: mysql-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io - resticrepositories.velero.io hooks: resources: - name: restoreDB postHooks: - exec: command: - /bin/sh - '-c' - > sleep 30 mysql -h 127.0.0.1 -D system -u root --password=USDMYSQL_ROOT_PASSWORD < /var/lib/mysqldump/data/dump.sql 1 container: system-mysql execTimeout: 80s onError: Fail waitTimeout: 5m itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-mysql.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m",
"oc get pvc -n threescale",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m",
"oc delete deployment backend-redis -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"backend-redis\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-backend namespace: openshift-adp spec: backupName: redis-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-backend.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc get deployment -n threescale",
"./scaledeployment.sh",
"oc get routes -n threescale",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD backend backend-3scale.apps.custom-cluster-name.openshift.com backend-listener http edge/Allow None zync-3scale-api-b4l4d api-3scale-apicast-production.apps.custom-cluster-name.openshift.com apicast-production gateway edge/Redirect None zync-3scale-api-b6sns api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com apicast-staging gateway edge/Redirect None zync-3scale-master-7sc4j master.apps.custom-cluster-name.openshift.com system-master http edge/Redirect None zync-3scale-provider-7r2nm 3scale-admin.apps.custom-cluster-name.openshift.com system-provider http edge/Redirect None zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com system-developer http edge/Redirect None",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI",
"kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s 3 volumeSnapshotLocations: - dpa-sample-1",
"Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: no space left on device",
"oc create -f backup.yaml",
"oc get datauploads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal",
"oc get datauploads <dataupload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: \"\" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: \"2023-11-02T16:57:02Z\" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: \"2023-11-02T16:56:22Z\"",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup>",
"oc create -f restore.yaml",
"oc get datadownloads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal",
"oc get datadownloads <datadownload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: \"\" pvc: mysql status: completionTimestamp: \"2023-11-02T17:01:24Z\" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: \"2023-11-02T17:00:52Z\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<aws_s3_access_key>\" \\ 4 --secret-access-key=\"<aws_s3_secret_access_key>\" \\ 5",
"kopia repository status",
"Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> Storage type: s3 Storage capacity: unbounded Storage config: { \"bucket\": <bucket_name>, \"prefix\": \"velero/kopia/<application_namespace>/\", \"endpoint\": \"s3.amazonaws.com\", \"accessKeyID\": <access_key>, \"secretAccessKey\": \"****************************************\", \"sessionToken\": \"\" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3",
"apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: [\"sleep\"] args: [\"infinity\"]",
"oc apply -f <pod_config_file_name> 1",
"oc describe pod/oadp-mustgather-pod | grep scc",
"openshift.io/scc: anyuid",
"oc -n openshift-adp rsh pod/oadp-mustgather-pod",
"sh-5.1# kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<access_key>\" \\ 4 --secret-access-key=\"<secret_access_key>\" \\ 5 --endpoint=<bucket_endpoint> \\ 6",
"sh-5.1# kopia benchmark hashing",
"Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256",
"sh-5.1# kopia benchmark encryption",
"Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256",
"sh-5.1# kopia benchmark splitter",
"splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"oc describe <velero_cr> <cr_name>",
"oc logs pod/<velero>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi",
"requests: cpu: 500m memory: 128Mi",
"Velero: pod volume restore failed: data path restore failed: Failed to run kopia restore: Failed to copy snapshot data to the target: restore error: copy file: error creating file: open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: no such file or directory",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: \"USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}\" 1 onDelete: delete",
"velero restore <restore_name> --from-backup=<backup_name> --include-resources service.serving.knavtive.dev",
"oc get mutatingwebhookconfigurations",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"oc get backupstoragelocations.velero.io -A",
"velero backup-location get -n <OADP_Operator_namespace>",
"oc get backupstoragelocations.velero.io -n <namespace> -o yaml",
"apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: \"2023-11-03T19:49:04Z\" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: \"24273698\" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: \"true\" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: \"2023-11-10T22:06:46Z\" message: \"BackupStorageLocation \\\"example-dpa-1\\\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54\" phase: Unavailable kind: List metadata: resourceVersion: \"\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>",
"oc delete backups.velero.io <backup> -n openshift-adp",
"velero backup describe <backup-name> --details",
"time=\"2023-02-17T16:33:13Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/user1-backup-check5 error=\"error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label\" logSource=\"/remote-source/velero/app/pkg/backup/backup.go:417\" name=busybox-79799557b5-vprq",
"oc delete backups.velero.io <backup> -n openshift-adp",
"oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1",
"oc delete resticrepository openshift-adp <name_of_the_restic_repository>",
"time=\"2021-12-29T18:29:14Z\" level=info msg=\"1 errors encountered backup up item\" backup=velero/backup65 logSource=\"pkg/backup/backup.go:431\" name=mysql-7d99fc949-qbkds time=\"2021-12-29T18:29:14Z\" level=error msg=\"Error backing up item\" backup=velero/backup65 error=\"pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\\nIs there a repository at the following location?\\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \\n: exit status 1\" error.file=\"/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184\" error.function=\"github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes\" logSource=\"pkg/backup/backup.go:435\" name=mysql-7d99fc949-qbkds",
"\\\"level=error\\\" in line#2273: time=\\\"2023-06-12T06:50:04Z\\\" level=error msg=\\\"error restoring mysql-869f9f44f6-tp5lv: pods\\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity\\\\ \"restricted:v1.24\\\\\\\": privil eged (container \\\\\\\"mysql\\\\ \" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/restore/restore.go:1388\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n velero container contains \\\"level=error\\\" in line#2447: time=\\\"2023-06-12T06:50:05Z\\\" level=error msg=\\\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity \\\\\\\"restricted:v1.24\\\\\\\": privileged (container \\\\ \"mysql\\\\\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\",\\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n]\",",
"oc get dpa -o yaml",
"configuration: restic: enable: true velero: args: restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' 1 defaultPlugins: - gcp - openshift",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls true",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls true",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata:",
"oc get pods -n openshift-user-workload-monitoring",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s",
"oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring",
"Error from server (NotFound): configmaps \"user-workload-monitoring-config\" not found",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |",
"oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created",
"oc get svc -n openshift-adp -l app.kubernetes.io/name=velero",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: \"velero\"",
"oc apply -f 3_create_oadp_service_monitor.yaml",
"servicemonitor.monitoring.coreos.com/oadp-service-monitor created",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job=\"openshift-adp-velero-metrics-svc\"}[2h]) > 0 for: 5m labels: severity: warning",
"oc apply -f 4_create_oadp_alert_rule.yaml",
"prometheusrule.monitoring.coreos.com/sample-oadp-alert created",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"oc api-resources",
"apiVersion: oadp.openshift.io/vialpha1 kind: DataProtectionApplication spec: configuration: velero: featureFlags: - EnableAPIGroupVersions",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>",
"cat change-storageclass.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: \"\" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi",
"oc create -f change-storage-class-config"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/backup_and_restore/oadp-application-backup-and-restore |
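The DataUpload and DataDownload status checks listed above lend themselves to simple scripting. The following is a minimal bash sketch, not taken from the product documentation: it polls a DataUpload until it reaches a terminal phase, the resource name is copied from the example output above, and the terminal phase names assume the velero.io/v2alpha1 API shown there, so substitute your own values.

# Poll a DataUpload until it finishes (sketch only; adjust the name, namespace, and interval).
DATAUPLOAD=backup-test-1-sw76b        # placeholder taken from the example output above
while true; do
  PHASE=$(oc -n openshift-adp get datauploads "${DATAUPLOAD}" -o jsonpath='{.status.phase}')
  echo "DataUpload ${DATAUPLOAD} phase: ${PHASE:-<none>}"
  case "${PHASE}" in
    Completed|Failed|Canceled) break ;;   # assumed terminal phases
  esac
  sleep 10
done

The same loop works for a DataDownload during restore by swapping the resource type in the oc get command.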
Chapter 34. federation | Chapter 34. federation This chapter describes the commands under the federation command. 34.1. federation domain list List accessible domains Usage: Table 34.1. Command arguments Value Summary -h, --help Show this help message and exit Table 34.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 34.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 34.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.2. federation project list List accessible projects Usage: Table 34.6. Command arguments Value Summary -h, --help Show this help message and exit Table 34.7. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 34.8. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 34.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.10. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.3. federation protocol create Create new federation protocol Usage: Table 34.11. Positional arguments Value Summary <name> New federation protocol name (must be unique per identity provider) Table 34.12. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that will support the new federation protocol (name or ID) (required) --mapping <mapping> Mapping that is to be used (name or id) (required) Table 34.13. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 34.14. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.15. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 34.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.4. federation protocol delete Delete federation protocol(s) Usage: Table 34.17. Positional arguments Value Summary <federation-protocol> Federation protocol(s) to delete (name or id) Table 34.18. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that supports <federation-protocol> (name or ID) (required) 34.5. federation protocol list List federation protocols Usage: Table 34.19. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider to list (name or id) (required) Table 34.20. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 34.21. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 34.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.6. federation protocol set Set federation protocol properties Usage: Table 34.24. Positional arguments Value Summary <name> Federation protocol to modify (name or id) Table 34.25. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that supports <federation-protocol> (name or ID) (required) --mapping <mapping> Mapping that is to be used (name or id) 34.7. federation protocol show Display federation protocol details Usage: Table 34.26. Positional arguments Value Summary <federation-protocol> Federation protocol to display (name or id) Table 34.27. Command arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that supports <federation-protocol> (name or ID) (required) Table 34.28. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 34.29. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 34.30. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 34.31. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack federation domain list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack federation project list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack federation protocol create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --identity-provider <identity-provider> --mapping <mapping> <name>",
"openstack federation protocol delete [-h] --identity-provider <identity-provider> <federation-protocol> [<federation-protocol> ...]",
"openstack federation protocol list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] --identity-provider <identity-provider>",
"openstack federation protocol set [-h] --identity-provider <identity-provider> [--mapping <mapping>] <name>",
"openstack federation protocol show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --identity-provider <identity-provider> <federation-protocol>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/federation |
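As a brief, hedged illustration of the protocol subcommands above: the identity provider "myidp" and mapping "saml2_mapping" are hypothetical and must already exist in Keystone, and "saml2" is the new protocol name.

# Create a federation protocol, then list and inspect it.
openstack federation protocol create --identity-provider myidp --mapping saml2_mapping saml2
openstack federation protocol list --identity-provider myidp
openstack federation protocol show --identity-provider myidp saml2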
Chapter 5. Using Red Hat Quay | Chapter 5. Using Red Hat Quay The following steps show you how to use the interface to create new organizations and repositories, and to search and browse existing repositories. Following step 3, you can use the command line interface to interact with the registry and to push and pull images. Procedure Use your browser to access the user interface for the Red Hat Quay registry at http://quay-server.example.com , assuming you have configured quay-server.example.com as your hostname in your /etc/hosts file and in your config.yaml file. Click Create Account and add a user, for example, quayadmin with a password password . From the command line, log in to the registry: USD sudo podman login --tls-verify=false quay-server.example.com Example output Username: quayadmin Password: password Login Succeeded! 5.1. Pushing and pulling images on Red Hat Quay Use the following procedure to push and pull images to your Red Hat Quay registry. Procedure To test pushing and pulling images from the Red Hat Quay registry, first pull a sample image from an external registry: USD sudo podman pull busybox Example output Trying to pull docker.io/library/busybox... Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9 Enter the following command to see the local copy of the image: USD sudo podman images Example output REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/busybox latest 22667f53682a 14 hours ago 1.45 MB Enter the following command to tag this image, which prepares the image for pushing it to the registry: USD sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test Push the image to your registry. Following this step, you can use your browser to see the tagged image in your repository. USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test Example output Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures To test access to the image from the command line, first delete the local copy of the image: USD sudo podman rmi quay-server.example.com/quayadmin/busybox:test Example output Untagged: quay-server.example.com/quayadmin/busybox:test Pull the image again, this time from your Red Hat Quay registry: USD sudo podman pull --tls-verify=false quay-server.example.com/quayadmin/busybox:test Example output Trying to pull quay-server.example.com/quayadmin/busybox:test... Getting image source signatures Copying blob 6ef22a7134ba [--------------------------------------] 0.0b / 0.0b Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9 5.2. Accessing the superuser admin panel If you added a superuser to your config.yaml file, you can access the Superuser Admin Panel on the Red Hat Quay UI by using the following procedure. Prerequisites You have configured a superuser. Procedure Access the Superuser Admin Panel on the Red Hat Quay UI by clicking on the current user's name or avatar in the navigation pane of the UI. Then, click Superuser Admin Panel . On this page, you can manage users, your organization, service keys, view change logs, view usage logs, and create global messages for your organization. | [
"sudo podman login --tls-verify=false quay-server.example.com",
"Username: quayadmin Password: password Login Succeeded!",
"sudo podman pull busybox",
"Trying to pull docker.io/library/busybox Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9",
"sudo podman images",
"REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/busybox latest 22667f53682a 14 hours ago 1.45 MB",
"sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test",
"sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test",
"Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures",
"sudo podman rmi quay-server.example.com/quayadmin/busybox:test",
"Untagged: quay-server.example.com/quayadmin/busybox:test",
"sudo podman pull --tls-verify=false quay-server.example.com/quayadmin/busybox:test",
"Trying to pull quay-server.example.com/quayadmin/busybox:test Getting image source signatures Copying blob 6ef22a7134ba [--------------------------------------] 0.0b / 0.0b Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/proof_of_concept_-_deploying_red_hat_quay/use-quay-poc |
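The push-and-pull walkthrough above can be condensed into a single shell session. This is a sketch only; it assumes the quay-server.example.com hostname, the quayadmin account, and the test tag used throughout this chapter.

# Round trip: log in, pull a sample image, push it to Quay, then pull it back to verify access.
REGISTRY=quay-server.example.com
sudo podman login --tls-verify=false "${REGISTRY}"                        # prompts for quayadmin credentials
sudo podman pull busybox
sudo podman tag docker.io/library/busybox "${REGISTRY}/quayadmin/busybox:test"
sudo podman push --tls-verify=false "${REGISTRY}/quayadmin/busybox:test"
sudo podman rmi "${REGISTRY}/quayadmin/busybox:test"                      # drop the local copy
sudo podman pull --tls-verify=false "${REGISTRY}/quayadmin/busybox:test"  # pull back from the registry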
Troubleshooting OpenShift Data Foundation | Troubleshooting OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.18 Instructions on troubleshooting OpenShift Data Foundation Red Hat Storage Documentation Team Abstract Read this document for instructions on troubleshooting Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Overview Troubleshooting OpenShift Data Foundation is written to help administrators understand how to troubleshoot and fix their Red Hat OpenShift Data Foundation cluster. Most troubleshooting tasks focus on either a fix or a workaround. This document is divided into chapters based on the errors that an administrator may encounter: Chapter 2, Downloading log files and diagnostic information using must-gather shows you how to use the must-gather utility in OpenShift Data Foundation. Chapter 4, Commonly required logs for troubleshooting shows you how to obtain commonly required log files for OpenShift Data Foundation. Chapter 7, Troubleshooting alerts and errors in OpenShift Data Foundation shows you how to identify the encountered error and perform required actions. Warning Red Hat does not support running Ceph commands in OpenShift Data Foundation clusters (unless indicated by Red Hat support or Red Hat documentation) as it can cause data loss if you run the wrong commands. In that case, the Red Hat support team is only able to provide commercially reasonable effort and may not be able to restore all the data in case of any data loss. Chapter 2. Downloading log files and diagnostic information using must-gather If Red Hat OpenShift Data Foundation is unable to automatically resolve a problem, use the must-gather tool to collect log files and diagnostic information so that you or Red Hat support can review the problem and determine a solution. Important When Red Hat OpenShift Data Foundation is deployed in external mode, must-gather only collects logs from the OpenShift Data Foundation cluster and does not collect debug data and logs from the external Red Hat Ceph Storage cluster. To collect debug logs from the external Red Hat Ceph Storage cluster, see Red Hat Ceph Storage Troubleshooting guide and contact your Red Hat Ceph Storage Administrator. Prerequisites Optional: If OpenShift Data Foundation is deployed in a disconnected environment, ensure that you mirror the individual must-gather image to the mirror registry available from the disconnected environment. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. <path-to-the-registry-config> Is the path to your registry credentials, by default it is ~/.docker/config.json . 
--insecure Add this flag only if the mirror registry is insecure. For more information, see the Red Hat Knowledgebase solutions: How to mirror images between Redhat Openshift registries Failed to mirror OpenShift image repository when private registry is insecure Procedure Run the must-gather command from the client connected to the OpenShift Data Foundation cluster: <directory-name> Is the name of the directory where you want to write the data to. Important For a disconnected environment deployment, replace the image in --image parameter with the mirrored must-gather image. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. This collects the following information in the specified directory: All Red Hat OpenShift Data Foundation cluster related Custom Resources (CRs) with their namespaces. Pod logs of all the Red Hat OpenShift Data Foundation related pods. Output of some standard Ceph commands like Status, Cluster health, and others. 2.1. Variations of must-gather-commands If one or more master nodes are not in the Ready state, use --node-name to provide a master node that is Ready so that the must-gather pod can be safely scheduled. If you want to gather information from a specific time: To specify a relative time period for logs gathered, such as within 5 seconds or 2 days, add /usr/bin/gather since=<duration> : To specify a specific time to gather logs after, add /usr/bin/gather since-time=<rfc3339-timestamp> : Replace the example values in these commands as follows: <node-name> If one or more master nodes are not in the Ready state, use this parameter to provide the name of a master node that is still in the Ready state. This avoids scheduling errors by ensuring that the must-gather pod is not scheduled on a master node that is not ready. <directory-name> The directory to store information collected by must-gather . <duration> Specify the period of time to collect information from as a relative duration, for example, 5h (starting from 5 hours ago). <rfc3339-timestamp> Specify the period of time to collect information from as an RFC 3339 timestamp, for example, 2020-11-10T04:00:00+00:00 (starting from 4 am UTC on 11 Nov 2020). 2.2. Running must-gather in modular mode Red Hat OpenShift Data Foundation must-gather can take a long time to run in some environments. To avoid this, run must-gather in modular mode and collect only the resources you require using the following command: Replace < -arg> with one or more of the following arguments to specify the resources for which the must-gather logs is required. -o , --odf ODF logs (includes Ceph resources, namespaced resources, clusterscoped resources and Ceph logs) -d , --dr DR logs -n , --noobaa Noobaa logs -c , --ceph Ceph commands and pod logs -cl , --ceph-logs Ceph daemon, kernel, journal logs, and crash reports -ns , --namespaced namespaced resources -cs , --clusterscoped clusterscoped resources -pc , --provider openshift-storage-client logs from a provider/consumer cluster (includes all the logs under operator namespace, pods, deployments, secrets, configmap, and other resources) -h , --help Print help message Note If no < -arg> is included, must-gather will collect all logs. Chapter 3. Using odf-cli command odf-cli command and its subcommands help to reduce repetitive tasks and provide better experience. You can download the odf-cli tool from the customer portal . 3.1. 
Subcommands of odf get command odf get recovery-profile Displays the recovery-profile value set for the OSD. By default, an empty value is displayed if the value is not set using the odf set recovery-profile command. After the value is set, the appropriate value is displayed. Example : odf get health Checks the health of the Ceph cluster and common configuration issues. This command checks for the following: At least three mon pods are running on different nodes Mon quorum and Ceph health details At least three OSD pods are running on different nodes The 'Running' status of all pods Placement group status At least one MGR pod is running Example : odf get dr-health In mirroring-enabled clusters, fetches the connection status of a cluster from another cluster. The cephblockpool is queried with mirroring-enabled and If not found will exit with relevant logs. Example : odf get dr-prereq Checks and fetches the status of all the prerequisites to enable Disaster Recovery on a pair of clusters. The command takes the peer cluster name as an argument and uses it to compare current cluster configuration with the peer cluster configuration. Based on the comparison results, the status of the prerequisites is shown. Example 3.2. Subcommands of odf operator command odf operator rook set Sets the provided property value in the rook-ceph-operator config configmap Example : where, ROOK_LOG_LEVEL can be DEBUG , INFO , or WARNING odf operator rook restart Restarts the Rook-Ceph operator Example : odf restore mon-quorum Restores the mon quorum when the majority of mons are not in quorum and the cluster is down. When the majority of mons are lost permanently, the quorum needs to be restored to a remaining good mon in order to bring the Ceph cluster up again. Example : odf restore deleted <crd> Restores the deleted Rook CR when there is still data left for the components, CephClusters, CephFilesystems, and CephBlockPools. Generally, when Rook CR is deleted and there is leftover data, the Rook operator does not delete the CR to ensure data is not lost and the operator does not remove the finalizer on the CR. As a result, the CR is stuck in the Deleting state and cluster health is not ensured. Upgrades are blocked too. This command helps to repair the CR without the cluster downtime. Note A warning message seeking confirmation to restore appears. After confirming, you need to enter continue to start the operator and expand to the full mon-quorum again. Example: 3.3. Configuring debug verbosity of Ceph components You can configure verbosity of Ceph components by enabling or increasing the log debugging for a specific Ceph subsystem from OpenShift Data Foundation. For information about the Ceph subsystems and the log levels that can be updated, see Ceph subsystems default logging level values . Procedure Set log level for Ceph daemons: where ceph-subsystem can be osd , mds , or mon . For example, Chapter 4. Commonly required logs for troubleshooting Some of the commonly used logs for troubleshooting OpenShift Data Foundation are listed, along with the commands to generate them. Generating logs for a specific pod: Generating logs for Ceph or OpenShift Data Foundation cluster: Important Currently, the rook-ceph-operator logs do not provide any information about the failure and this acts as a limitation in troubleshooting issues, see Enabling and disabling debug logs for rook-ceph-operator . 
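to raise the debug verbosity of an OSD log subsystem (a hedged illustration only: the exact odf subcommand and argument order are assumed here and should be confirmed against the odf command help before use):

# Assumed syntax: odf set ceph log-level <ceph-subsystem> <log-subsystem> <log-level>
# "osd" is the Ceph daemon, "crush" the log subsystem, and 20 the debug level -- all illustrative values.
odf set ceph log-level osd crush 20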
Generating logs for plugin pods like cephfs or rbd to detect any problem in the PVC mount of the app-pod: To generate logs for all the containers in the CSI pod: Generating logs for cephfs or rbd provisioner pods to detect problems if PVC is not in BOUND state: To generate logs for all the containers in the CSI pod: Generating OpenShift Data Foundation logs using cluster-info command: When using Local Storage Operator, generating logs can be done using cluster-info command: Check the OpenShift Data Foundation operator logs and events. To check the operator logs : <ocs-operator> To check the operator events : Get the OpenShift Data Foundation operator version and channel. Example output : Example output : Confirm that the installplan is created. Verify the image of the components post updating OpenShift Data Foundation. Check the node on which the pod of the component you want to verify the image is running. For Example : Example output: dell-r440-12.gsslab.pnq2.redhat.com is the node-name . Check the image ID. <node-name> Is the name of the node on which the pod of the component you want to verify the image is running. For Example : Take a note of the IMAGEID and map it to the Digest ID on the Rook Ceph Operator page. Additional resources Using must-gather 4.1. Adjusting verbosity level of logs The amount of space consumed by debugging logs can become a significant issue. Red Hat OpenShift Data Foundation offers a method to adjust, and therefore control, the amount of storage to be consumed by debugging logs. In order to adjust the verbosity levels of debugging logs, you can tune the log levels of the containers responsible for container storage interface (CSI) operations. In the container's yaml file, adjust the following parameters to set the logging levels: CSI_LOG_LEVEL - defaults to 5 CSI_SIDECAR_LOG_LEVEL - defaults to 1 The supported values are 0 through 5 . Use 0 for general useful logs, and 5 for trace level verbosity. Chapter 5. Overriding the cluster-wide default node selector for OpenShift Data Foundation post deployment When a cluster-wide default node selector is used for OpenShift Data Foundation, the pods generated by container storage interface (CSI) daemonsets are able to start only on the nodes that match the selector. To be able to use OpenShift Data Foundation from nodes which do not match the selector, override the cluster-wide default node selector by performing the following steps in the command line interface : Procedure Specify a blank node selector for the openshift-storage namespace. Delete the original pods generated by the DaemonSets. Chapter 6. Encryption token is deleted or expired Use this procedure to update the token if the encryption token for your key management system gets deleted or expires. Prerequisites Ensure that you have a new token with the same policy as the deleted or expired token Procedure Log in to OpenShift Container Platform Web Console. Click Workloads -> Secrets To update the ocs-kms-token used for cluster wide encryption: Set the Project to openshift-storage . Click ocs-kms-token -> Actions -> Edit Secret . Drag and drop or upload your encryption token file in the Value field. The token can either be a file or text that can be copied and pasted. Click Save . To update the ceph-csi-kms-token for a given project or namespace with encrypted persistent volumes: Select the required Project . Click ceph-csi-kms-token -> Actions -> Edit Secret . Drag and drop or upload your encryption token file in the Value field. 
The token can either be a file or text that can be copied and pasted. Click Save . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. Chapter 7. Troubleshooting alerts and errors in OpenShift Data Foundation 7.1. Resolving alerts and errors Red Hat OpenShift Data Foundation can detect and automatically resolve a number of common failure scenarios. However, some problems require administrator intervention. To know the errors currently firing, check one of the following locations: Observe -> Alerting -> Firing option Home -> Overview -> Cluster tab Storage -> Data Foundation -> Storage System -> storage system link in the pop up -> Overview -> Block and File tab Storage -> Data Foundation -> Storage System -> storage system link in the pop up -> Overview -> Object tab Copy the error displayed and search it in the following section to know its severity and resolution: Name : CephMonVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ USDvalue }} different versions of Ceph Mon components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephOSDVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ USDvalue }} different versions of Ceph OSD components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephClusterCriticallyFull Message : Storage cluster is critically full and needs immediate expansion Description : Storage cluster utilization has crossed 85%. Severity : Crtical Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : CephClusterNearFull Fixed : Storage cluster is nearing full. Expansion is required. Description : Storage cluster utilization has crossed 75%. Severity : Warning Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : NooBaaBucketErrorState Message : A NooBaa Bucket Is In Error State Description : A NooBaa bucket {{ USDlabels.bucket_name }} is in error state for more than 6m Severity : Warning Resolution : Workaround Procedure : Finding the error code of an unhealthy bucket Name : NooBaaNamespaceResourceErrorState Message : A NooBaa Namespace Resource Is In Error State Description : A NooBaa namespace resource {{ USDlabels.namespace_resource_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy namespace store resource Name : NooBaaNamespaceBucketErrorState Message : A NooBaa Namespace Bucket Is In Error State Description : A NooBaa namespace bucket {{ USDlabels.bucket_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy bucket Name : CephMdsMissingReplicas Message : Insufficient replicas for storage metadata service. Description : `Minimum required replicas for storage metadata service not available. Might affect the working of storage cluster.` Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. 
If the issue cannot be identified, contact Red Hat support . Name : CephMgrIsAbsent Message : Storage metrics collector service not available anymore. Description : Ceph Manager has disappeared from Prometheus target discovery. Severity : Critical Resolution : Contact Red Hat support Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Once the upgrade is complete, check for alerts and operator status. If the issue persists or cannot be identified, contact Red Hat support . Name : CephNodeDown Message : Storage node {{ USDlabels.node }} went down Description : Storage node {{ USDlabels.node }} went down. Check the node immediately. Severity : Critical Resolution : Contact Red Hat support Procedure : Check which node stopped functioning and its cause. Take appropriate actions to recover the node. If node cannot be recovered: See Replacing storage nodes for Red Hat OpenShift Data Foundation Contact Red Hat support . Name : CephClusterErrorState Message : Storage cluster is in error state Description : Storage cluster is in error state for more than 10m. Severity : Critical Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephClusterWarningState Message : Storage cluster is in degraded state Description : Storage cluster is in warning state for more than 10m. Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephDataRecoveryTakingTooLong Message : Data recovery is slow Description : Data recovery has been active for too long. Severity : Warning Resolution : Contact Red Hat support Name : CephOSDDiskNotResponding Message : Disk not responding Description : Disk device {{ USDlabels.device }} not responding, on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephOSDDiskUnavailable Message : Disk not accessible Description : Disk device {{ USDlabels.device }} not accessible on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephPGRepairTakingTooLong Message : Self heal problems detected Description : Self heal operations taking too long. Severity : Warning Resolution : Contact Red Hat support Name : CephMonHighNumberOfLeaderChanges Message : Storage Cluster has seen many leader changes recently. Description : 'Ceph Monitor "{{ USDlabels.job }}": instance {{ USDlabels.instance }} has seen {{ USDvalue printf "%.2f" }} leader changes per minute recently.' Severity : Warning Resolution : Contact Red Hat support Name : CephMonQuorumAtRisk Message : Storage quorum at risk Description : Storage cluster quorum is low. Severity : Critical Resolution : Contact Red Hat support Name : ClusterObjectStoreState Message : Cluster Object Store is in an unhealthy state. Check Ceph cluster health . Description : Cluster Object Store is in an unhealthy state for more than 15s. Check Ceph cluster health . 
Severity : Critical Resolution : Contact Red Hat support Procedure : Check the CephObjectStore CR instance. Contact Red Hat support . Name : CephOSDFlapping Message : Storage daemon osd.x has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause . Description : Storage OSD restarts more than 5 times in 5 minutes . Severity : Critical Resolution : Contact Red Hat support Name : OdfPoolMirroringImageHealth Message : Mirroring image(s) (PV) in the pool <pool-name> are in Warning state for more than a 1m. Mirroring might not work as expected. Description : Disaster recovery is failing for one or a few applications. Severity : Warning Resolution : Contact Red Hat support Name : OdfMirrorDaemonStatus Message : Mirror daemon is unhealthy . Description : Disaster recovery is failing for the entire cluster. Mirror daemon is in an unhealthy status for more than 1m. Mirroring on this cluster is not working as expected. Severity : Critical Resolution : Contact Red Hat support 7.2. Resolving cluster health issues There is a finite set of possible health messages that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health checks which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health code below for more information and troubleshooting. Health code Description MON_DISK_LOW One or more Ceph Monitors are low on disk space. 7.2.1. MON_DISK_LOW This alert triggers if the available space on the file system storing the monitor database as a percentage, drops below mon_data_avail_warn (default: 15%). This may indicate that some other process or user on the system is filling up the same file system used by the monitor. It may also indicate that the monitor's database is large. Note The paths to the file system differ depending on the deployment of your mons. You can find the path to where the mon is deployed in storagecluster.yaml . Example paths: Mon deployed over PVC path: /var/lib/ceph/mon Mon deployed over hostpath: /var/lib/rook/mon In order to clear up space, view the high usage files in the file system and choose which to delete. To view the files, run: Replace <path-in-the-mon-node> with the path to the file system where mons are deployed. 7.3. Resolving cluster alerts There is a finite set of possible health alerts that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health alerts which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health alert for more information and troubleshooting. Table 7.1. Types of cluster health alerts Health alert Overview CephClusterCriticallyFull Storage cluster utilization has crossed 80%. CephClusterErrorState Storage cluster is in an error state for more than 10 minutes. CephClusterNearFull Storage cluster is nearing full capacity. Data deletion or cluster expansion is required. CephClusterReadOnly Storage cluster is read-only now and needs immediate data deletion or cluster expansion. CephClusterWarningState Storage cluster is in a warning state for more than 10 mins. CephDataRecoveryTakingTooLong Data recovery has been active for too long. 
CephMdsCacheUsageHigh Ceph metadata service (MDS) cache usage for the MDS daemon has exceeded 95% of the mds_cache_memory_limit . CephMdsCpuUsageHigh Ceph MDS CPU usage for the MDS daemon has exceeded the threshold for adequate performance. CephMdsMissingReplicas Minimum required replicas for storage metadata service not available. Might affect the working of the storage cluster. CephMgrIsAbsent Ceph Manager has disappeared from Prometheus target discovery. CephMgrIsMissingReplicas Ceph manager is missing replicas. This impacts health status reporting and will cause some of the information reported by the ceph status command to be missing or stale. In addition, the Ceph manager is responsible for a manager framework aimed at expanding the existing capabilities of Ceph. CephMonHighNumberOfLeaderChanges The Ceph monitor leader is being changed an unusual number of times. CephMonQuorumAtRisk Storage cluster quorum is low. CephMonQuorumLost The number of monitor pods in the storage cluster is not enough. CephMonVersionMismatch There are different versions of Ceph Mon components running. CephNodeDown A storage node went down. Check the node immediately. The alert should contain the node name. CephOSDCriticallyFull Utilization of back-end Object Storage Device (OSD) has crossed 80%. Free up some space immediately or expand the storage cluster or contact support. CephOSDDiskNotResponding A disk device is not responding on one of the hosts. CephOSDDiskUnavailable A disk device is not accessible on one of the hosts. CephOSDFlapping Ceph storage OSD flapping. CephOSDNearFull One of the OSD storage devices is nearing full. CephOSDSlowOps OSD requests are taking too long to process. CephOSDVersionMismatch There are different versions of Ceph OSD components running. CephPGRepairTakingTooLong Self-healing operations are taking too long. CephPoolQuotaBytesCriticallyExhausted Storage pool quota usage has crossed 90%. CephPoolQuotaBytesNearExhaustion Storage pool quota usage has crossed 70%. OSDCPULoadHigh CPU usage in the OSD container on a specific pod has exceeded 80%, potentially affecting the performance of the OSD. PersistentVolumeUsageCritical Persistent Volume Claim usage has exceeded more than 85% of its capacity. PersistentVolumeUsageNearFull Persistent Volume Claim usage has exceeded more than 75% of its capacity. 7.3.1. CephClusterCriticallyFull Meaning Storage cluster utilization has crossed 80% and will become read-only at 85%. Your Ceph cluster will become read-only once utilization crosses 85%. Free up some space or expand the storage cluster immediately. It is common to see alerts related to Object Storage Device (OSD) full or near full prior to this alert. Impact High Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information to free up some space. 7.3.2. CephClusterErrorState Meaning This alert reflects that the storage cluster is in ERROR state for an unacceptable amount of time and this impacts the storage availability. Check for other alerts that would have triggered prior to this one and troubleshoot those alerts first.
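A quick way to review the overall Ceph health and recent events before digging into individual pods is to query the toolbox pod. This is a minimal sketch; it assumes the rook-ceph toolbox (the rook-ceph-tools deployment) has been enabled in the openshift-storage namespace:
# Review the detailed Ceph health report from the toolbox pod
oc -n openshift-storage exec -it deploy/rook-ceph-tools -- ceph health detail
# List the most recent events in the storage namespace for failing components
oc -n openshift-storage get events --sort-by=.metadata.creationTimestamp | tail -n 20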
Impact Critical Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.3. CephClusterNearFull Meaning Storage cluster utilization has crossed 75% and will become read-only at 85%. Free up some space or expand the storage cluster. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.4. CephClusterReadOnly Meaning Storage cluster utilization has crossed 85% and will become read-only now. Free up some space or expand the storage cluster immediately. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.5. CephClusterWarningState Meaning This alert reflects that the storage cluster has been in a warning state for an unacceptable amount of time. While the storage operations will continue to function in this state, it is recommended to fix the errors so that the cluster does not get into an error state. Check for other alerts that might have triggered prior to this one and troubleshoot those alerts first. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.6. CephDataRecoveryTakingTooLong Meaning Data recovery is slow. Check whether all the Object Storage Devices (OSDs) are up and running. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. 
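A hedged sketch of this step and the checks that follow; the pod name is a placeholder:
# Set MYPOD to the problem pod identified above (placeholder name)
MYPOD=<pod_name>
# Check the resource requests and limits defined for the pod
oc -n openshift-storage get pod/${MYPOD} -o yaml | grep -A6 resources
# Check node assignment and the current phase
oc -n openshift-storage get pod/${MYPOD} -o wide
# Review events for scheduling, PVC binding, or kubelet problems
oc -n openshift-storage describe pod/${MYPOD}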
Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.7. CephMdsCacheUsageHigh Meaning When the storage metadata service (MDS) cannot keep its cache usage under the target threshold specified by mds_health_cache_threshold , or 150% of the cache limit set by mds_cache_memory_limit , the MDS sends a health alert to the monitors indicating the cache is too large. As a result, the MDS related operations become slow. Impact High Diagnosis The MDS tries to stay under a reservation of the mds_cache_memory_limit by trimming unused metadata in its cache and recalling cached items in the client caches. It is possible for the MDS to exceed this limit due to slow recall from clients as a result of multiple clients accessing the files. Mitigation Make sure you have enough memory provisioned for MDS cache. Memory resources for the MDS pods need to be updated in the ocs-storageCluster in order to increase the mds_cache_memory_limit . Run the following command to set the memory of MDS pods, for example, 16GB: OpenShift Data Foundation automatically sets mds_cache_memory_limit to half of the MDS pod memory limit. If the memory is set to 8GB using the command, then the operator sets the MDS cache memory limit to 4GB. 7.3.8. CephMdsCpuUsageHigh Meaning The storage metadata service (MDS) serves filesystem metadata. The MDS is crucial for any file creation, rename, deletion, and update operations. MDS by default is allocated two or three CPUs. This does not cause issues as long as there are not too many metadata operations. When the metadata operation load increases enough to trigger this alert, it means the default CPU allocation is unable to cope with load. You need to increase the CPU allocation or run multiple active MDS servers. Impact High Diagnosis Click Workloads -> Pods . Select the corresponding MDS pod and click on the Metrics tab. There you will see the allocated and used CPU. By default, the alert is fired if the used CPU is 67% of allocated CPU for 6 hours. If this is the case, follow the steps in the mitigation section. Mitigation You need to either do a vertical or a horizontal scaling of CPU. For more information, see the Description and Runbook section of the alert. Use the following command to set the number of allocated CPU for MDS, for example, 8: In order to run multiple active MDS servers, use the following command: Make sure you have enough CPU provisioned for MDS depending on the load. Important Always increase the activeMetadataServers by 1 . The scaling of activeMetadataServers works only if you have more than one PV. If there is only one PV that is causing CPU load, look at increasing the CPU resource as described above. 7.3.9. CephMdsMissingReplicas Meaning Minimum required replicas for the storage metadata service (MDS) are not available. MDS is responsible for filing metadata. Degradation of the MDS service can affect how the storage cluster works (related to the CephFS storage class) and should be fixed as soon as possible.
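To confirm how many MDS replicas are actually running, a quick hedged check (the label selector is the usual Rook one and is an assumption):
# List the MDS pods; with one active and one standby-replay MDS, two pods are expected
oc -n openshift-storage get pods -l app=rook-ceph-mds -o wide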
Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.10. CephMgrIsAbsent Meaning Not having a Ceph manager running impacts the monitoring of the cluster. Persistent Volume Claim (PVC) creation and deletion requests should be resolved as soon as possible. Impact High Diagnosis Verify that the rook-ceph-mgr pod is failing, and restart if necessary. If the Ceph mgr pod restart fails, follow the general pod troubleshooting to resolve the issue. Verify that the Ceph mgr pod is failing: Describe the Ceph mgr pod for more details: <pod_name> Specify the rook-ceph-mgr pod name from the step. Analyze the errors related to resource issues. Delete the pod, and wait for the pod to restart: Follow these steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.11. CephMgrIsMissingReplicas Meaning To resolve this alert, you need to determine the cause of the disappearance of the Ceph manager and restart if necessary. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.12. CephMonHighNumberOfLeaderChanges Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader.
A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. Impact Medium Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Print the logs of the affected monitor pod to gather more information about the issue: <rook-ceph-mon-X-yyyy> Specify the name of the affected monitor pod. Alternatively, use the Openshift Web console to open the logs of the affected monitor pod. More information about possible causes is reflected in the log. Perform the general pod troubleshooting steps: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.13. CephMonQuorumAtRisk Meaning Multiple MONs work together to provide redundancy. Each of the MONs keeps a copy of the metadata. The cluster is deployed with 3 MONs, and requires 2 or more MONs to be up and running for quorum and for the storage operations to run. If quorum is lost, access to data is at risk. Impact High Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Perform the following for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.14. CephMonQuorumLost Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. 
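A quick hedged check of the current mon pods and quorum state; it assumes the rook-ceph toolbox is available:
# List the monitor pods and the nodes they are scheduled on
oc -n openshift-storage get pods -l app=rook-ceph-mon -o wide
# Query the quorum status from the toolbox pod
oc -n openshift-storage exec -it deploy/rook-ceph-tools -- ceph quorum_status --format json-pretty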
Impact High Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Alternatively, perform general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.15. CephMonVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnHealthly , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrading in progress. If you determine that the `ocs-operator`is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.16. CephNodeDown Meaning A node running Ceph pods is down. While storage operations will continue to function as Ceph is designed to deal with a node failure, it is recommended to resolve the issue to minimize the risk of another node going down and affecting storage functions. Impact Medium Diagnosis List all the pods that are running and failing: Important Ensure that you meet the OpenShift Data Foundation resource requirements so that the Object Storage Device (OSD) pods are scheduled on the new node. This may take a few minutes as the Ceph cluster recovers data for the failing but now recovering OSD. To watch this recovery in action, ensure that the OSD pods are correctly placed on the new worker node. Check if the OSD pods that were previously failing are now running: If the previously failing OSD pods have not been scheduled, use the describe command and check the events for reasons the pods were not rescheduled. Describe the events for the failing OSD pod: Find the one or more failing OSD pods: In the events section look for the failure reasons, such as the resources are not being met. In addition, you can use the rook-ceph-toolbox to watch the recovery. This step is optional, but is helpful for large Ceph clusters. 
To access the toolbox, run the following command: From the rsh command prompt, run the following, and watch for "recovery" under the io section: Determine if there are failed nodes. Get the list of worker nodes, and check for the node status: Describe the node which is of the NotReady status to get more information about the failure: Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.17. CephOSDCriticallyFull Meaning One of the Object Storage Devices (OSDs) is critically full. Expand the cluster immediately. Impact High Diagnosis Deleting data to free up storage space You can delete data, and the cluster will resolve the alert through self healing processes. Important This is only applicable to OpenShift Data Foundation clusters that are near or full but not in read-only mode. Read-only mode prevents any changes that include deleting data, that is, deletion of Persistent Volume Claim (PVC), Persistent Volume (PV) or both. Expanding the storage capacity Current storage size is less than 1 TB You must first assess the ability to expand. For every 1 TB of storage added, the cluster needs to have 3 nodes each with a minimum available 2 vCPUs and 8 GiB memory. You can increase the storage capacity to 4 TB via the add-on and the cluster will resolve the alert through self healing processes. If the minimum vCPU and memory resource requirements are not met, you need to add 3 additional worker nodes to the cluster. Mitigation If your current storage size is equal to 4 TB, contact Red Hat support. Optional: Run the following command to gather the debugging information for the Ceph cluster: 7.3.18. CephOSDDiskNotResponding Meaning A disk device is not responding. Check whether all the Object Storage Devices (OSDs) are up and running. Impact Medium Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.19. CephOSDDiskUnavailable Meaning A disk device is not accessible on one of the hosts and its corresponding Object Storage Device (OSD) is marked out by the Ceph cluster. This alert is raised when a Ceph node fails to recover within 10 minutes. Impact High Diagnosis Determine the failed node Get the list of worker nodes, and check for the node status: Describe the node which is of NotReady status to get more information on the failure: 7.3.20. CephOSDFlapping Meaning A storage daemon has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause. Impact High Diagnosis Follow the steps in the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide. 
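One quick way to confirm which OSD pod is restarting is to check restart counts and recent pod events; a minimal sketch with an assumed label selector and a placeholder pod name:
# The RESTARTS column identifies the flapping OSD pod
oc -n openshift-storage get pods -l app=rook-ceph-osd -o wide
# Inspect the events of the suspect pod for the restart reason
oc -n openshift-storage describe pod rook-ceph-osd-<id>-<hash>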
Alternatively, follow the steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.21. CephOSDNearFull Meaning Utilization of back-end storage device Object Storage Device (OSD) has crossed 75% on a host. Impact High Mitigation Free up some space in the cluster, expand the storage cluster, or contact Red Hat support. For more information on scaling storage, see the Scaling storage guide . 7.3.22. CephOSDSlowOps Meaning An Object Storage Device (OSD) with slow requests is every OSD that is not able to service the I/O operations per second (IOPS) in the queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds. Impact Medium Diagnosis More information about the slow requests can be obtained using the Openshift console. Access the OSD pod terminal, and run the following commands: Note The number of the OSD is seen in the pod name. For example, in rook-ceph-osd-0-5d86d4d8d4-zlqkx , <0> is the OSD. Mitigation The main causes of the OSDs having slow requests are: Problems with the underlying hardware or infrastructure, such as, disk drives, hosts, racks, or network switches. Use the Openshift monitoring console to find the alerts or errors about cluster resources. This can give you an idea about the root cause of the slow operations in the OSD. Problems with the network. These problems are usually connected with flapping OSDs. See the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide If it is a network issue, escalate to the OpenShift Data Foundation team System load. Use the Openshift console to review the metrics of the OSD pod and the node which is running the OSD. Adding or assigning more resources can be a possible solution. 7.3.23. CephOSDVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnHealthly , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrading in progress. If you determine that the `ocs-operator`is in progress, wait for 5 mins and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. 7.3.24. 
CephPGRepairTakingTooLong Meaning Self-healing operations are taking too long. Impact High Diagnosis Check for inconsistent Placement Groups (PGs), and repair them. For more information, see the Red Hat Knowledgebase solution Handle Inconsistent Placement Groups in Ceph . 7.3.25. CephPoolQuotaBytesCriticallyExhausted Meaning One or more pools has reached, or is very close to reaching, its quota. The threshold to trigger this error condition is controlled by the mon_pool_quota_crit_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.26. CephPoolQuotaBytesNearExhaustion Meaning One or more pools is approaching a configured fullness threshold. One threshold that can trigger this warning condition is the mon_pool_quota_warn_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.27. OSDCPULoadHigh Meaning OSD is a critical component in Ceph storage, responsible for managing data placement and recovery. High CPU usage in the OSD container suggests increased processing demands, potentially leading to degraded storage performance. Impact High Diagnosis Navigate to the Kubernetes dashboard or equivalent. Access the Workloads section and select the relevant pod associated with the OSD alert. Click the Metrics tab to view CPU metrics for the OSD container. Verify that the CPU usage exceeds 80% over a significant period (as specified in the alert configuration). Mitigation If the OSD CPU usage is consistently high, consider taking the following steps: Evaluate the overall storage cluster performance and identify the OSDs contributing to high CPU usage. Increase the number of OSDs in the cluster by adding more new storage devices in the existing nodes or adding new nodes with new storage devices. Review the Scaling storage guide for instructions to help distribute the load and improve overall system performance. 7.3.28. PersistentVolumeUsageCritical Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to in a timely manner. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage -> PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) -> Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.3.29. PersistentVolumeUsageNearFull Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to in a timely manner. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage -> PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) -> Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.4. Finding the error code of an unhealthy bucket Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Object Bucket Claims tab. Look for the object bucket claims (OBCs) that are not in Bound state and click on it.
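If you prefer the command line, a hedged equivalent for spotting OBCs that are not Bound (the obc short name is assumed to be available):
# List ObjectBucketClaims in all namespaces and check the PHASE column
oc get obc -A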
Click the Events tab and do one of the following: Look for events that might hint you about the current state of the bucket. Click the YAML tab and look for related errors around the status and mode sections of the YAML. If the OBC is in Pending state, the error might appear in the product logs. However, in this case, it is recommended to verify that all the variables provided are accurate. 7.5. Finding the error code of an unhealthy namespace store resource Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Namespace Store tab. Look for the namespace store resources that are not in Bound state and click on it. Click the Events tab and do one of the following: Look for events that might hint you about the current state of the resource. Click the YAML tab and look for related errors around the status and mode sections of the YAML. 7.6. Recovering pods When a first node (say NODE1 ) goes to NotReady state because of some issue, the hosted pods that are using PVC with ReadWriteOnce (RWO) access mode try to move to the second node (say NODE2 ) but get stuck due to multi-attach error. In such a case, you can recover MON, OSD, and application pods by using the following steps. Procedure Power off NODE1 (from AWS or vSphere side) and ensure that NODE1 is completely down. Force delete the pods on NODE1 by using the following command: 7.7. Recovering from EBS volume detach When an OSD or MON elastic block storage (EBS) volume where the OSD disk resides is detached from the worker Amazon EC2 instance, the volume gets reattached automatically within one or two minutes. However, the OSD pod gets into a CrashLoopBackOff state. To recover and bring back the pod to Running state, you must restart the EC2 instance. 7.8. Enabling and disabling debug logs for rook-ceph-operator Enable the debug logs for the rook-ceph-operator to obtain information about failures that help in troubleshooting issues. Procedure Enabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: DEBUG parameter in the rook-ceph-operator-config yaml file to enable the debug logs for rook-ceph-operator. Now, the rook-ceph-operator logs consist of the debug information. Disabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: INFO parameter in the rook-ceph-operator-config yaml file to disable the debug logs for rook-ceph-operator. 7.9. Resolving low Ceph monitor count alert The CephMonLowNumber alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate a low Ceph monitor count when your internal mode deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains in the deployment. You can increase the Ceph monitor count to improve the availability of the cluster. Procedure In the CephMonLowNumber alert of the notification panel or Alert Center of OpenShift Web Console, click Configure . In the Configure Ceph Monitor pop up, click Update count. In the pop up, the recommended monitor count depending on the number of failure zones is shown. In the Configure CephMon pop up, update the monitor count value based on the recommended value and click Save changes . 7.10. Troubleshooting unhealthy blocklisted nodes 7.10.1. ODFRBDClientBlocked Meaning This alert indicates that a RADOS Block Device (RBD) client might be blocked by Ceph on a specific node within your Kubernetes cluster.
The blocklisting occurs when the ocs_rbd_client_blocklisted metric reports a value of 1 for the node. Additionally, there are pods in a CreateContainerError state on the same node. The blocklisting can potentially result in the filesystem for the Persistent Volume Claims (PVCs) using RBD becoming read-only. It is crucial to investigate this alert to prevent any disruption to your storage cluster. Impact High Diagnosis The blocklisting of an RBD client can occur due to several factors, such as network or cluster slowness. In certain cases, the exclusive lock contention among three contending clients (workload, mirror daemon, and manager/scheduler) can lead to the blocklist. Mitigation Taint the blocklisted node: In Kubernetes, consider tainting the node that is blocklisted to trigger the eviction of pods to another node. This approach relies on the assumption that the unmounting/unmapping process progresses gracefully. Once the pods have been successfully evicted, the blocklisted node can be untainted, allowing the blocklist to be cleared. The pods can then be moved back to the untainted node. Reboot the blocklisted node: If tainting the node and evicting the pods do not resolve the blocklisting issue, a reboot of the blocklisted node can be attempted. This step may help alleviate any underlying issues causing the blocklist and restore normal functionality. Important Investigating and resolving the blocklist issue promptly is essential to avoid any further impact on the storage cluster. Chapter 8. Checking for Local Storage Operator deployments Red Hat OpenShift Data Foundation clusters with Local Storage Operator are deployed using local storage devices. To find out if your existing cluster with OpenShift Data Foundation was deployed using local storage devices, use the following procedure: Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure By checking the storage class associated with your OpenShift Data Foundation cluster's persistent volume claims (PVCs), you can tell if your cluster was deployed using local storage devices. Check the storage class associated with OpenShift Data Foundation cluster's PVCs with the following command: Check the output. For clusters with Local Storage Operators, the PVCs associated with ocs-deviceset use the storage class localblock . The output looks similar to the following: Additional Resources Deploying OpenShift Data Foundation using local storage devices on VMware Deploying OpenShift Data Foundation using local storage devices on Red Hat Virtualization Deploying OpenShift Data Foundation using local storage devices on bare metal Deploying OpenShift Data Foundation using local storage devices on IBM Power Chapter 9. Removing failed or unwanted Ceph Object Storage devices The failed or unwanted Ceph OSDs (Object Storage Devices) affects the performance of the storage infrastructure. Hence, to improve the reliability and resilience of the storage cluster, you must remove the failed or unwanted Ceph OSDs. If you have any failed or unwanted Ceph OSDs to remove: Verify the Ceph health status. For more information see: Verifying Ceph cluster is healthy . Based on the provisioning of the OSDs, remove failed or unwanted Ceph OSDs. See: Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation . Removing failed or unwanted Ceph OSDs provisioned using local storage devices . If you are using local disks, you can reuse these disks after removing the old OSDs. 9.1. 
Verifying Ceph cluster is healthy Storage health is visible on the Block and File and Object dashboards. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. 9.2. Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation Follow the steps in the procedure to remove the failed or unwanted Ceph Object Storage Devices (OSDs) in dynamically provisioned Red Hat OpenShift Data Foundation. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Scale down the OSD deployment. Get the osd-prepare pod for the Ceph OSD to be removed. Delete the osd-prepare pod. Remove the failed OSD from the cluster. where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete the OSD deployment. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 9.3. Removing failed or unwanted Ceph OSDs provisioned using local storage devices You can remove failed or unwanted Ceph provisioned Object Storage Devices (OSDs) using local storage devices by following the steps in the procedure. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Forcibly, mark the OSD down by scaling the replicas on the OSD deployment to 0. You can skip this step if the OSD is already down due to failure. Remove the failed OSD from the cluster. where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete persistent volume claim (PVC) resources associated with the failed OSD. Get the PVC associated with the failed OSD. Get the persistent volume (PV) associated with the PVC. Get the failed device name. Get the prepare-pod associated with the failed OSD. Delete the osd-prepare pod before removing the associated PVC. Delete the PVC associated with the failed OSD. 
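A hedged sketch of the PVC and PV lookups described above; the OSD ID and the resulting names are placeholders:
# Find the PVC referenced by the failed OSD deployment
oc -n openshift-storage get deployment rook-ceph-osd-<FAILED_OSD_ID> -o yaml | grep ceph.rook.io/pvc
# Find the PV bound to that PVC
oc -n openshift-storage get pvc <pvc-name> -o jsonpath='{.spec.volumeName}'
# Delete the PVC only after the associated osd-prepare pod has been deleted
oc -n openshift-storage delete pvc <pvc-name>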
Remove failed device entry from the LocalVolume custom resource (CR). Log in to node with the failed device. Record the /dev/disk/by-id/<id> for the failed device name. Optional: In case, Local Storage Operator is used for provisioning OSD, login to the machine with {osd-id} and remove the device symlink. Get the OSD symlink for the failed device name. Remove the symlink. Delete the PV associated with the OSD. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 9.4. Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, run the Object Storage Device (OSD) removal job with FORCE_OSD_REMOVAL option to move the OSD to a destroyed state. Note You must use the FORCE_OSD_REMOVAL option only if all the PGs are in active state. If not, PGs must either complete the back filling or further investigate to ensure they are active. Chapter 10. Troubleshooting and deleting remaining resources during Uninstall Occasionally some of the custom resources managed by an operator may remain in "Terminating" status waiting on the finalizer to complete, although you have performed all the required cleanup tasks. In such an event you need to force the removal of such resources. If you do not do so, the resources remain in the Terminating state even after you have performed all the uninstall steps. Check if the openshift-storage namespace is stuck in the Terminating state upon deletion. Output: Check for the NamespaceFinalizersRemaining and NamespaceContentRemaining messages in the STATUS section of the command output and perform the step for each of the listed resources. Example output : Delete all the remaining resources listed in the step. For each of the resources to be deleted, do the following: Get the object kind of the resource which needs to be removed. See the message in the above output. Example : message: Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io Here cephobjectstoreuser.ceph.rook.io is the object kind. Get the Object name corresponding to the object kind. Example : Example output: Patch the resources. Example: Output: Verify that the openshift-storage project is deleted. Output: If the issue persists, reach out to Red Hat Support . Chapter 11. Troubleshooting CephFS PVC creation in external mode If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS Persistent Volume Claim (PVC) creation in external mode. Check for CephFS pvc stuck in Pending status. Example output : Check the output of the oc describe command to see the events for respective pvc. Expected error message is cephfs_metadata/csi.volumes.default/csi.volume.pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx: (1) Operation not permitted) Example output: Check the settings for the <cephfs metadata pool name> (here cephfs_metadata ) and <cephfs data pool name> (here cephfs_data ). For running the command, you will need jq preinstalled in the Red Hat Ceph Storage client node. Set the application type for the CephFS pool. Run the following commands on the Red Hat Ceph Storage client node : Verify if the settings are applied. Check the CephFS PVC status again. 
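A hedged sketch of the "set the application type" and verification steps referenced above; the pool names are the examples used in this chapter:
# Tag the CephFS pools with the cephfs application type (run on the Red Hat Ceph Storage client node)
ceph osd pool application set cephfs_metadata cephfs metadata cephfs
ceph osd pool application set cephfs_data cephfs data cephfs
# Verify that the application settings were applied
ceph osd pool application get cephfs_metadata
ceph osd pool application get cephfs_data
# Check the CephFS PVC status again from the OpenShift side (placeholder names)
oc get pvc <pvc-name> -n <namespace>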
The PVC should now be in Bound state. Example output : Chapter 12. Restoring the monitor pods in OpenShift Data Foundation Restore the monitor pods if all three of them go down, and when OpenShift Data Foundation is not able to recover the monitor pods automatically. Note This is a disaster recovery procedure and must be performed under the guidance of the Red Hat support team. Contact Red Hat support team on, Red Hat support . Procedure Scale down the rook-ceph-operator and ocs operator deployments. Create a backup of all deployments in openshift-storage namespace. Patch the Object Storage Device (OSD) deployments to remove the livenessProbe parameter, and run it with the command parameter as sleep . Copy tar to the OSDs. Note While copying the tar binary to the OSD, it is important to ensure that the tar binary matches the container image OS of the pod. Copying the binary from a different OS such as, macOS, Ubuntu, and so on might lead to compatibility issues. Retrieve the monstore cluster map from all the OSDs. Create the recover_mon.sh script. Run the recover_mon.sh script. Patch the MON deployments, and run it with the command parameter as sleep . Edit the MON deployments. Patch the MON deployments to increase the initialDelaySeconds . Copy tar to the MON pods. Note While copying the tar binary to the MON, it is important to ensure that the tar binary matches the container image OS of the pod. Copying the binary from a different OS such as, macOS, Ubuntu, and so on might lead to compatibility issues. Copy the previously retrieved monstore to the mon-a pod. Navigate into the MON pod and change the ownership of the retrieved monstore . Copy the keyring template file before rebuilding the mon db . Populate the keyring of all other Ceph daemons (OSD, MGR, MDS and RGW) from their respective secrets. When getting the daemons keyring, use the following command: Get the OSDs keys with the following script: Copy the mon keyring locally, then edit it by adding all daemon keys captured in the earlier step and copy it back to one of the MON pods (mon-a): As an example, the keyring file should look like the following: Note If the caps entries are not present in the OSDs keys output, make sure to add caps to all the OSDs output as mentioned in the keyring file example. Navigate into the mon-a pod, and verify that the monstore has a monmap . Navigate into the mon-a pod. Verify that the monstore has a monmap . Optional: If the monmap is missing then create a new monmap . <mon-a-id> Is the ID of the mon-a pod. <mon-a-ip> Is the IP address of the mon-a pod. <mon-b-id> Is the ID of the mon-b pod. <mon-b-ip> Is the IP address of the mon-b pod. <mon-c-id> Is the ID of the mon-c pod. <mon-c-ip> Is the IP address of the mon-c pod. <fsid> Is the file system ID. Verify the monmap . Import the monmap . Important Use the previously created keyring file. Create a backup of the old store.db file. Copy the rebuild store.db file to the monstore directory. After rebuilding the monstore directory, copy the store.db file from local to the rest of the MON pods. <id> Is the ID of the MON pod Navigate into the rest of the MON pods and change the ownership of the copied monstore . <id> Is the ID of the MON pod Revert the patched changes. 
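The illustration below is a hedged sketch only; the key values are placeholders and the exact daemon sections depend on your cluster:
[mon.]
        key = <mon-key-placeholder>
        caps mon = "allow *"
[client.admin]
        key = <admin-key-placeholder>
        caps mds = "allow *"
        caps mon = "allow *"
        caps mgr = "allow *"
        caps osd = "allow *"
[mds.ocs-storagecluster-cephfilesystem-a]
        key = <mds-key-placeholder>
        caps mon = "allow profile mds"
        caps osd = "allow *"
        caps mds = "allow"
[osd.0]
        key = <osd-0-key-placeholder>
        caps mgr = "allow profile osd"
        caps mon = "allow profile osd"
        caps osd = "allow *"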
For MON deployments: <mon-deployment.yaml> Is the MON deployment yaml file For OSD deployments: <osd-deployment.yaml> Is the OSD deployment yaml file For MGR deployments: <mgr-deployment.yaml> Is the MGR deployment yaml file Important Ensure that the MON, MGR and OSD pods are up and running. Scale up the rook-ceph-operator and ocs-operator deployments. Verification steps Check the Ceph status to confirm that CephFS is running. Example output: Check the Multicloud Object Gateway (MCG) status. It should be active, and the backingstore and bucketclass should be in Ready state. Important If the MCG is not in the active state, and the backingstore and bucketclass not in the Ready state, you need to restart all the MCG related pods. For more information, see Section 12.1, "Restoring the Multicloud Object Gateway" . 12.1. Restoring the Multicloud Object Gateway If the Multicloud Object Gateway (MCG) is not in the active state, and the backingstore and bucketclass is not in the Ready state, you need to restart all the MCG related pods, and check the MCG status to confirm that the MCG is back up and running. Procedure Restart all the pods related to the MCG. <noobaa-operator> Is the name of the MCG operator <noobaa-core> Is the name of the MCG core pod <noobaa-endpoint> Is the name of the MCG endpoint <noobaa-db> Is the name of the MCG db pod If the RADOS Object Gateway (RGW) is configured, restart the pod. <rgw-pod> Is the name of the RGW pod Note In OpenShift Container Platform 4.11, after the recovery, RBD PVC fails to get mounted on the application pods. Hence, you need to restart the node that is hosting the application pods. To get the node name that is hosting the application pod, run the following command: Chapter 13. Restoring ceph-monitor quorum in OpenShift Data Foundation In some circumstances, the ceph-mons might lose quorum. If the mons cannot form quorum again, there is a manual procedure to get the quorum going again. The only requirement is that at least one mon must be healthy. The following steps removes the unhealthy mons from quorum and enables you to form a quorum again with a single mon , then bring the quorum back to the original size. For example, if you have three mons and lose quorum, you need to remove the two bad mons from quorum, notify the good mon that it is the only mon in quorum, and then restart the good mon . Procedure Stop the rook-ceph-operator so that the mons are not failed over when you are modifying the monmap . Inject a new monmap . Warning You must inject the monmap very carefully. If run incorrectly, your cluster could be permanently destroyed. The Ceph monmap keeps track of the mon quorum. The monmap is updated to only contain the healthy mon. In this example, the healthy mon is rook-ceph-mon-b , while the unhealthy mons are rook-ceph-mon-a and rook-ceph-mon-c . Take a backup of the current rook-ceph-mon-b Deployment: Open the YAML file and copy the command and arguments from the mon container (see containers list in the following example). This is needed for the monmap changes. Cleanup the copied command and args fields to form a pastable command as follows: Note Make sure to remove the single quotes around the --log-stderr-prefix flag and the parenthesis around the variables being passed ROOK_CEPH_MON_HOST , ROOK_CEPH_MON_INITIAL_MEMBERS and ROOK_POD_IP ). Patch the rook-ceph-mon-b Deployment to stop the working of this mon without deleting the mon pod. 
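A hedged sketch of that patch; the deployment name follows the example in this procedure, and the container index is assumed to be 0:
# Remove the liveness probe and replace the mon command with sleep so the pod stays up without running ceph-mon
oc -n openshift-storage patch deployment rook-ceph-mon-b --type='json' -p '[{"op":"remove", "path":"/spec/template/spec/containers/0/livenessProbe"}]'
oc -n openshift-storage patch deployment rook-ceph-mon-b -p '{"spec": {"template": {"spec": {"containers": [{"name": "mon", "command": ["sleep", "infinity"], "args": []}]}}}}'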
Perform the following steps on the mon-b pod: Connect to the pod of a healthy mon and run the following commands: Set the variable. Extract the monmap to a file, by pasting the ceph mon command from the good mon deployment and adding the --extract-monmap=USD{monmap_path} flag. Review the contents of the monmap . Remove the bad mons from the monmap . In this example we remove mon0 and mon2 : Inject the modified monmap into the good mon , by pasting the ceph mon command and adding the --inject-monmap=USD{monmap_path} flag as follows: Exit the shell to continue. Edit the Rook configmaps . Edit the configmap that the operator uses to track the mons . Verify that in the data element you see three mons such as the following (or more depending on your moncount ): Delete the bad mons from the list to end up with a single good mon . For example: Save the file and exit. Now, you need to adapt a Secret which is used for the mons and other components. Set a value for the variable good_mon_id . For example: You can use the oc patch command to patch the rook-ceph-config secret and update the two key/value pairs mon_host and mon_initial_members . Note If you are using hostNetwork: true , you need to replace the mon_host var with the node IP the mon is pinned to ( nodeSelector ). This is because there is no rook-ceph-mon-* service created in that "mode". Restart the mon . You need to restart the good mon pod with the original ceph-mon command to pick up the changes. Use the oc replace command on the backup of the mon deployment YAML file: Note Option --force deletes the deployment and creates a new one. Verify the status of the cluster. The status should show one mon in quorum. If the status looks good, your cluster should be healthy again. Delete the two mon deployments that are no longer expected to be in quorum. For example: In this example the deployments to be deleted are rook-ceph-mon-a and rook-ceph-mon-c . Restart the operator. Start the rook operator again to resume monitoring the health of the cluster. Note It is safe to ignore the errors that a number of resources already exist. The operator automatically adds more mons to increase the quorum size again depending on the mon count. Chapter 14. Enabling the Red Hat OpenShift Data Foundation console plugin The Data Foundation console plugin is enabled by default. In case, this option was unchecked during OpenShift Data Foundation Operator installation, use the following instructions to enable the console plugin post-deployment either from the graphical user interface (GUI) or command-line interface. Prerequisites You have administrative access to the OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Procedure From user interface In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator. Enable the console plugin option. In the Details tab, click the pencil icon under the Console plugin . Select Enable , and click Save . From command-line interface Execute the following command to enable the console plugin option: Verification steps After the console plugin option is enabled, a pop-up with a message, Web console update is available appears on the GUI. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. Chapter 15. 
Changing resources for the OpenShift Data Foundation components When you install OpenShift Data Foundation, it comes with pre-defined resources that the OpenShift Data Foundation pods can consume. In some situations with higher I/O load, it might be required to increase these limits. To change the CPU and memory resources on the rook-ceph pods, see Section 15.1, "Changing the CPU and memory resources on the rook-ceph pods" . To tune the resources for the Multicloud Object Gateway (MCG), see Section 15.2, "Tuning the resources for the MCG" . 15.1. Changing the CPU and memory resources on the rook-ceph pods When you install OpenShift Data Foundation, it comes with pre-defined CPU and memory resources for the rook-ceph pods. You can manually increase these values according to the requirements. You can change the CPU and memory resources on the following pods: mgr mds rgw The following example illustrates how to change the CPU and memory resources on the rook-ceph pods. In this example, the existing MDS pod values of cpu and memory are increased from 1 and 4Gi to 2 and 8Gi respectively. Edit the storage cluster: <storagecluster_name> Specify the name of the storage cluster. For example: Add the following lines to the storage cluster Custom Resource (CR): Save the changes and exit the editor. Alternatively, run the oc patch command to change the CPU and memory value of the mds pod: <storagecluster_name> Specify the name of the storage cluster. For example: 15.2. Tuning the resources for the MCG The default configuration for the Multicloud Object Gateway (MCG) is optimized for low resource consumption and not performance. For more information on how to tune the resources for the MCG, see the Red Hat Knowledgebase solution Performance tuning guide for Multicloud Object Gateway (NooBaa) . Chapter 16. Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation When you deploy OpenShift Data Foundation, public IPs are created even when OpenShift is installed as a private cluster. However, you can disable the Multicloud Object Gateway (MCG) load balancer usage by using the disableLoadBalancerService variable in the storagecluster CRD. This restricts MCG from creating any public resources for private clusters and helps to disable the NooBaa service EXTERNAL-IP . Procedure Run the following command and add the disableLoadBalancerService variable in the storagecluster YAML to set the service to ClusterIP: Note To undo the changes and set the service to LoadBalancer, set the disableLoadBalancerService variable to false or remove that line completely. Chapter 17. Accessing odf-console with the ovs-multitenant plugin by manually enabling global pod networking In OpenShift Container Platform, when ovs-multitenant plugin is used for software-defined networking (SDN), pods from different projects cannot send packets to or receive packets from pods and services of a different project. By default, pods can not communicate between namespaces or projects because a project's pod networking is not global. To access odf-console, the OpenShift console pod in the openshift-console namespace needs to connect with the OpenShift Data Foundation odf-console in the openshift-storage namespace. This is possible only when you manually enable global pod networking. 
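For reference, making a project's pod network global is typically a single oc adm command; this minimal sketch uses the namespace from this chapter and is the step the Resolution below refers to:
# Allow pods in other projects (such as openshift-console) to reach the openshift-storage pods
oc adm pod-network make-projects-global openshift-storage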
Issue When the `ovs-multitenant` plugin is used in the OpenShift Container Platform, the odf-console plugin fails with the following message: Resolution Make the pod networking for the OpenShift Data Foundation project global: Chapter 18. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation cdi.kubevirt.io/clone-strategy=copy to any encrypted RBD storage classes that were created before updating to OpenShift Data Foundation version 4.14. This enables the Containerized Data Importer (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created. When cloning an encrypted volume to a new namespace, such as provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated. Chapter 19. Troubleshooting issues in provider mode 19.1. Force deletion of storage in provider clusters When a client cluster is deleted without performing the offboarding process to remove all the resources from the corresponding provider cluster, you need to perform force deletion of the corresponding storage consumer from the provider cluster. This helps to release the storage space that was claimed by the client. Caution It is recommended to use this method only in unavoidable situations such as accidental deletion of storage client clusters. Prerequisites Access to the OpenShift Data Foundation storage cluster in provider mode. Procedure Click Storage -> Storage Clients from the OpenShift console. Click the delete icon at the far right of the listed storage client cluster. The delete icon is enabled only 5 minutes after the last heartbeat of the cluster. Click Confirm . | [
"oc image mirror registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 <local-registry> /odf4/odf-must-gather-rhel9:v4.15 [--registry-config= <path-to-the-registry-config> ] [--insecure=true]",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>",
"oc adm must-gather --image=<local-registry>/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ --node-name=_<node-name>_",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since=<duration>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since-time=<rfc3339-timestamp>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 -- /usr/bin/gather <-arg>",
"odf get recovery-profile high_recovery_ops",
"odf get health Info: Checking if at least three mon pods are running on different nodes rook-ceph-mon-a-7fb76597dc-98pxz Running openshift-storage ip-10-0-69-145.us-west-1.compute.internal rook-ceph-mon-b-885bdc59c-4vvcm Running openshift-storage ip-10-0-64-239.us-west-1.compute.internal rook-ceph-mon-c-5f59bb5dbc-8vvlg Running openshift-storage ip-10-0-30-197.us-west-1.compute.internal Info: Checking mon quorum and ceph health details Info: HEALTH_OK [...]",
"odf get dr-health Info: fetching the cephblockpools with mirroring enabled Info: found \"ocs-storagecluster-cephblockpool\" cephblockpool with mirroring enabled Info: running ceph status from peer cluster Info: cluster: id: 9a2e7e55-40e1-4a79-9bfa-c3e4750c6b0f health: HEALTH_OK [...]",
"odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled. odf get mon-endpoints Displays the mon endpoints odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled.",
"odf operator rook set ROOK_LOG_LEVEL DEBUG configmap/rook-ceph-operator-config patched",
"odf operator rook restart deployment.apps/rook-ceph-operator restarted",
"odf restore mon-quorum c",
"odf restore deleted cephclusters Info: Detecting which resources to restore for crd \"cephclusters\" Info: Restoring CR my-cluster Warning: The resource my-cluster was found deleted. Do you want to restore it? yes | no [...]",
"odf set ceph log-level <ceph-subsystem1> <ceph-subsystem2> <log-level>",
"odf set ceph log-level osd crush 20",
"odf set ceph log-level mds crush 20",
"odf set ceph log-level mon crush 20",
"oc logs <pod-name> -n <namespace>",
"oc logs rook-ceph-operator-<ID> -n openshift-storage",
"oc logs csi-cephfsplugin-<ID> -n openshift-storage -c csi-cephfsplugin",
"oc logs csi-rbdplugin-<ID> -n openshift-storage -c csi-rbdplugin",
"oc logs csi-cephfsplugin-<ID> -n openshift-storage --all-containers",
"oc logs csi-rbdplugin-<ID> -n openshift-storage --all-containers",
"oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage -c csi-cephfsplugin",
"oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage -c csi-rbdplugin",
"oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage --all-containers",
"oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage --all-containers",
"oc cluster-info dump -n openshift-storage --output-directory=<directory-name>",
"oc cluster-info dump -n openshift-local-storage --output-directory=<directory-name>",
"oc logs <ocs-operator> -n openshift-storage",
"oc get pods -n openshift-storage | grep -i \"ocs-operator\" | awk '{print USD1}'",
"oc get events --sort-by=metadata.creationTimestamp -n openshift-storage",
"oc get csv -n openshift-storage",
"NAME DISPLAY VERSION REPLACES PHASE mcg-operator.v4.15.0 NooBaa Operator 4.15.0 Succeeded ocs-operator.v4.15.0 OpenShift Container Storage 4.15.0 Succeeded odf-csi-addons-operator.v4.15.0 CSI Addons 4.15.0 Succeeded odf-operator.v4.15.0 OpenShift Data Foundation 4.15.0 Succeeded",
"oc get subs -n openshift-storage",
"NAME PACKAGE SOURCE CHANNEL mcg-operator-stable-4.15-redhat-operators-openshift-marketplace mcg-operator redhat-operators stable-4.15 ocs-operator-stable-4.15-redhat-operators-openshift-marketplace ocs-operator redhat-operators stable-4.15 odf-csi-addons-operator odf-csi-addons-operator redhat-operators stable-4.15 odf-operator odf-operator redhat-operators stable-4.15",
"oc get installplan -n openshift-storage",
"oc get pods -o wide | grep <component-name>",
"oc get pods -o wide | grep rook-ceph-operator",
"rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 dell-r440-12.gsslab.pnq2.redhat.com <none> <none> <none> <none>",
"oc debug node/<node name>",
"chroot /host",
"crictl images | grep <component>",
"crictl images | grep rook-ceph",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"delete pod -l app=csi-cephfsplugin -n openshift-storage delete pod -l app=csi-rbdplugin -n openshift-storage",
"du -a <path-in-the-mon-node> |sort -n -r |head -n10",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep rook-ceph-osd",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"memory\": \"16Gi\"},\"requests\": {\"memory\": \"16Gi\"}}}}}'",
"patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"8\"}, \"requests\": {\"cpu\": \"8\"}}}}}'",
"patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"managedResources\": {\"cephFilesystems\":{\"activeMetadataServers\": 2}}}}'",
"oc project openshift-storage",
"get pod | grep rook-ceph-mds",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc get pods | grep mgr",
"oc describe pods/ <pod_name>",
"oc get pods | grep mgr",
"oc project openshift-storage",
"get pod | grep rook-ceph-mgr",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep rook-ceph-mgr",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc logs <rook-ceph-mon-X-yyyy> -n openshift-storage",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep rook-ceph-mon",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"get pod | grep {ceph-component}",
"Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions",
"[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]",
"oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"-n openshift-storage get pods",
"-n openshift-storage get pods",
"-n openshift-storage get pods | grep osd",
"-n openshift-storage describe pods/<osd_podname_ from_the_ previous step>",
"TOOLS_POD=USD(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name) rsh -n openshift-storage USDTOOLS_POD",
"ceph status",
"get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'",
"describe node <node_name>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'",
"describe node <node_name>",
"oc project openshift-storage",
"oc get pod | grep rook-ceph",
"Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>",
"oc get pod/USD{MYPOD} -o wide",
"oc describe pod/USD{MYPOD}",
"oc logs pod/USD{MYPOD}",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15",
"ceph daemon osd.<id> ops",
"ceph daemon osd.<id> dump_historic_ops",
"oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions",
"[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]",
"oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage",
"ceph osd pool set-quota <pool> max_bytes <bytes>",
"ceph osd pool set-quota <pool> max_objects <objects>",
"ceph osd pool set-quota <pool> max_bytes <bytes>",
"ceph osd pool set-quota <pool> max_objects <objects>",
"oc delete pod <pod-name> --grace-period=0 --force",
"oc edit configmap rook-ceph-operator-config",
"... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: DEBUG",
"oc edit configmap rook-ceph-operator-config",
"... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: INFO",
"oc get pvc -n openshift-storage",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-d96c747b-2ab5-47e2-b07e-1079623748d8 50Gi RWO ocs-storagecluster-ceph-rbd 114s ocs-deviceset-0-0-lzfrd Bound local-pv-7e70c77c 1769Gi RWO localblock 2m10s ocs-deviceset-1-0-7rggl Bound local-pv-b19b3d48 1769Gi RWO localblock 2m10s ocs-deviceset-2-0-znhk8 Bound local-pv-e9f22cdc 1769Gi RWO localblock 2m10s",
"oc scale deployment rook-ceph-osd-<osd-id> --replicas=0",
"oc get deployment rook-ceph-osd-<osd-id> -oyaml | grep ceph.rook.io/pvc",
"oc delete -n openshift-storage pod rook-ceph-osd-prepare-<pvc-from-above-command>-<pod-suffix>",
"failed_osd_id=<osd-id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc delete deployment rook-ceph-osd-<osd-id>",
"oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc scale deployment rook-ceph-osd-<osd-id> --replicas=0",
"failed_osd_id=<osd_id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc get -n openshift-storage -o yaml deployment rook-ceph-osd-<osd-id> | grep ceph.rook.io/pvc",
"oc get -n openshift-storage pvc <pvc-name>",
"oc get pv <pv-name-from-above-command> -oyaml | grep path",
"oc describe -n openshift-storage pvc ocs-deviceset-0-0-nvs68 | grep Mounted",
"oc delete -n openshift-storage pod <osd-prepare-pod-from-above-command>",
"oc delete -n openshift-storage pvc <pvc-name-from-step-a>",
"oc debug node/<node_with_failed_osd>",
"ls -alh /mnt/local-storage/localblock/",
"oc debug node/<node_with_failed_osd>",
"ls -alh /mnt/local-storage/localblock",
"rm /mnt/local-storage/localblock/<failed-device-name>",
"oc delete pv <pv-name>",
"#oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc process -n openshift-storage ocs-osd-removal -p FORCE_OSD_REMOVAL=true -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc get project -n <namespace>",
"NAME DISPLAY NAME STATUS openshift-storage Terminating",
"oc get project openshift-storage -o yaml",
"status: conditions: - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All resources successfully discovered reason: ResourcesDiscovered status: \"False\" type: NamespaceDeletionDiscoveryFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All legacy kube types successfully parsed reason: ParsedGroupVersions status: \"False\" type: NamespaceDeletionGroupVersionParsingFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All content successfully deleted, may be waiting on finalization reason: ContentDeleted status: \"False\" type: NamespaceDeletionContentFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some resources are remaining: cephobjectstoreusers.ceph.rook.io has 1 resource instances' reason: SomeResourcesRemain status: \"True\" type: NamespaceContentRemaining - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io in 1 resource instances' reason: SomeFinalizersRemain status: \"True\" type: NamespaceFinalizersRemaining",
"oc get <Object-kind> -n <project-name>",
"oc get cephobjectstoreusers.ceph.rook.io -n openshift-storage",
"NAME AGE noobaa-ceph-objectstore-user 26h",
"oc patch -n <project-name> <object-kind>/<object-name> --type=merge -p '{\"metadata\": {\"finalizers\":null}}'",
"oc patch -n openshift-storage cephobjectstoreusers.ceph.rook.io/noobaa-ceph-objectstore-user --type=merge -p '{\"metadata\": {\"finalizers\":null}}'",
"cephobjectstoreuser.ceph.rook.io/noobaa-ceph-objectstore-user patched",
"oc get project openshift-storage",
"Error from server (NotFound): namespaces \"openshift-storage\" not found",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Pending ocs-external-storagecluster-cephfs 28h [...]",
"oc describe pvc ngx-fs-pxknkcix20-pod -n nginx-file",
"Name: ngx-fs-pxknkcix20-pod Namespace: nginx-file StorageClass: ocs-external-storagecluster-cephfs Status: Pending Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: ngx-fs-oyoe047v2bn2ka42jfgg-pod-hqhzf Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 107m (x245 over 22h) openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-5f8b66cc96-hvcqp_6b7044af-c904-4795-9ce5-bf0cf63cc4a4 (combined from similar events): failed to provision volume with StorageClass \"ocs-external-storagecluster-cephfs\": rpc error: code = Internal desc = error (an error (exit status 1) occurred while running rados args: [-m 192.168.13.212:6789,192.168.13.211:6789,192.168.13.213:6789 --id csi-cephfs-provisioner --keyfile= stripped -c /etc/ceph/ceph.conf -p cephfs_metadata getomapval csi.volumes.default csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 /tmp/omap-get-186436239 --namespace=csi]) occurred, command output streams is ( error getting omap value cephfs_metadata/csi.volumes.default/csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47: (1) Operation not permitted)",
"ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": {} } \"cephfs_metadata\" { \"cephfs\": {} }",
"ceph osd pool application set <cephfs metadata pool name> cephfs metadata cephfs",
"ceph osd pool application set <cephfs data pool name> cephfs data cephfs",
"ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": { \"data\": \"cephfs\" } } \"cephfs_metadata\" { \"cephfs\": { \"metadata\": \"cephfs\" } }",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Bound pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 1Mi RWO ocs-external-storagecluster-cephfs 29h [...]",
"oc scale deployment rook-ceph-operator --replicas=0 -n openshift-storage",
"oc scale deployment ocs-operator --replicas=0 -n openshift-storage",
"mkdir backup",
"cd backup",
"oc project openshift-storage",
"for d in USD(oc get deployment|awk -F' ' '{print USD1}'|grep -v NAME); do echo USDd;oc get deployment USDd -o yaml > oc_get_deployment.USD{d}.yaml; done",
"for i in USD(oc get deployment -l app=rook-ceph-osd -oname);do oc patch USD{i} -n openshift-storage --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' ; oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"osd\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}' ; done",
"for i in `oc get pods -l app=rook-ceph-osd -o name | sed -e \"s/pod\\///g\"` ; do cat /usr/bin/tar | oc exec -i USD{i} -- bash -c 'cat - >/usr/bin/tar' ; oc exec -i USD{i} -- bash -c 'chmod +x /usr/bin/tar' ;done",
"#!/bin/bash ms=/tmp/monstore rm -rf USDms mkdir USDms for osd_pod in USD(oc get po -l app=rook-ceph-osd -oname -n openshift-storage); do echo \"Starting with pod: USDosd_pod\" podname=USD(echo USDosd_pod|sed 's/pod\\///g') oc exec USDosd_pod -- rm -rf USDms oc exec USDosd_pod -- mkdir USDms oc cp USDms USDpodname:USDms rm -rf USDms mkdir USDms echo \"pod in loop: USDosd_pod ; done deleting local dirs\" oc exec USDosd_pod -- ceph-objectstore-tool --type bluestore --data-path /var/lib/ceph/osd/ceph-USD(oc get USDosd_pod -ojsonpath='{ .metadata.labels.ceph_daemon_id }') --op update-mon-db --no-mon-config --mon-store-path USDms echo \"Done with COT on pod: USDosd_pod\" oc cp USDpodname:USDms USDms echo \"Finished pulling COT data from pod: USDosd_pod\" done",
"chmod +x recover_mon.sh",
"./recover_mon.sh",
"for i in USD(oc get deployment -l app=rook-ceph-mon -oname);do oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'; done",
"for i in a b c ; do oc get deployment rook-ceph-mon-USD{i} -o yaml | sed \"s/initialDelaySeconds: 10/initialDelaySeconds: 10000/g\" | oc replace -f - ; done",
"for i in `oc get pods -l app=rook-ceph-mon -o name | sed -e \"s/pod\\///g\"` ; do cat /usr/bin/tar | oc exec -i USD{i} -- bash -c 'cat - >/usr/bin/tar' ; oc exec -i USD{i} -- bash -c 'chmod +x /usr/bin/tar' ;done",
"oc cp /tmp/monstore/ USD(oc get po -l app=rook-ceph-mon,mon=a -oname |sed 's/pod\\///g'):/tmp/",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)",
"chown -R ceph:ceph /tmp/monstore",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)",
"cp /etc/ceph/keyring-store/keyring /tmp/keyring",
"cat /tmp/keyring [mon.] key = AQCleqldWqm5IhAAgZQbEzoShkZV42RiQVffnA== caps mon = \"allow *\" [client.admin] key = AQCmAKld8J05KxAArOWeRAw63gAwwZO5o75ZNQ== auid = 0 caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"",
"oc get secret rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-keyring -ojson | jq .data.keyring | xargs echo | base64 -d [mds.ocs-storagecluster-cephfilesystem-a] key = AQB3r8VgAtr6OhAAVhhXpNKqRTuEVdRoxG4uRA== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\"",
"for i in `oc get secret | grep keyring| awk '{print USD1}'` ; do oc get secret USD{i} -ojson | jq .data.keyring | xargs echo | base64 -d ; done",
"for i in `oc get pods -l app=rook-ceph-osd -o name | sed -e \"s/pod\\///g\"` ; do oc exec -i USD{i} -- bash -c 'cat /var/lib/ceph/osd/ceph-*/keyring ' ;done",
"cp USD(oc get po -l app=rook-ceph-mon,mon=a -oname|sed -e \"s/pod\\///g\"):/etc/ceph/keyring-store/..data/keyring /tmp/keyring-mon-a",
"vi /tmp/keyring-mon-a",
"[mon.] key = AQCbQLRn0j9mKhAAJKWmMZ483QIpMwzx/yGSLw== caps mon = \"allow *\" [mds.ocs-storagecluster-cephfilesystem-a] key = AQBFQbRnYuB9LxAA8i1fCSAKQQsPuywZ0Jlc5Q== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\" [mds.ocs-storagecluster-cephfilesystem-b] key = AQBHQbRnwHAOEBAAv+rBpYP5W8BmC7gLfLyk1w== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\" [osd.0] key = AQAvQbRnjF0eEhAA3H0l9zvKGZZM9Up6fJajhQ== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.1] key = AQA0QbRnq4cSGxAA7JpuK1+sq8gALNmMYFUMzw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.2] key = AQA3QbRn6JvcOBAAFKruZQhlQJKUOi9oxcN6fw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [client.admin] key = AQCbQLRnSzOuLBAAK1cSgr2eIyrZV8mV28UfvQ== caps mds = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\" caps mgr = \"allow *\" [client.rgw.ocs.storagecluster.cephobjectstore.a] key = AQBTQbRny7NJLRAAPeTvK9kVg71/glbYLANGyw== caps mon = \"allow rw\" caps osd = \"allow rwx\" [mgr.a] key = AQD9QLRn8+xzDxAARqWQatoT9ruK76EpDS6iCw== caps mon = \"allow profile mgr\" caps mds = \"allow *\" caps osd = \"allow *\" [mgr.b] key = AQD9QLRnltZOIhAAexshUqdOr3G79HWYXUDGFg== caps mon = \"allow profile mgr\" caps mds = \"allow *\" caps osd = \"allow *\" [client.crash] key = AQD7QLRn6DDzCBAAEzhXRzGQUBUNTzC3nHntFQ== caps mon = \"allow profile crash\" caps mgr = \"allow rw\" [client.ceph-exporter] key = AQD7QLRntHzkGxAApQTkMVzcTiZn7jZbwK99SQ== caps mon = \"allow profile ceph-exporter\" caps mgr = \"allow r\" caps osd = \"allow r\" caps mds = \"allow r\"",
"cp /tmp/keyring-mon-a USD(oc get po -l app=rook-ceph-mon,mon=a -oname|sed -e \"s/pod\\///g\"):/tmp/keyring",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)",
"ceph-monstore-tool /tmp/monstore get monmap -- --out /tmp/monmap",
"monmaptool /tmp/monmap --print",
"monmaptool --create --add <mon-a-id> <mon-a-ip> --add <mon-b-id> <mon-b-ip> --add <mon-c-id> <mon-c-ip> --enable-all-features --clobber /root/monmap --fsid <fsid>",
"monmaptool /root/monmap --print",
"ceph-monstore-tool /tmp/monstore rebuild -- --keyring /tmp/keyring --monmap /root/monmap",
"chown -R ceph:ceph /tmp/monstore",
"mv /var/lib/ceph/mon/ceph-a/store.db /var/lib/ceph/mon/ceph-a/store.db.corrupted",
"mv /var/lib/ceph/mon/ceph-b/store.db /var/lib/ceph/mon/ceph-b/store.db.corrupted",
"mv /var/lib/ceph/mon/ceph-c/store.db /var/lib/ceph/mon/ceph-c/store.db.corrupted",
"mv /tmp/monstore/store.db /var/lib/ceph/mon/ceph-a/store.db",
"chown -R ceph:ceph /var/lib/ceph/mon/ceph-a/store.db",
"oc cp USD(oc get po -l app=rook-ceph-mon,mon=a -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph-a/store.db /tmp/store.db",
"oc cp /tmp/store.db USD(oc get po -l app=rook-ceph-mon,mon=<id> -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph- <id>",
"oc rsh USD(oc get po -l app=rook-ceph-mon,mon= <id> -oname)",
"chown -R ceph:ceph /var/lib/ceph/mon/ceph- <id> /store.db",
"oc replace --force -f <mon-deployment.yaml>",
"oc replace --force -f <osd-deployment.yaml>",
"oc replace --force -f <mgr-deployment.yaml>",
"oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1",
"oc -n openshift-storage scale deployment ocs-operator --replicas=1",
"ceph -s",
"cluster: id: f111402f-84d1-4e06-9fdb-c27607676e55 health: HEALTH_ERR 1 filesystem is offline 1 filesystem is online with fewer MDS than max_mds 3 daemons have recently crashed services: mon: 3 daemons, quorum b,c,a (age 15m) mgr: a(active, since 14m) mds: ocs-storagecluster-cephfilesystem:0 osd: 3 osds: 3 up (since 15m), 3 in (since 2h) data: pools: 3 pools, 96 pgs objects: 500 objects, 1.1 GiB usage: 5.5 GiB used, 295 GiB / 300 GiB avail pgs: 96 active+clean",
"noobaa status -n openshift-storage",
"oc delete pods <noobaa-operator> -n openshift-storage",
"oc delete pods <noobaa-core> -n openshift-storage",
"oc delete pods <noobaa-endpoint> -n openshift-storage",
"oc delete pods <noobaa-db> -n openshift-storage",
"oc delete pods <rgw-pod> -n openshift-storage",
"oc get pods <application-pod> -n <namespace> -o yaml | grep nodeName nodeName: node_name",
"oc -n openshift-storage scale deployment rook-ceph-operator --replicas=0",
"oc -n openshift-storage get deployment rook-ceph-mon-b -o yaml > rook-ceph-mon-b-deployment.yaml",
"[...] containers: - args: - --fsid=41a537f2-f282-428e-989f-a9e07be32e47 - --keyring=/etc/ceph/keyring-store/keyring - --log-to-stderr=true - --err-to-stderr=true - --mon-cluster-log-to-stderr=true - '--log-stderr-prefix=debug ' - --default-log-to-file=false - --default-mon-cluster-log-to-file=false - --mon-host=USD(ROOK_CEPH_MON_HOST) - --mon-initial-members=USD(ROOK_CEPH_MON_INITIAL_MEMBERS) - --id=b - --setuser=ceph - --setgroup=ceph - --foreground - --public-addr=10.100.13.242 - --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db - --public-bind-addr=USD(ROOK_POD_IP) command: - ceph-mon [...]",
"ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP",
"oc -n openshift-storage patch deployment rook-ceph-mon-b --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' oc -n openshift-storage patch deployment rook-ceph-mon-b -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'",
"oc -n openshift-storage exec -it <mon-pod> bash",
"monmap_path=/tmp/monmap",
"ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --extract-monmap=USD{monmap_path}",
"monmaptool --print /tmp/monmap",
"monmaptool USD{monmap_path} --rm <bad_mon>",
"monmaptool USD{monmap_path} --rm a monmaptool USD{monmap_path} --rm c",
"ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --inject-monmap=USD{monmap_path}",
"oc -n openshift-storage edit configmap rook-ceph-mon-endpoints",
"data: a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789",
"data: b=10.100.13.242:6789",
"good_mon_id=b",
"mon_host=USD(oc -n openshift-storage get svc rook-ceph-mon-b -o jsonpath='{.spec.clusterIP}') oc -n openshift-storage patch secret rook-ceph-config -p '{\"stringData\": {\"mon_host\": \"[v2:'\"USD{mon_host}\"':3300,v1:'\"USD{mon_host}\"':6789]\", \"mon_initial_members\": \"'\"USD{good_mon_id}\"'\"}}'",
"oc replace --force -f rook-ceph-mon-b-deployment.yaml",
"oc delete deploy <rook-ceph-mon-1> oc delete deploy <rook-ceph-mon-2>",
"oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1",
"oc patch console.operator cluster -n openshift-storage --type json -p '[{\"op\": \"add\", \"path\": \"/spec/plugins\", \"value\": [\"odf-console\"]}]'",
"oc edit storagecluster -n openshift-storage <storagecluster_name>",
"oc edit storagecluster -n openshift-storage ocs-storagecluster",
"spec: resources: mds: limits: cpu: 2 memory: 8Gi requests: cpu: 2 memory: 8Gi",
"oc patch -n openshift-storage storagecluster <storagecluster_name> --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}}'",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch ' {\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}} '",
"oc edit storagecluster -n openshift-storage <storagecluster_name> [...] spec: arbiter: {} encryption: kms: {} externalStorage: {} managedResources: cephBlockPools: {} cephCluster: {} cephConfig: {} cephDashboard: {} cephFilesystems: {} cephNonResilientPools: {} cephObjectStoreUsers: {} cephObjectStores: {} cephRBDMirror: {} cephToolbox: {} mirroring: {} multiCloudGateway: disableLoadBalancerService: true <--------------- Add this endpoints: [...]",
"GET request for \"odf-console\" plugin failed: Get \"https://odf-console-service.openshift-storage.svc.cluster.local:9001/locales/en/plugin__odf-console.json\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)",
"oc adm pod-network make-projects-global openshift-storage"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/troubleshooting_openshift_data_foundation/commonly-required-logs_rhodf |
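The chapters above give the annotation key and value for previously created encrypted RBD storage classes but not the exact command; the following is a minimal sketch of that step, assuming an encrypted storage class named ocs-storagecluster-ceph-rbd-encrypted (a placeholder; substitute the name of your own class):

# List storage classes and identify the encrypted RBD class (the name used below is an assumed example)
oc get storageclass

# Add the clone-strategy annotation described in Chapter 18 to the pre-existing encrypted storage class
oc annotate storageclass ocs-storagecluster-ceph-rbd-encrypted cdi.kubevirt.io/clone-strategy=copy

# Confirm that the annotation is now present
oc get storageclass ocs-storagecluster-ceph-rbd-encrypted -o yaml | grep clone-strategy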
Chapter 2. Running Java applications with Shenandoah garbage collector | Chapter 2. Running Java applications with Shenandoah garbage collector You can run your Java application with the Shenandoah garbage collector (GC). Prerequisites Installed Red Hat build of OpenJDK. See Installing Red Hat build of OpenJDK 17 on Red Hat Enterprise Linux in the Installing and using Red Hat build of OpenJDK 17 on RHEL guide. Procedure Run your Java application with Shenandoah GC by using the -XX:+UseShenandoahGC JVM option. USD java <PATH_TO_YOUR_APPLICATION> -XX:+UseShenandoahGC | [
"java <PATH_TO_YOUR_APPLICATION> -XX:+UseShenandoahGC"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk_17/running-application-with-shenandoah-gc |
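As an illustrative extension of the procedure above, the following command combines Shenandoah with unified GC logging so that pause times can be observed; the heap sizes and log file name are assumptions and not part of the documented procedure:

# Enable Shenandoah, pin the heap size, and write GC events to gc.log (values are examples only)
java -XX:+UseShenandoahGC -Xms2g -Xmx2g -Xlog:gc:file=gc.log <PATH_TO_YOUR_APPLICATION>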
Chapter 1. About the Fuse Console | Chapter 1. About the Fuse Console The Red Hat Fuse Console is a web console based on HawtIO open source software. For a list of supported browsers, go to Supported Configurations . The Fuse Console provides a central interface to examine and manage the details of one or more deployed Fuse containers. You can also monitor Red Hat Fuse and system resources, perform updates, and start or stop services. The Fuse Console is available when you install Red Hat Fuse standalone or use Fuse on OpenShift. The integrations that you can view and manage in the Fuse Console depend on the plugins that are running. Possible plugins include: Camel JMX OSGI Runtime Logs | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_karaf_standalone/fuse-console-overview-all_karaf |
Chapter 8. Enabling accelerators | Chapter 8. Enabling accelerators Before you can use an accelerator in OpenShift AI, you must install the relevant software components. The installation process varies based on the accelerator type. Prerequisites You have logged in to your OpenShift cluster. You have the cluster-admin role in your OpenShift cluster. You have installed an accelerator and confirmed that it is detected in your environment. Procedure Follow the appropriate documentation to enable your accelerator: NVIDIA GPUs : See Enabling NVIDIA GPUs . Intel Gaudi AI accelerators : See Enabling Intel Gaudi AI accelerators . AMD GPUs : See Enabling AMD GPUs . After installing your accelerator, create an accelerator profile as described in: Working with accelerator profiles . Verification From the Administrator perspective, go to the Operators Installed Operators page. Confirm that the following Operators appear: The Operator for your accelerator Node Feature Discovery (NFD) Kernel Module Management (KMM) The accelerator is correctly detected a few minutes after full installation of the Node Feature Discovery (NFD) and the relevant accelerator Operator. The OpenShift command line interface (CLI) displays the appropriate output for the GPU worker node. For example, here is output confirming that an NVIDIA GPU is detected: | [
"Expected output when the accelerator is detected correctly describe node <node name> Capacity: cpu: 4 ephemeral-storage: 313981932Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16076568Ki nvidia.com/gpu: 1 pods: 250 Allocatable: cpu: 3920m ephemeral-storage: 288292006229 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 12828440Ki nvidia.com/gpu: 1 pods: 250"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed_in_a_disconnected_environment/enabling-accelerators_install |
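Beyond checking the node capacity, a quick way to confirm that workloads can actually consume the accelerator is to schedule a pod that requests it. The following is a hedged sketch for an NVIDIA GPU; the pod name is arbitrary and the container image is a placeholder for any CUDA-enabled image available to your cluster:

# Create a throwaway pod that requests one NVIDIA GPU and prints the detected device
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda-check
    image: <cuda_base_image>
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF

# Once the pod completes, its log should list the GPU exposed by the worker node
oc logs gpu-smoke-test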
6.3. Red Hat Virtualization 4.4 SP 1 Batch Update 1 (ovirt-4.5.1) | 6.3. Red Hat Virtualization 4.4 SP 1 Batch Update 1 (ovirt-4.5.1) 6.3.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1930643 A wait_after_lease option has been added to the ovirt_vm Ansible module to provide a delay so that the VM lease creation is completed before the action starts. BZ# 1958032 Previously, live storage migration could fail if the destination volume filled up before it was extended. In the current release, the initial size of the destination volume is larger and the extension is no longer required. BZ# 1994144 The email address for notifications is updated correctly on the "Manage Events" screen. BZ# 2001574 Previously, when closing the "Move/Copy disk" dialog in the Administration Portal, some of the acquired resources were not released, causing browser slowness and high memory usage in environments with many disks. In this release, the memory leak has been fixed. BZ# 2001923 Previously, when a failed VM snapshot was removed from the Manager database while the volume remained on the storage, subsequent operations failed because there was a discrepancy between the storage and the database. Now, the VM snapshot is retained if the volume is not removed from the storage. BZ# 2006625 Previously, memory allocated by hugepages was included in the host memory usage calculation, resulting in high memory usage in the Administration Portal, even with no running VMs, and false VDS_HIGH_MEM_USE warnings in the logs. In this release, hugepages are not included in the memory usage. VDS_HIGH_MEM_USE warnings are logged only when normal (not hugepages) memory usage is above a defined threshold. Memory usage in the Administration Portal is calculated from the normal and hugepages used memory, not from allocated memory. BZ# 2030293 A VM no longer remains in a permanent locked state if the Manager is rebooted while exporting the VM as OVA. BZ# 2048545 LVM command error messages have been improved so that it is easier to trace and debug errors. BZ# 2055905 The default migration timeout period has been increased to enable VMs with many direct LUN disks, which require more preparation time on the destination host, to be migrated. The migration_listener_prepare_disk_timeout and max_migration_listener_timeout VDSM options have been added so that the default migration timeout period can be extended if necessary. BZ# 2068270 Previously, when downloading snapshots, the disk_id was not set, which caused resumption of the transfer operation to fail because locking requires the disk_id to be set. In this release, the disk_id is always set so that the transfer operation recovers after restart. BZ# 2070045 The host no longer enters a non-responsive state if the OVF store update operation times out because of network errors. BZ# 2072626 The ovirt-engine-notifier correctly increments the SNMP EngineBoots value after restarts, which enables the ovirt-engine-notifier to work with the SNMPv3 authPriv security level. BZ# 2077008 The QEMU guest agent now reports the correct guest CPU count. BZ# 2081241 Previously, VMs with one or more VFIO devices, Q35 chipset, and maximum number of vCPUs >= 256 might fail to start because of a memory allocation error reported by the QEMU guest agent. This error has been fixed. BZ# 2081359 Infiniband interfaces are now reported by VDSM. BZ# 2081493 The size of preallocated volumes is unchanged after a cold merge. 
BZ# 2090331 The ovirt_vm Ansible module displays an error message if a non-existent snapshot is used to clone a VM. BZ# 2099650 A bug that caused the upgrade process to fail if the vdc_options table contained records with a NULL default value has been fixed. BZ# 2105296 Virtual machines with VNC created by earlier Manager versions sometimes failed to migrate to newer hosts because the VNC password was too long. This issue has been fixed. 6.3.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 1663217 The hostname and/or FQDN of the VM or VDSM host can change after a virtual machine (VM) is created. Previously, this change could prevent the VM from fetching errata from Red Hat Satellite/Foreman. With this enhancement, errata can be fetched even if the VM hostname or FQDN changes. BZ# 1782077 An "isolated threads" CPU pinning policy has been added. This policy pins a physical core exclusively to a virtual CPU, enabling a complete physical core to be used as the virtual core of a single virtual machine. BZ# 1881280 The hosted-engine --deploy --restore-from-file prompts now include guidance to clarify the options and to ensure correct input. BZ# 1937408 The following key-value pairs have been added to the KVM dictionary in the ovirt_template module for importing a template from OVA: URL, for example, qemu:///system storage_domain for converted disks host from which the template is imported clone to regenerate imported template's identifiers BZ# 1976607 VGA has replaced QXL as the default video device for virtual machines. You can switch from QXL to VGA using the API by removing the graphic and video devices from the VM (creating a headless VM) and then adding a VNC graphic device. BZ# 1996098 The copy_paste_enabled and file_transfer_enabled options have been added to the ovirt_vm Ansible module. BZ# 1999167 Spice console remote-viewer now allows the Change CD command to work with data domains if no ISO domains exist. If there are multiple data domains, remote-viewer selects the first data domain on the list of available domains. BZ# 2081559 The rhv-log-collector-analyzer discrepancy tool now detects preallocated QCOW2 images that have been reduced. BZ# 2092885 The Welcome page of the Administration Portal now displays both the upstream and downstream version names. 6.3.3. Rebase: Bug Fixes Only These items are rebases of bug fixes included in this release of Red Hat Virtualization: BZ# 2093795 Rebase package(s) to version: 4.4.6 This fixes an issue which prevented the collection of PostgreSQL data and the documentation of the --log-size option. 6.3.4. Known Issues These known issues exist in Red Hat Virtualization at this time: BZ# 1703153 There is a workaround for creating a RHV Manager hostname that is longer than 95 characters. Create a short FQDN, up to 63 characters, for the engine-setup tool. Create a custom certificate and put the short FQDN and a long FQDN (final hostname) into the certificate's Subject Alternate Name field. Configure the Manager to use the custom certificate. Create an /etc/ovirt-engine/engine.conf.d/99-alternate-engine-fqdns.conf file with the following content: SSO_ALTERNATE_ENGINE_FQDNS="long FQDN" Restart the ovirt-engine service. If you cannot access the Manager and are using a very long FQDN: 1. Check for the following error message in /var/log/httpd/error_log : ajp_msg_check_header() incoming message is too big NNNN, max is MMMM 2. 
Add the following line to /etc/httpd/conf.d/z-ovirt-engine-proxy.conf : ProxyIOBufferSize PPPP where PPPP is greater than NNNN in the error message. Restart Apache. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/red_hat_virtualization_4_4_sp_1_batch_update_1_ovirt_4_5_1 |
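For BZ# 1703153, the workaround steps can be collected into a short shell sketch; the FQDN and the buffer size are example values only (the buffer size must be greater than the NNNN value reported in /var/log/httpd/error_log):

# Register the long FQDN as an alternate engine FQDN and restart the Manager service
cat > /etc/ovirt-engine/engine.conf.d/99-alternate-engine-fqdns.conf <<'EOF'
SSO_ALTERNATE_ENGINE_FQDNS="manager.very-long-hostname.example.com"
EOF
systemctl restart ovirt-engine

# Only if error_log shows "ajp_msg_check_header() incoming message is too big": raise the proxy buffer and restart Apache
echo 'ProxyIOBufferSize 65536' >> /etc/httpd/conf.d/z-ovirt-engine-proxy.conf
systemctl restart httpd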
Chapter 2. Introduction | Chapter 2. Introduction Security-Enhanced Linux (SELinux) is an implementation of a mandatory access control mechanism in the Linux kernel, checking for allowed operations after standard discretionary access controls are checked. SELinux can enforce rules on files and processes in a Linux system, and on their actions, based on defined policies. When using SELinux, files, including directories and devices, are referred to as objects. Processes, such as a user running a command or the Mozilla Firefox application, are referred to as subjects. Most operating systems use a Discretionary Access Control (DAC) system that controls how subjects interact with objects, and how subjects interact with each other. On operating systems using DAC, users control the permissions of files (objects) that they own. For example, on Linux operating systems, users could make their home directories world-readable, giving users and processes (subjects) access to potentially sensitive information, with no further protection over this unwanted action. Relying on DAC mechanisms alone is fundamentally inadequate for strong system security. DAC access decisions are only based on user identity and ownership, ignoring other security-relevant information such as the role of the user, the function and trustworthiness of the program, and the sensitivity and integrity of the data. Each user typically has complete discretion over their files, making it difficult to enforce a system-wide security policy. Furthermore, every program run by a user inherits all of the permissions granted to the user and is free to change access to the user's files, so minimal protection is provided against malicious software. Many system services and privileged programs run with coarse-grained privileges that far exceed their requirements, so that a flaw in any one of these programs could be exploited to obtain further system access. [1] The following is an example of permissions used on Linux operating systems that do not run Security-Enhanced Linux (SELinux). The permissions and output in these examples may differ slightly from your system. Use the ls -l command to view file permissions: In this example, the first three permission bits, rwx , control the access the Linux user1 user (in this case, the owner) has to file1 . The three permission bits, rw- , control the access the Linux group1 group has to file1 . The last three permission bits, r-- , control the access everyone else has to file1 , which includes all users and processes. Security-Enhanced Linux (SELinux) adds Mandatory Access Control (MAC) to the Linux kernel, and is enabled by default in Red Hat Enterprise Linux. A general purpose MAC architecture needs the ability to enforce an administratively-set security policy over all processes and files in the system, basing decisions on labels containing a variety of security-relevant information. When properly implemented, it enables a system to adequately defend itself and offers critical support for application security by protecting against the tampering with, and bypassing of, secured applications. MAC provides strong separation of applications that permits the safe execution of untrustworthy applications. Its ability to limit the privileges associated with executing processes limits the scope of potential damage that can result from the exploitation of vulnerabilities in applications and system services. 
MAC enables information to be protected from legitimate users with limited authorization as well as from authorized users who have unwittingly executed malicious applications. [2] The following is an example of the labels containing security-relevant information that are used on processes, Linux users, and files, on Linux operating systems that run SELinux. This information is called the SELinux context , and is viewed using the ls -Z command: In this example, SELinux provides a user ( unconfined_u ), a role ( object_r ), a type ( user_home_t ), and a level ( s0 ). This information is used to make access control decisions. With DAC, access is controlled based only on Linux user and group IDs. It is important to remember that SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first. Note On Linux operating systems that run SELinux, there are Linux users as well as SELinux users. SELinux users are part of SELinux policy. Linux users are mapped to SELinux users. To avoid confusion, this guide uses " Linux user " and " SELinux user " to differentiate between the two. 2.1. Benefits of running SELinux All processes and files are labeled with a type. A type defines a domain for processes, and a type for files. Processes are separated from each other by running in their own domains, and SELinux policy rules define how processes interact with files, as well as how processes interact with each other. Access is only allowed if an SELinux policy rule exists that specifically allows it. Fine-grained access control. Stepping beyond traditional UNIX permissions that are controlled at user discretion and based on Linux user and group IDs, SELinux access decisions are based on all available information, such as an SELinux user, role, type, and, optionally, a level. SELinux policy is administratively-defined, enforced system-wide, and is not set at user discretion. Reduced vulnerability to privilege escalation attacks. One example: since processes run in domains, and are therefore separated from each other, and because SELinux policy rules define how processes access files and other processes, if a process is compromised, the attacker only has access to the normal functions of that process, and to files the process has been configured to have access to. For example, if the Apache HTTP Server is compromised, an attacker cannot use that process to read files in user home directories, unless a specific SELinux policy rule was added or configured to allow such access. SELinux can be used to enforce data confidentiality and integrity, as well as protecting processes from untrusted inputs. However, SELinux is not: antivirus software, a replacement for passwords, firewalls, or other security systems, an all-in-one security solution. SELinux is designed to enhance existing security solutions, not replace them. Even when running SELinux, it is important to continue to follow good security practices, such as keeping software up-to-date, using hard-to-guess passwords, firewalls, and so on. [1] "Integrating Flexible Support for Security Policies into the Linux Operating System", by Peter Loscocco and Stephen Smalley. This paper was originally prepared for the National Security Agency and is, consequently, in the public domain. Refer to the original paper for details and the document as it was first released. Any edits and changes were done by Murray McAllister. 
[2] "Meeting Critical Security Objectives with Security-Enhanced Linux", by Peter Loscocco and Stephen Smalley. This paper was originally prepared for the National Security Agency and is, consequently, in the public domain. Refer to the original paper for details and the document as it was first released. Any edits and changes were done by Murray McAllister. | [
"~]USD ls -l file1 -rwxrw-r-- 1 user1 group1 0 2009-08-30 11:03 file1",
"~]USD ls -Z file1 -rwxrw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/chap-security-enhanced_linux-introduction |
12.2. Installing from a Different Source | 12.2. Installing from a Different Source You can install Red Hat Enterprise Linux from the ISO images stored on hard disk, or from a network using NFS, FTP, HTTP, or HTTPS methods. Experienced users frequently use one of these methods because it is often faster to read data from a hard disk or network server than from a DVD. The following table summarizes the different boot methods and recommended installation methods to use with each: Table 12.1. Boot Methods and Installation Sources Boot method Installation source Full installation media (DVD) The boot media itself Minimal boot media (CD or DVD) Full installation DVD ISO image or the installation tree extracted from this image, placed in a network location or on a hard drive Network boot Full installation DVD ISO image or the installation tree extracted from this image, placed in a network location | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-installing-alternate-source-ppc |
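As a sketch of how the installer is pointed at these alternative sources, the boot options below match the methods in Table 12.1; the host names, paths, and ISO file name are placeholders, and only the inst.repo=... value itself is appended to the boot command line:

# Installation tree or ISO image exported by an NFS server
inst.repo=nfs:server.example.com:/exports/rhel7/
# Installation tree served over HTTP
inst.repo=http://server.example.com/rhel7/
# Full installation DVD ISO image stored on a local hard drive partition
inst.repo=hd:sdb1:/rhel-server-dvd.iso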
Part IX. Decision engine in Red Hat Decision Manager | Part IX. Decision engine in Red Hat Decision Manager As a business rules developer, your understanding of the decision engine in Red Hat Decision Manager can help you design more effective business assets and a more scalable decision management architecture. The decision engine is the Red Hat Decision Manager component that stores, processes, and evaluates data to execute business rules and to reach the decisions that you define. This document describes basic concepts and functions of the decision engine to consider as you create your business rule system and decision services in Red Hat Decision Manager. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/assembly-decision-engine |
Appendix B. Generating Keycloak host names automatically | Appendix B. Generating Keycloak host names automatically OpenShift routes support automatically generating host names by using a set pattern. This feature can integrate with the Red Hat build of Keycloak (RHBK) operator running on OpenShift. Prerequisites Red Hat OpenShift Container Platform version 4.13 or later. Installation of the RHBK operator. Access to the OpenShift web console with the cluster-admin role. A workstation with the oc binary installed. Procedure Enable the automatically generated route hostname feature. Under the .spec section, remove the entire hostname section, and replace it with the ingress section and className property within the Keycloak resource: Example spec: ... hostname: hostname: example.com ... Example spec: ... ingress: className: openshift-default ... Note To view all of the available Ingress classes, run the following command: Click the Save button. Verify the automatically generated hostname by clicking the Reload button to view the latest configuration: Example spec: ... hostname: hostname: example-keycloak-ingress-keycloak-system.apps.rhtas.example.com ... | [
"spec: hostname: hostname: example.com",
"spec: ingress: className: openshift-default",
"oc get ingressclass",
"spec: hostname: hostname: example-keycloak-ingress-keycloak-system.apps.rhtas.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/deployment_guide/generating-keycloak-host-names-automatically_deploy |
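To confirm the generated host name from the command line as well, the following commands are a minimal sketch; the keycloak-system namespace and the example-keycloak resource name are assumptions inferred from the example hostname above:

# List the ingress and route objects created for the Keycloak instance
oc get ingress -n keycloak-system
oc get routes -n keycloak-system

# Inspect the Keycloak custom resource to confirm the ingress class and generated hostname
oc get keycloak example-keycloak -n keycloak-system -o yaml | grep -E 'className|hostname'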
Chapter 97. Build schema reference | Chapter 97. Build schema reference Used in: KafkaConnectSpec Full list of Build schema properties Configures additional connectors for Kafka Connect deployments. 97.1. Configuring container registries To build new container images with additional connector plugins, Streams for Apache Kafka requires a container registry where the images can be pushed to, stored, and pulled from. Streams for Apache Kafka does not run its own container registry, so a registry must be provided. Streams for Apache Kafka supports private container registries as well as public registries such as Quay or Docker Hub . The container registry is configured in the .spec.build.output section of the KafkaConnect custom resource. The output configuration, which is required, supports two types: docker and imagestream . Using Docker registry To use a Docker registry, you have to specify the type as docker , and the image field with the full name of the new container image. The full name must include: The address of the registry Port number (if listening on a non-standard port) The tag of the new container image Example valid container image names: docker.io/my-org/my-image:my-tag quay.io/my-org/my-image:my-tag image-registry.image-registry.svc:5000/myproject/kafka-connect-build:latest Each Kafka Connect deployment must use a separate image, which can mean different tags at the most basic level. If the registry requires authentication, use the pushSecret to set a name of the Secret with the registry credentials. For the Secret, use the kubernetes.io/dockerconfigjson type and a .dockerconfigjson file to contain the Docker credentials. For more information on pulling an image from a private registry, see Create a Secret based on existing Docker credentials . Example output configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: type: docker 1 image: my-registry.io/my-org/my-connect-cluster:latest 2 pushSecret: my-registry-credentials 3 #... 1 (Required) Type of output used by Streams for Apache Kafka. 2 (Required) Full name of the image used, including the repository and tag. 3 (Optional) Name of the secret with the container registry credentials. Using OpenShift ImageStream Instead of Docker, you can use OpenShift ImageStream to store a new container image. The ImageStream has to be created manually before deploying Kafka Connect. To use ImageStream, set the type to imagestream , and use the image property to specify the name of the ImageStream and the tag used. For example, my-connect-image-stream:latest . Example output configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: type: imagestream 1 image: my-connect-build:latest 2 #... 1 (Required) Type of output used by Streams for Apache Kafka. 2 (Required) Name of the ImageStream and tag. 97.2. Configuring connector plugins Connector plugins are a set of files that define the implementation required to connect to certain types of external system. The connector plugins required for a container image must be configured using the .spec.build.plugins property of the KafkaConnect custom resource. Each connector plugin must have a name which is unique within the Kafka Connect deployment. Additionally, the plugin artifacts must be listed. These artifacts are downloaded by Streams for Apache Kafka, added to the new container image, and used in the Kafka Connect deployment.
The connector plugin artifacts can also include additional components, such as (de)serializers. Each connector plugin is downloaded into a separate directory so that the different connectors and their dependencies are properly sandboxed . Each plugin must be configured with at least one artifact . Example plugins configuration with two connector plugins apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: 1 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #... 1 (Required) List of connector plugins and their artifacts. Streams for Apache Kafka supports the following types of artifacts: JAR files, which are downloaded and used directly TGZ archives, which are downloaded and unpacked ZIP archives, which are downloaded and unpacked Maven artifacts, which use Maven coordinates Other artifacts, which are downloaded and used directly Important Streams for Apache Kafka does not perform any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually, and configure the checksum verification to make sure the same artifact is used in the automated build and in the Kafka Connect deployment. Using JAR artifacts JAR artifacts represent a JAR file that is downloaded and added to a container image. To use JAR artifacts, set the type property to jar , and specify the download location using the url property. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, Streams for Apache Kafka will verify the checksum of the artifact while building the new container image. Example JAR artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: jar 1 url: https://my-domain.tld/my-jar.jar 2 sha512sum: 589...ab4 3 - type: jar url: https://my-domain.tld/my-jar2.jar #... 1 (Required) Type of artifact. 2 (Required) URL from which the artifact is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. Using TGZ artifacts TGZ artifacts are used to download TAR archives that have been compressed using Gzip compression. The TGZ artifact can contain the whole Kafka Connect connector, even when comprising multiple different files. The TGZ artifact is automatically downloaded and unpacked by Streams for Apache Kafka while building the new container image. To use TGZ artifacts, set the type property to tgz , and specify the download location using the url property. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, Streams for Apache Kafka will verify the checksum before unpacking it and building the new container image. Example TGZ artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: tgz 1 url: https://my-domain.tld/my-connector-archive.tgz 2 sha512sum: 158...jg10 3 #... 1 (Required) Type of artifact. 2 (Required) URL from which the archive is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. Using ZIP artifacts ZIP artifacts are used to download ZIP compressed archives.
Use ZIP artifacts in the same way as the TGZ artifacts described in the previous section. The only difference is that you specify type: zip instead of type: tgz . Using Maven artifacts maven artifacts are used to specify connector plugin artifacts as Maven coordinates. The Maven coordinates identify plugin artifacts and dependencies so that they can be located and fetched from a Maven repository. Note The Maven repository must be accessible for the connector build process to add the artifacts to the container image. Example Maven artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: maven 1 repository: https://mvnrepository.com 2 group: <maven_group> 3 artifact: <maven_artifact> 4 version: <maven_version_number> 5 #... 1 (Required) Type of artifact. 2 (Optional) Maven repository to download the artifacts from. If you do not specify a repository, Maven Central repository is used by default. 3 (Required) Maven group ID. 4 (Required) Maven artifact type. 5 (Required) Maven version number. Using other artifacts other artifacts represent any kind of file that is downloaded and added to a container image. If you want to use a specific name for the artifact in the resulting container image, use the fileName field. If a file name is not specified, the file is named based on the URL hash. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, Streams for Apache Kafka will verify the checksum of the artifact while building the new container image. Example other artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: other 1 url: https://my-domain.tld/my-other-file.ext 2 sha512sum: 589...ab4 3 fileName: name-the-file.ext 4 #... 1 (Required) Type of artifact. 2 (Required) URL from which the artifact is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. 4 (Optional) The name under which the file is stored in the resulting container image. 97.3. Build schema properties Property Property type Description output DockerOutput , ImageStreamOutput Configures where the newly built image should be stored. Required. plugins Plugin array List of connector plugins which should be added to the Kafka Connect. Required. resources ResourceRequirements CPU and memory resources to reserve for the build. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: docker 1 image: my-registry.io/my-org/my-connect-cluster:latest 2 pushSecret: my-registry-credentials 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: imagestream 1 image: my-connect-build:latest 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: 1 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: jar 1 url: https://my-domain.tld/my-jar.jar 2 sha512sum: 589...ab4 3 - type: jar url: https://my-domain.tld/my-jar2.jar #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: tgz 1 url: https://my-domain.tld/my-connector-archive.tgz 2 sha512sum: 158...jg10 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: maven 1 repository: https://mvnrepository.com 2 group: <maven_group> 3 artifact: <maven_artifact> 4 version: <maven_version_number> 5 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: other 1 url: https://my-domain.tld/my-other-file.ext 2 sha512sum: 589...ab4 3 fileName: name-the-file.ext 4 #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-Build-reference |
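As a brief, hedged sketch of the manual artifact verification and checksum configuration recommended in the chapter above, the SHA-512 digest of a connector archive can be computed locally before the value is placed in the artifact definition. The URL and file name reuse the placeholder values from the TGZ example; curl and sha512sum are assumed to be available on the workstation.

# Download the artifact once and print its SHA-512 digest.
curl -L -o my-connector-archive.tgz https://my-domain.tld/my-connector-archive.tgz
sha512sum my-connector-archive.tgz

The printed digest is the value to set in the plugin artifact's sha512sum property so that the automated build verifies it downloaded the same file.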
Chapter 7. Notable changes to containers | Chapter 7. Notable changes to containers A set of container images is available for Red Hat Enterprise Linux (RHEL) 8.1. Notable changes include: Rootless containers are fully supported in RHEL 8.1. Rootless containers are containers that are created and managed by regular system users without administrative permissions. This allows users to maintain their identity, including such things as credentials to container registries. You can try rootless containers using the podman and buildah commands. For more information: for rootless containers, see Setting up rootless containers . for buildah , see Building container images with Buildah . for podman , see Building, running, and managing containers . The toolbox RPM package is fully supported in RHEL 8.1. The toolbox command is a utility often used with container-oriented operating systems, such as Red Hat CoreOS. With toolbox , you can troubleshoot and debug host operating systems by launching a container that includes a large set of troubleshooting tools for you to use, without having to install those tools on the host system. Running the toolbox command starts a rhel-tools container that provides root access to the host, for fixing or otherwise working with that host. See the new documentation on Running containers with runlabels . The podman package has been upgraded to upstream version 1.4.2. For information on features added to podman since version 1.0.0, which was used in RHEL 8.0, refer to descriptions of the latest podman releases on Github . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.1_release_notes/notable_changes_to_containers |
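As a small, hedged illustration of the rootless container support noted above, a regular user can pull and run an image with podman and no administrative privileges; the UBI 8 image used here is only an example and is not named in the release note.

# Run as an unprivileged user; sudo is not required for rootless containers.
podman pull registry.access.redhat.com/ubi8/ubi
podman run --rm registry.access.redhat.com/ubi8/ubi cat /etc/os-release
podman ps --all

The same user can also start the troubleshooting container described above by running the toolbox command on the host.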
Chapter 5. Developing Operators | Chapter 5. Developing Operators 5.1. About the Operator SDK The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators , in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run. Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication. The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.18 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . Why use the Operator SDK? The Operator SDK simplifies this process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring. 
The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features: High-level APIs and abstractions to write the operational logic more intuitively Tools for scaffolding and code generation to quickly bootstrap a new project Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster Extensions to cover common Operator use cases Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Container Platform 4.18 supports Operator SDK 1.38.0. 5.1.1. What are Operators? For an overview about basic Operator concepts and terminology, see Understanding Operators . 5.1.2. Development workflow The Operator SDK provides the following workflow to develop a new Operator: Create an Operator project by using the Operator SDK command-line interface (CLI). Define new resource APIs by adding custom resource definitions (CRDs). Specify resources to watch by using the Operator SDK API. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources. Use the Operator SDK CLI to build and generate the Operator deployment manifests. Figure 5.1. Operator SDK workflow At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application. 5.1.3. Additional resources Certified Operator Build Guide 5.2. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.18 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. 
The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Container Platform 4.18 supports Operator SDK 1.38.0. 5.2.1. Installing the Operator SDK CLI on Linux You can install the OpenShift SDK CLI tool on Linux. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4.18 directory, download the latest version of the tarball for Linux. Unpack the archive: USD tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz Make the file executable: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: USD operator-sdk version Example output operator-sdk version: "v1.38.0-ocp", ... 5.2.2. Installing the Operator SDK CLI on macOS You can install the OpenShift SDK CLI tool on macOS. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure For the amd64 and arm64 architectures, navigate to the OpenShift mirror site for the amd64 architecture and OpenShift mirror site for the arm64 architecture respectively. From the latest 4.18 directory, download the latest version of the tarball for macOS. Unpack the Operator SDK archive for amd64 architecture by running the following command: USD tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz Unpack the Operator SDK archive for arm64 architecture by running the following command: USD tar xvf operator-sdk-v1.38.0-ocp-darwin-aarch64.tar.gz Make the file executable by running the following command: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command: Tip Check your PATH by running the following command: USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available by running the following command:: USD operator-sdk version Example output operator-sdk version: "v1.38.0-ocp", ... 5.3. Go-based Operators 5.3.1. Getting started with Operator SDK for Go-based Operators To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster. 
Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.18 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . 5.3.1.1. Prerequisites Operator SDK CLI installed OpenShift CLI ( oc ) 4.18+ installed Go 1.21+ Logged into an OpenShift Container Platform 4.18 cluster with oc with an account that has cluster-admin permissions To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret Additional resources Installing the Operator SDK CLI Getting started with the OpenShift CLI 5.3.1.2. Creating and deploying Go-based Operators You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK. Procedure Create a project. Create your project directory: USD mkdir memcached-operator Change into the project directory: USD cd memcached-operator Run the operator-sdk init command to initialize the project: USD operator-sdk init \ --domain=example.com \ --repo=github.com/example-inc/memcached-operator The command uses the Go plugin by default. Create an API. Create a simple Memcached API: USD operator-sdk create api \ --resource=true \ --controller=true \ --group cache \ --version v1 \ --kind Memcached Build and push the Operator image. Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to: USD make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag> Run the Operator. Install the CRD: USD make install Deploy the project to the cluster. Set IMG to the image that you pushed: USD make deploy IMG=<registry>/<user>/<image_name>:<tag> Create a sample custom resource (CR). Create a sample CR: USD oc apply -f config/samples/cache_v1_memcached.yaml \ -n memcached-operator-system Watch for the CR to reconcile the Operator: USD oc logs deployment.apps/memcached-operator-controller-manager \ -c manager \ -n memcached-operator-system Delete a CR. Delete a CR by running the following command: USD oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system Clean up. 
Run the following command to clean up the resources that have been created as part of this procedure: USD make undeploy 5.3.1.3. steps See Operator SDK tutorial for Go-based Operators for a more in-depth walkthrough on building a Go-based Operator. 5.3.2. Operator SDK tutorial for Go-based Operators Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.18 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework) . This process is accomplished using two centerpieces of the Operator Framework: Operator SDK The operator-sdk CLI tool and controller-runtime library API Operator Lifecycle Manager (OLM) Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster Note This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators . 5.3.2.1. Prerequisites Operator SDK CLI installed OpenShift CLI ( oc ) 4.18+ installed Go 1.21+ Logged into an OpenShift Container Platform 4.18 cluster with oc with an account that has cluster-admin permissions To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret Additional resources Installing the Operator SDK CLI Getting started with the OpenShift CLI 5.3.2.2. Creating a project Use the Operator SDK CLI to create a project called memcached-operator . Procedure Create a directory for the project: USD mkdir -p USDHOME/projects/memcached-operator Change to the directory: USD cd USDHOME/projects/memcached-operator Activate support for Go modules: USD export GO111MODULE=on Run the operator-sdk init command to initialize the project: USD operator-sdk init \ --domain=example.com \ --repo=github.com/example-inc/memcached-operator Note The operator-sdk init command uses the Go plugin by default. The operator-sdk init command generates a go.mod file to be used with Go modules . 
The --repo flag is required when creating a project outside of USDGOPATH/src/ , because generated files require a valid module path. 5.3.2.2.1. PROJECT file Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Go. For example: domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: "3" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} 5.3.2.2.2. About the Manager The main program for the Operator is the main.go file, which initializes and runs the Manager . The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks. The Manager can restrict the namespace that all controllers watch for resources: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace}) By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""}) You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces: var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), }) 1 List of namespaces. 2 Creates a Cmd struct to provide shared dependencies and start components. 5.3.2.2.3. About multi-group APIs Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command: USD operator-sdk edit --multigroup=true This command updates the PROJECT file, which should look like the following example: domain: example.com layout: go.kubebuilder.io/v3 multigroup: true ... For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controllers are created in the controllers/<group>/ directory. The Dockerfile is then updated accordingly. Additional resource For more details on migrating to a multi-group project, see the Kubebuilder documentation . 5.3.2.3. Creating an API and controller Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller. Procedure Run the following command to create an API with group cache , version, v1 , and kind Memcached : USD operator-sdk create api \ --group=cache \ --version=v1 \ --kind=Memcached When prompted, enter y for creating both the resource and controller: Create Resource [y/n] y Create Controller [y/n] y Example output Writing scaffold for you to edit... api/v1/memcached_types.go controllers/memcached_controller.go ... This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go . 5.3.2.3.1. Defining the API Define the API for the Memcached custom resource (CR). 
Procedure Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status : // MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:"size"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:"nodes"` } Update the generated code for the resource type: USD make generate Tip After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type. The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement. 5.3.2.3.2. Generating CRD manifests After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, you can generate CRD manifests. Procedure Run the following command to generate and update CRD manifests: USD make manifests This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file. 5.3.2.3.2.1. About OpenAPI validation OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated. Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix. Additional resources For more details on the usage of markers in API code, see the following Kubebuilder documentation: CRD generation Markers List of OpenAPIv3 validation markers For more details about OpenAPIv3 validation schemas in CRDs, see the Kubernetes documentation . 5.3.2.4. Implementing the controller After creating a new API and controller, you can implement the controller logic. Procedure For this example, replace the generated controller file controllers/memcached_controller.go with following example implementation: Example 5.1. Example memcached_controller.go /* | [
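As a hedged sketch related to the OpenAPI validation subsection above, additional +kubebuilder:validation markers could be placed on the example MemcachedSpec; the Maximum bound shown here is an illustrative addition and is not part of the original sample, and make manifests must be rerun afterwards so the generated CRD picks up the change.

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
    // +kubebuilder:validation:Minimum=0
    // +kubebuilder:validation:Maximum=5
    // Size is the size of the memcached deployment
    Size int32 `json:"size"`
}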
"tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.38.0-ocp\",",
"tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.38.0-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.38.0-ocp\",",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"import ( \"github.com/operator-framework/operator-lib/proxy\" )",
"for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) }",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1",
"go 1.22.0 github.com/onsi/ginkgo/v2 v2.17.1 github.com/onsi/gomega v1.32.0 k8s.io/api v0.30.1 k8s.io/apimachinery v0.30.1 k8s.io/client-go v0.30.1 sigs.k8s.io/controller-runtime v0.18.4",
"go mod tidy",
"- ENVTEST_K8S_VERSION = 1.29.0 + ENVTEST_K8S_VERSION = 1.30.0",
"- KUSTOMIZE ?= USD(LOCALBIN)/kustomize-USD(KUSTOMIZE_VERSION) - CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen-USD(CONTROLLER_TOOLS_VERSION) - ENVTEST ?= USD(LOCALBIN)/setup-envtest-USD(ENVTEST_VERSION) - GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint-USD(GOLANGCI_LINT_VERSION) + KUSTOMIZE ?= USD(LOCALBIN)/kustomize + CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen + ENVTEST ?= USD(LOCALBIN)/setup-envtest + GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint",
"- KUSTOMIZE_VERSION ?= v5.3.0 - CONTROLLER_TOOLS_VERSION ?= v0.14.0 - ENVTEST_VERSION ?= release-0.17 - GOLANGCI_LINT_VERSION ?= v1.57.2 + KUSTOMIZE_VERSION ?= v5.4.2 + CONTROLLER_TOOLS_VERSION ?= v0.15.0 + ENVTEST_VERSION ?= release-0.18 + GOLANGCI_LINT_VERSION ?= v1.59.1",
"- USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD(GOLANGCI_LINT_VERSION))",
"- USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD(GOLANGCI_LINT_VERSION))",
"- @[ -f USD(1) ] || { + @[ -f \"USD(1)-USD(3)\" ] || { echo \"Downloading USDUSD{package}\" ; + rm -f USD(1) || true ; - mv \"USDUSD(echo \"USD(1)\" | sed \"s/-USD(3)USDUSD//\")\" USD(1) ; - } + mv USD(1) USD(1)-USD(3) ; + } ; + ln -sf USD(1)-USD(3) USD(1)",
"- exportloopref + - ginkgolinter - prealloc + - revive + + linters-settings: + revive: + rules: + - name: comment-spacings",
"- FROM golang:1.21 AS builder + FROM golang:1.22 AS builder",
"\"sigs.k8s.io/controller-runtime/pkg/log/zap\" + \"sigs.k8s.io/controller-runtime/pkg/metrics/filters\" var enableHTTP2 bool - flag.StringVar(&metricsAddr, \"metrics-bind-address\", \":8080\", \"The address the metric endpoint binds to.\") + var tlsOpts []func(*tls.Config) + flag.StringVar(&metricsAddr, \"metrics-bind-address\", \"0\", \"The address the metrics endpoint binds to. \"+ + \"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.\") flag.StringVar(&probeAddr, \"health-probe-bind-address\", \":8081\", \"The address the probe endpoint binds to.\") flag.BoolVar(&enableLeaderElection, \"leader-elect\", false, \"Enable leader election for controller manager. \"+ \"Enabling this will ensure there is only one active controller manager.\") - flag.BoolVar(&secureMetrics, \"metrics-secure\", false, - \"If set the metrics endpoint is served securely\") + flag.BoolVar(&secureMetrics, \"metrics-secure\", true, + \"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.\") - tlsOpts := []func(*tls.Config){} + // Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server. + // More info: + // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/metrics/server + // - https://book.kubebuilder.io/reference/metrics.html + metricsServerOptions := metricsserver.Options{ + BindAddress: metricsAddr, + SecureServing: secureMetrics, + // TODO(user): TLSOpts is used to allow configuring the TLS config used for the server. If certificates are + // not provided, self-signed certificates will be generated by default. This option is not recommended for + // production environments as self-signed certificates do not offer the same level of trust and security + // as certificates issued by a trusted Certificate Authority (CA). The primary risk is potentially allowing + // unauthorized access to sensitive metrics data. Consider replacing with CertDir, CertName, and KeyName + // to provide certificates, ensuring the server communicates using trusted and secure certificates. + TLSOpts: tlsOpts, + } + + if secureMetrics { + // FilterProvider is used to protect the metrics endpoint with authn/authz. + // These configurations ensure that only authorized users and service accounts + // can access the metrics endpoint. The RBAC are configured in 'config/rbac/kustomization.yaml'. More info: + // https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/metrics/filters#WithAuthenticationAndAuthorization + metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization + } + mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{ - Scheme: scheme, - Metrics: metricsserver.Options{ - BindAddress: metricsAddr, - SecureServing: secureMetrics, - TLSOpts: tlsOpts, - }, + Scheme: scheme, + Metrics: metricsServerOptions,",
"[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment",
"This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443",
"apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager",
"- --leader-elect + - --health-probe-bind-address=:8081",
"- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true",
"- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3",
"env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1",
"FROM registry.redhat.io/openshift4/ose-ansible-operator:v4.18",
"- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.4.2/kustomize_v5.4.2_USD(OS)_USD(ARCH).tar.gz | \\",
"[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment",
"This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443 This patch adds the args to allow securing the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-secure This patch adds the args to allow RBAC-based authn/authz the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-require-rbac",
"apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager",
"- --leader-elect + - --health-probe-bind-address=:6789",
"- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true",
"- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip install kubernetes",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir nginx-operator",
"cd nginx-operator",
"operator-sdk init --plugins=helm",
"operator-sdk create api --group demo --version v1 --kind Nginx",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system",
"oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY",
"proxy: http: \"\" https: \"\" no_proxy: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"oc delete -f config/samples/demo_v1_nginx.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1",
"FROM registry.redhat.io/openshift4/ose-helm-rhel9-operator:v4.18",
"- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.4.2/kustomize_v5.4.2_USD(OS)_USD(ARCH).tar.gz | \\",
"[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment",
"This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443 This patch adds the args to allow securing the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-secure This patch adds the args to allow RBAC-based authn/authz the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-require-rbac",
"apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager",
"- --leader-elect + - --health-probe-bind-address=:8081",
"- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true",
"- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2",
"// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{",
"spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211",
"- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2",
"relatedImage: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3",
"BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2",
"make bundle USE_IMAGE_DIGESTS=true",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }",
"module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>",
"IMAGE_TAG_BASE=quay.io/example/my-operator",
"make bundle-build bundle-push catalog-build catalog-push",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m",
"oc get catalogsource",
"NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1",
"oc get og",
"NAME AGE my-test 4h40m",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded",
"oc get pods",
"NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1",
"com.redhat.openshift.versions: \"v4.7-v4.9\" 1",
"LABEL com.redhat.openshift.versions=\"<versions>\" 1",
"spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"",
"install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default",
"spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"",
"// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"",
"import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }",
"// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }",
"// apply CredentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }",
"sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }",
"#!/bin/bash set -x AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") NAMESPACE=my-namespace SERVICE_ACCOUNT_NAME=\"my-service-account\" POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME}\" } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDSERVICE_ACCOUNT_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDSERVICE_ACCOUNT_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"oc exec operator-pod -n <namespace_name> -- cat /var/run/secrets/openshift/serviceaccount/token",
"oc exec operator-pod -n <namespace_name> -- cat /<path>/<to>/<secret_name> 1",
"aws sts assume-role-with-web-identity --role-arn USDROLEARN --role-session-name <session_name> --web-identity-token USDTOKEN",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-azure: \"true\"",
"// Get ENV var clientID := os.Getenv(\"CLIENTID\") tenantID := os.Getenv(\"TENANTID\") subscriptionID := os.Getenv(\"SUBSCRIPTIONID\") azureFederatedTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"",
"// apply CredentialsRequest on install credReqTemplate.Spec.AzureProviderSpec.AzureClientID = clientID credReqTemplate.Spec.AzureProviderSpec.AzureTenantID = tenantID credReqTemplate.Spec.AzureProviderSpec.AzureRegion = \"centralus\" credReqTemplate.Spec.AzureProviderSpec.AzureSubscriptionID = subscriptionID credReqTemplate.CloudTokenPath = azureFederatedTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>",
"<service_account_name>@<project_id>.iam.gserviceaccount.com",
"volumeMounts: - name: bound-sa-token mountPath: /var/run/secrets/openshift/serviceaccount readOnly: true volumes: # This service account token can be used to provide identity outside the cluster. - name: bound-sa-token projected: sources: - serviceAccountToken: path: token audience: openshift",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-gcp: \"true\"",
"// Get ENV var audience := os.Getenv(\"AUDIENCE\") serviceAccountEmail := os.Getenv(\"SERVICE_ACCOUNT_EMAIL\") gcpIdentityTokenFile := \"/var/run/secrets/openshift/serviceaccount/token\"",
"// apply CredentialsRequest on install credReqTemplate.Spec.GCPProviderSpec.Audience = audience credReqTemplate.Spec.GCPProviderSpec.ServiceAccountEmail = serviceAccountEmail credReqTemplate.CloudTokenPath = gcpIdentityTokenFile c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"service_account_json := secret.StringData[\"service_account.json\"]",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle βββ tests βββ scorecard βββ config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.38.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.38.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.38.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.38.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.38.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.38.0 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"operator-sdk bundle validate <bundle_dir_or_image> <flags>",
"./bundle βββ manifests β βββ cache.my.domain_memcacheds.yaml β βββ memcached-operator.clusterserviceversion.yaml βββ metadata βββ annotations.yaml",
"INFO[0000] All validation tests have completed successfully",
"ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV",
"WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully",
"operator-sdk bundle validate -h",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"operator-sdk bundle validate ./bundle",
"operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description",
"operator-sdk bundle validate ./bundle --select-optional name=multiarch",
"INFO[0020] All validation tests have completed successfully",
"ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]",
"WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): [\"amd64\" \"arm64\" \"ppc64le\" \"s390x\"]. Be aware that your Operator manager image [\"quay.io/example-org/test-operator:v1alpha1\"] provides this support. Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures",
"// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)",
"operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)",
"../prometheus",
"package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }",
"func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"oc apply -f config/prometheus/role.yaml",
"oc apply -f config/prometheus/rolebinding.yaml",
"oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"",
"operator-sdk init --plugins=ansible --domain=testmetrics.com",
"operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role",
"--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1",
"oc create -f config/samples/metrics_v1_testmetrics.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m",
"oc get ep",
"NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m",
"token=`oc create token prometheus-k8s -n openshift-monitoring`",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter",
"HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge",
"HELP my_gauge_metric Create my gauge and set it to 2.",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe",
"HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"docker manifest inspect <image_manifest> 1",
"{ \"manifests\": [ { \"digest\": \"sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" }, \"size\": 528 }, { \"digest\": \"sha256:30e6d35703c578ee703230b9dc87ada2ba958c1928615ac8a674fcbbcbb0f281\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm64\", \"os\": \"linux\", \"variant\": \"v8\" }, \"size\": 528 }, ] }",
"docker inspect <image>",
"FROM golang:1.19 as builder ARG TARGETOS ARG TARGETARCH RUN CGO_ENABLED=0 GOOS=USD{TARGETOS:-linux} GOARCH=USD{TARGETARCH} go build -a -o manager main.go 1",
"PLATFORMS ?= linux/arm64,linux/amd64 1 .PHONY: docker-buildx",
"make docker-buildx IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: 2 - matchExpressions: 3 - key: kubernetes.io/arch 4 operator: In values: - amd64 - arm64 - ppc64le - s390x - key: kubernetes.io/os 5 operator: In values: - linux",
"Template: corev1.PodTemplateSpec{ Spec: corev1.PodSpec{ Affinity: &corev1.Affinity{ NodeAffinity: &corev1.NodeAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ NodeSelectorTerms: []corev1.NodeSelectorTerm{ { MatchExpressions: []corev1.NodeSelectorRequirement{ { Key: \"kubernetes.io/arch\", Operator: \"In\", Values: []string{\"amd64\",\"arm64\",\"ppc64le\",\"s390x\"}, }, { Key: \"kubernetes.io/os\", Operator: \"In\", Values: []string{\"linux\"}, }, }, }, }, }, }, }, SecurityContext: &corev1.PodSecurityContext{ }, Containers: []corev1.Container{{ }}, },",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 1 - preference: matchExpressions: 2 - key: kubernetes.io/arch 3 operator: In 4 values: - amd64 - arm64 weight: 90 5",
"cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }",
"err := cfg.Execute(ctx)",
"packagemanifests/ βββ etcd βββ 0.0.1 β βββ etcdcluster.crd.yaml β βββ etcdoperator.clusterserviceversion.yaml βββ 0.0.2 β βββ etcdbackup.crd.yaml β βββ etcdcluster.crd.yaml β βββ etcdoperator.v0.0.2.clusterserviceversion.yaml β βββ etcdrestore.crd.yaml βββ etcd.package.yaml",
"bundle/ βββ bundle-0.0.1 β βββ bundle.Dockerfile β βββ manifests β β βββ etcdcluster.crd.yaml β β βββ etcdoperator.clusterserviceversion.yaml β βββ metadata β β βββ annotations.yaml β βββ tests β βββ scorecard β βββ config.yaml βββ bundle-0.0.2 βββ bundle.Dockerfile βββ manifests β βββ etcdbackup.crd.yaml β βββ etcdcluster.crd.yaml β βββ etcdoperator.v0.0.2.clusterserviceversion.yaml β βββ etcdrestore.crd.yaml βββ metadata β βββ annotations.yaml βββ tests βββ scorecard βββ config.yaml",
"operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3",
"operator-sdk run bundle <bundle_image_name>:<tag>",
"INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operators/developing-operators |
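The WaitForSecret helper and the service_account.json lookup listed at the start of this command set appear as isolated snippets. The following Go sketch shows one way they might be combined into a small program. It is an illustrative assumption, not part of the source material: the in-cluster client construction, the namespace, and the secret name are placeholders, and WaitForSecret is assumed to be defined in the same package as shown above.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Build a client from the in-cluster configuration; this assumes the code
	// runs in a pod whose service account is allowed to read Secrets.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// WaitForSecret is the polling helper shown earlier in this command set.
	// The namespace and secret name below are placeholders.
	secret, err := WaitForSecret(client, "my-operator-namespace", "service-account-secret")
	if err != nil {
		log.Fatal(err)
	}

	// The earlier snippet reads secret.StringData; note that for Secrets
	// retrieved from the API the decoded payload is returned in secret.Data
	// as raw bytes, so this sketch reads from Data instead.
	serviceAccountJSON := secret.Data["service_account.json"]
	fmt.Printf("retrieved %d bytes of service account JSON\n", len(serviceAccountJSON))
}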
23.3. Authenticating to an Identity Management Client with a Smart Card | 23.3. Authenticating to an Identity Management Client with a Smart Card As an Identity Management user with multiple role accounts in the Identity Management server, you can authenticate with your smart card to a desktop client system joined to the Identity Management domain. This enables you to use the client system as the selected role. For a basic overview of the supported options, see: Section 23.3.1, "Smart Card-based Authentication Options Supported on Identity Management Clients" For information on configuring the environment to enable the authentication, see: Section 23.3.2, "Preparing the Identity Management Client for Smart-card Authentication" For information on how to authenticate, see: Section 23.3.3, "Authenticating on an Identity Management Client with a Smart Card Using the Console Login" 23.3.1. Smart Card-based Authentication Options Supported on Identity Management Clients Users in Identity Management can use the following options when authenticating using a smart card on Identity Management clients. Local authentication Local authentication includes authentication using: the text console the graphical console, such as the Gnome Display Manager (GDM) local authentication services, such as su or sudo Remote authentication with ssh Certificates on a smart card are stored together with the PIN-protected SSH private key. Smart card-based authentication using other services, such as FTP, is not supported. 23.3.2. Preparing the Identity Management Client for Smart-card Authentication As the Identity Management administrator, perform these steps: On the server, create a shell script to configure the client. Use the ipa-advise config-client-for-smart-card-auth command, and save its output to a file: Open the script file, and review its contents. Add execute permissions to the file using the chmod utility: Copy the script to the client, and run it. Add the path to the PEM file with the certificate authority (CA) that signed the smart card certificate: Additionally, if an external certificate authority (CA) signed the certificate on the smart card, add the smart card CA as a trusted CA: On the Identity Management server, install the CA certificate: Repeat ipa-certupdate also on all replicas and clients. Restart the HTTP server: Repeat systemctl restart httpd also on all replicas. Note SSSD enables administrators to tune the certificate verification process with the certificate_verification parameter, for example if the Online Certificate Status Protocol (OCSP) servers defined in the certificate are not reachable from the client. For more information, see the sssd.conf (5) man page. 23.3.3. Authenticating on an Identity Management Client with a Smart Card Using the Console Login To authenticate as an Identity Management user, enter the user name and PIN. When logging in from the command line: When logging in using the Gnome Desktop Manager (GDM), GDM prompts you for the smart card PIN after you select the required user: Figure 23.13. Entering the smart card PIN in the Gnome Desktop Manager To authenticate as an Active Directory user, enter the user name in a format that uses the NetBIOS domain name: AD.EXAMPLE.COM\ad_user or [email protected] . If the authentication fails, see Section A.4, "Investigating Smart Card Authentication Failures" . 23.3.4. Authenticating to the Remote System from the Local System On the local system, perform these steps: Insert the smart card. 
Launch ssh , and specify the PKCS#11 library with the -I option: As an Identity Management user: As an Active Directory user: Optional. Use the id utility to check that you are logged in as the intended user. As an Identity Management user: As an Active Directory user: If the authentication fails, see Section A.4, "Investigating Smart Card Authentication Failures" . 23.3.5. Additional Resources Authentication using ssh with a smart card does not obtain a ticket-granting ticket (TGT) on the remote system. To obtain a TGT on the remote system, the administrator must configure Kerberos on the local system and enable Kerberos delegation. For an example of the required configuration, see this Kerberos knowledge base entry . For details on smart-card authentication with OpenSSH, see Using Smart Cards to Supply Credentials to OpenSSH in the Security Guide . | [
"ipa-advise config-client-for-smart-card-auth > client_smart_card_script.sh",
"chmod +x client_smart_card_script.sh",
"./client_smart_card_script.sh CA_cert.pem",
"ipa-cacert-manage -n \"SmartCard CA\" -t CT,C,C install ca.pem ipa-certupdate",
"systemctl restart httpd",
"client login: idm_user PIN for PIV Card Holder pin (PIV_II) for user [email protected]:",
"ssh -I /usr/lib64/opensc-pkcs11.so -l idm_user server.idm.example.com Enter PIN for 'PIV_II (PIV Card Holder pin)': Last login: Thu Apr 6 12:49:32 2017 from 10.36.116.42",
"ssh -I /usr/lib64/opensc-pkcs11.so -l [email protected] server.idm.example.com Enter PIN for 'PIV_II (PIV Card Holder pin)': Last login: Thu Apr 6 12:49:32 2017 from 10.36.116.42",
"id uid=1928200001(idm_user) gid=1928200001(idm_user) groups=1928200001(idm_user) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
"id uid=1171201116([email protected]) gid=1171201116([email protected]) groups=1171201116([email protected]),1171200513(domain [email protected]) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/auth-idm-client-sc |
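The note in the smart-card section above points to the certificate_verification parameter in sssd.conf for tuning certificate checks, for example when the OCSP responders named in the certificate are unreachable from the client. The fragment below is a minimal sketch of that idea, not part of the original procedure; no_ocsp is one documented value, and the rest of the file is assumed to already exist on the client (review the sssd.conf(5) man page before changing it).

# /etc/sssd/sssd.conf (fragment)
[sssd]
# Skip the OCSP check when the responders listed in the certificate
# cannot be reached from this client.
certificate_verification = no_ocsp

After editing the file, restart SSSD, for example with systemctl restart sssd, so that the new setting takes effect.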
7.0 Release Notes | 7.0 Release Notes Red Hat Enterprise Linux 7 Release Notes for Red Hat Enterprise Linux 7.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/index |
Chapter 13. Real-Time Kernel | Chapter 13. Real-Time Kernel About Red Hat Enterprise Linux for Real Time Kernel The Red Hat Enterprise Linux for Real Time Kernel is designed to enable fine-tuning for systems with extremely high determinism requirements. The major increase in the consistency of results can, and should, be achieved by tuning the standard kernel. The real-time kernel provides a further small increase on top of the increase achieved by tuning the standard kernel. The real-time kernel is available in the rhel-7-server-rt-rpms repository. The Installation Guide contains the installation instructions, and the rest of the documentation is available at Product Documentation for Red Hat Enterprise Linux for Real Time . kernel-rt sources updated The kernel-rt sources have been upgraded to be based on the latest Red Hat Enterprise Linux kernel source tree, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1553351 ) The SCHED_DEADLINE scheduler class for real time kernel fully supported The SCHED_DEADLINE scheduler class for the real-time kernel, which was introduced in Red Hat Enterprise Linux 7.4 as a Technology Preview, is now fully supported. The scheduler enables predictable task scheduling based on application deadlines. SCHED_DEADLINE benefits periodic workloads by guaranteeing timing isolation, which is based not only on a fixed priority but also on the applications' timing requirements. (BZ#1297061) rt-entsk prevents IPI generation and delay of realtime tasks The chrony daemon, chronyd , enables or disables network timestamping, which activates a static key within the kernel. When a static key is enabled or disabled, three inter-processor interrupts (IPIs) are generated to notify other processors of the activation. Previously, rapid activation and deactivation of the chronyd static keys led to a delay of a realtime task. Consequently, a latency spike occurred. With this update, systemd starts the rt-entsk program, which keeps timestamping enabled and prevents the IPIs from being generated. As a result, IPI generation no longer occurs in rapid succession, and realtime tasks are no longer delayed due to this bug. (BZ# 1616038 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_real-time_kernel
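The SCHED_DEADLINE description above stays abstract. As a concrete illustration, the chrt utility from util-linux can start a task under that scheduler class; the command below is a sketch in which the runtime, deadline, and period values (in nanoseconds) and the application path are assumptions chosen for a 10 ms periodic workload, not values from the release note. Deadline support was added to chrt in upstream util-linux 2.28, so verify that your installed chrt accepts the --deadline option.

# Give ./my_rt_app 5 ms of runtime in every 10 ms period, with the deadline
# equal to the period (all values are in nanoseconds; the final 0 is the
# priority argument that chrt requires for deadline scheduling).
chrt --deadline --sched-runtime 5000000 --sched-deadline 10000000 \
     --sched-period 10000000 0 ./my_rt_app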
Chapter 9. Configuring a basic overcloud with pre-provisioned nodes | Chapter 9. Configuring a basic overcloud with pre-provisioned nodes This chapter contains basic configuration procedures that you can use to configure a Red Hat OpenStack Platform (RHOSP) environment with pre-provisioned nodes. This scenario differs from the standard overcloud creation scenarios in several ways: You can provision nodes with an external tool and let the director control the overcloud configuration only. You can use nodes without relying on the director provisioning methods. This is useful if you want to create an overcloud without power management control, or use networks with DHCP/PXE boot restrictions. The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or OpenStack Image (glance) to manage nodes. Pre-provisioned nodes can use a custom partitioning layout that does not rely on the QCOW2 overcloud-full image. This scenario includes only basic configuration with no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications with the instructions in the Advanced Overcloud Customization guide. Important You cannot combine pre-provisioned nodes with director-provisioned nodes. 9.1. Pre-provisioned node requirements Before you begin deploying an overcloud with pre-provisioned nodes, ensure that the following configuration is present in your environment: The director node that you created in Chapter 4, Installing director on the undercloud . A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create. These machines must comply with the requirements set for each node type. These nodes require Red Hat Enterprise Linux 8.4 installed as the host operating system. Red Hat recommends using the latest version available. One network connection for managing the pre-provisioned nodes. This scenario requires uninterrupted SSH access to the nodes for orchestration agent configuration. One network connection for the Control Plane network. There are two main scenarios for this network: Using the Provisioning Network as the Control Plane, which is the default scenario. This network is usually a layer-3 (L3) routable network connection from the pre-provisioned nodes to director. The examples for this scenario use following IP address assignments: Table 9.1. Provisioning Network IP assignments Node name IP address Director 192.168.24.1 Controller 0 192.168.24.2 Compute 0 192.168.24.3 Using a separate network. In situations where the director's Provisioning network is a private non-routable network, you can define IP addresses for nodes from any subnet and communicate with director over the Public API endpoint. For more information about the requirements for this scenario, see Section 9.6, "Using a separate network for pre-provisioned nodes" . All other network types in this example also use the Control Plane network for OpenStack services. However, you can create additional networks for other network traffic types. If any nodes use Pacemaker resources, the service user hacluster and the service group haclient must have a UID/GID of 189 . This is due to CVE-2018-16877 . If you installed Pacemaker together with the operating system, the installation creates these IDs automatically. 
If the ID values are set incorrectly, follow the steps in the article OpenStack minor update / fast-forward upgrade can fail on the controller nodes at pacemaker step with "Could not evaluate: backup_cib" to change the ID values. To prevent some services from binding to an incorrect IP address and causing deployment failures, make sure that the /etc/hosts file does not include the node-name=127.0.0.1 mapping. 9.2. Creating a user on pre-provisioned nodes When you configure an overcloud with pre-provisioned nodes, director requires SSH access to the overcloud nodes. On the pre-provisioned nodes, you must create a user with SSH key authentication and configure passwordless sudo access for that user. After you create a user on pre-provisioned nodes, you can use the --overcloud-ssh-user and --overcloud-ssh-key options with the openstack overcloud deploy command to create an overcloud with pre-provisioned nodes. By default, the values for the overcloud SSH user and overcloud SSH key are the stack user and ~/.ssh/id_rsa . To create the stack user, complete the following steps. Procedure On each overcloud node, create the stack user and set a password. For example, run the following commands on the Controller node: Disable password requirements for this user when using sudo : After you create and configure the stack user on all pre-provisioned nodes, copy the stack user's public SSH key from the director node to each overcloud node. For example, to copy the director's public SSH key to the Controller node, run the following command: Important To copy your SSH keys, you might have to temporarily set PasswordAuthentication Yes in the SSH configuration of each overcloud node. After you copy the SSH keys, set PasswordAuthentication No and use the SSH keys to authenticate in the future. 9.3. Registering the operating system for pre-provisioned nodes Each node requires access to a Red Hat subscription. Complete the following steps on each node to register your nodes with the Red Hat Content Delivery Network. Important Enable only the repositories listed. Additional repositories can cause package and software conflicts. Do not enable any additional repositories. Procedure Run the registration command and enter your Customer Portal user name and password when prompted: Find the entitlement pool for Red Hat OpenStack Platform 16.2: Use the pool ID located in the step to attach the Red Hat OpenStack Platform 16 entitlements: Disable all default repositories: Enable the required Red Hat Enterprise Linux repositories. For x86_64 systems: For POWER systems: Set the container-tools repository module to version 3.0 : If the overcloud uses Ceph Storage nodes, enable the relevant Ceph Storage repositories: Lock the RHEL version on all overcloud nodes except Red Hat Ceph Storage nodes: Update your system to ensure you have the latest base system packages: The node is now ready to use for your overcloud. 9.4. Configuring SSL/TLS access to director If the director uses SSL/TLS, the pre-provisioned nodes require the certificate authority file used to sign the director's SSL/TLS certificates. If you use your own certificate authority, perform the following actions on each overcloud node. Procedure Copy the certificate authority file to the /etc/pki/ca-trust/source/anchors/ directory on each pre-provisioned node. Run the following command on each overcloud node: These steps ensure that the overcloud nodes can access the director's Public API over SSL/TLS. 9.5. 
Configuring networking for the control plane The pre-provisioned overcloud nodes obtain metadata from director using standard HTTP requests. This means all overcloud nodes require L3 access to either: The director Control Plane network, which is the subnet that you define with the network_cidr parameter in your undercloud.conf file. The overcloud nodes require either direct access to this subnet or routable access to the subnet. The director Public API endpoint, that you specify with the undercloud_public_host parameter in your undercloud.conf file. This option is available if you do not have an L3 route to the Control Plane or if you want to use SSL/TLS communication. For more information about configuring your overcloud nodes to use the Public API endpoint, see Section 9.6, "Using a separate network for pre-provisioned nodes" . Director uses the Control Plane network to manage and configure a standard overcloud. For an overcloud with pre-provisioned nodes, your network configuration might require some modification to accommodate communication between the director and the pre-provisioned nodes. Using network isolation You can use network isolation to group services to use specific networks, including the Control Plane. There are multiple network isolation strategies in the the Advanced Overcloud Customization guide. You can also define specific IP addresses for nodes on the Control Plane. For more information about isolating networks and creating predictable node placement strategies, see the following sections in the Advanced Overcloud Customizations guide: "Basic network isolation" "Controlling Node Placement" Note If you use network isolation, ensure that your NIC templates do not include the NIC used for undercloud access. These templates can reconfigure the NIC, which introduces connectivity and configuration problems during deployment. Assigning IP addresses If you do not use network isolation, you can use a single Control Plane network to manage all services. This requires manual configuration of the Control Plane NIC on each node to use an IP address within the Control Plane network range. If you are using the director Provisioning network as the Control Plane, ensure that the overcloud IP addresses that you choose are outside of the DHCP ranges for both provisioning ( dhcp_start and dhcp_end ) and introspection ( inspection_iprange ). During standard overcloud creation, director creates OpenStack Networking (neutron) ports and automatically assigns IP addresses to the overcloud nodes on the Provisioning / Control Plane network. However, this can cause director to assign different IP addresses to the ones that you configure manually for each node. In this situation, use a predictable IP address strategy to force director to use the pre-provisioned IP assignments on the Control Plane. For example, you can use an environment file ctlplane-assignments.yaml with the following IP assignments to implement a predictable IP strategy: In this example, the OS::TripleO::DeployedServer::ControlPlanePort resource passes a set of parameters to director and defines the IP assignments of your pre-provisioned nodes. Use the DeployedServerPortMap parameter to define the IP addresses and subnet CIDRs that correspond to each overcloud node. The mapping defines the following attributes: The name of the assignment, which follows the format <node_hostname>-<network> where the <node_hostname> value matches the short host name for the node, and <network> matches the lowercase name of the network. 
For example: controller-0-ctlplane for controller-0.example.com and compute-0-ctlplane for compute-0.example.com . The IP assignments, which use the following parameter patterns: fixed_ips/ip_address - Defines the fixed IP addresses for the control plane. Use multiple ip_address parameters in a list to define multiple IP addresses. subnets/cidr - Defines the CIDR value for the subnet. A later section in this chapter uses the resulting environment file ( ctlplane-assignments.yaml ) as part of the openstack overcloud deploy command. 9.6. Using a separate network for pre-provisioned nodes By default, director uses the Provisioning network as the overcloud Control Plane. However, if this network is isolated and non-routable, nodes cannot communicate with the director Internal API during configuration. In this situation, you might need to define a separate network for the nodes and configure them to communicate with the director over the Public API. There are several requirements for this scenario: The overcloud nodes must accommodate the basic network configuration from Section 9.5, "Configuring networking for the control plane" . You must enable SSL/TLS on the director for Public API endpoint usage. For more information, see Section 4.2, "Director configuration parameters" and Chapter 20, Configuring custom SSL/TLS certificates . You must define an accessible fully qualified domain name (FQDN) for director. This FQDN must resolve to a routable IP address for the director. Use the undercloud_public_host parameter in the undercloud.conf file to set this FQDN. The examples in this section use IP address assignments that differ from the main scenario: Table 9.2. Provisioning network IP assignments Node Name IP address or FQDN Director (Internal API) 192.168.24.1 (Provisioning Network and Control Plane) Director (Public API) 10.1.1.1 / director.example.com Overcloud Virtual IP 192.168.100.1 Controller 0 192.168.100.2 Compute 0 192.168.100.3 The following sections provide additional configuration for situations that require a separate network for overcloud nodes. IP address assignments The method for IP assignments is similar to Section 9.5, "Configuring networking for the control plane" . However, since the Control Plane is not routable from the deployed servers, you must use the DeployedServerPortMap parameter to assign IP addresses from your chosen overcloud node subnet, including the virtual IP address to access the Control Plane. The following example is a modified version of the ctlplane-assignments.yaml environment file from Section 9.5, "Configuring networking for the control plane" that accommodates this network architecture: 1 The RedisVipPort and OVNDBsVipPort resources are mapped to network/ports/noop.yaml . This mapping is necessary because the default Redis and OVNDBs VIP addresses come from the Control Plane. In this situation, use a noop to disable this Control Plane mapping. 9.7. Mapping pre-provisioned node hostnames When you configure pre-provisioned nodes, you must map heat-based hostnames to their actual hostnames so that ansible-playbook can reach a resolvable host. Use the HostnameMap to map these values. Procedure Create an environment file, for example hostname-map.yaml , and include the HostnameMap parameter and the hostname mappings. Use the following syntax: The [HEAT HOSTNAME] usually conforms to the following convention: [STACK NAME]-[ROLE]-[INDEX] : Save the hostname-map.yaml file. 9.8. 
Configuring Ceph Storage for pre-provisioned nodes Complete the following steps on the undercloud host to configure ceph-ansible for nodes that are already deployed. Procedure On the undercloud host, create an environment variable, OVERCLOUD_HOSTS , and set the variable to a space-separated list of IP addresses of the overcloud hosts that you want to use as Ceph clients: The default overcloud plan name is overcloud . If you use a different name, create an environment variable OVERCLOUD_PLAN to store your custom name: Replace <custom-stack-name> with the name of your stack. Run the enable-ssh-admin.sh script to configure a user on the overcloud nodes that Ansible can use to configure Ceph clients: When you run the openstack overcloud deploy command, Ansible configures the hosts that you define in the OVERCLOUD_HOSTS variable as Ceph clients. 9.9. Creating the overcloud with pre-provisioned nodes The overcloud deployment uses the standard CLI methods from Section 7.14, "Deployment command" . For pre-provisioned nodes, the deployment command requires some additional options and environment files from the core heat template collection: --disable-validations - Use this option to disable basic CLI validations for services not used with pre-provisioned infrastructure. If you do not disable these validations, the deployment fails. environments/deployed-server-environment.yaml - Include this environment file to create and configure the pre-provisioned infrastructure. This environment file substitutes the OS::Nova::Server resources with OS::Heat::DeployedServer resources. The following command is an example overcloud deployment command with the environment files specific to the pre-provisioned architecture: The --overcloud-ssh-user and --overcloud-ssh-key options are used to SSH into each overcloud node during the configuration stage, create an initial tripleo-admin user, and inject an SSH key into /home/tripleo-admin/.ssh/authorized_keys . To inject the SSH key, specify the credentials for the initial SSH connection with --overcloud-ssh-user and --overcloud-ssh-key (defaults to ~/.ssh/id_rsa ). To limit exposure to the private key that you specify with the --overcloud-ssh-key option, director never passes this key to any API service, such as heat or the Workflow service (mistral), and only the director openstack overcloud deploy command uses this key to enable access for the tripleo-admin user. 9.10. Overcloud deployment output When the overcloud creation completes, director provides a recap of the Ansible plays that were executed to configure the overcloud: Director also provides details to access your overcloud. 9.11. Accessing the overcloud Director generates a script to configure and help authenticate interactions with your overcloud from the undercloud. Director saves this file, overcloudrc , in the home directory of the stack user. Run the following command to use this file: This command loads the environment variables that are necessary to interact with your overcloud from the undercloud CLI. The command prompt changes to indicate this: To return to interacting with the undercloud, run the following command: 9.12. Scaling pre-provisioned nodes The process for scaling pre-provisioned nodes is similar to the standard scaling procedures in Chapter 16, Scaling overcloud nodes . However, the process to add new pre-provisioned nodes differs because pre-provisioned nodes do not use the standard registration and management process from OpenStack Bare Metal (ironic) and OpenStack Compute (nova). 
Scaling up pre-provisioned nodes When scaling up the overcloud with pre-provisioned nodes, you must configure the orchestration agent on each node to correspond to the director node count. Perform the following actions to scale up overcloud nodes: Prepare the new pre-provisioned nodes according to Section 9.1, "Pre-provisioned node requirements" . Scale up the nodes. For more information, see Chapter 16, Scaling overcloud nodes . After you execute the deployment command, wait until the director creates the new node resources and launches the configuration. Scaling down pre-provisioned nodes When scaling down the overcloud with pre-provisioned nodes, follow the scale down instructions in Chapter 16, Scaling overcloud nodes . In scale-down operations, you can use hostnames for both OSP provisioned or pre-provisioned nodes. You can also use the UUID for OSP provisioned nodes. However, there is no UUID for pre-provisoned nodes, so you always use hostnames. Pass the hostname or UUID value to the openstack overcloud node delete command. Procedure Identify the name of the node that you want to remove. Pass the corresponding node name from the stack_name column to the openstack overcloud node delete command: Replace <overcloud> with the name or UUID of the overcloud stack. Replace <stack_name> with the name of the node that you want to remove. You can include multiple node names in the openstack overcloud node delete command. Ensure that the openstack overcloud node delete command runs to completion: The status of the overcloud stack shows UPDATE_COMPLETE when the delete operation is complete. After you remove overcloud nodes from the stack, power off these nodes. In a standard deployment, the bare metal services on the director control this function. However, with pre-provisioned nodes, you must either manually shut down these nodes or use the power management control for each physical system. If you do not power off the nodes after removing them from the stack, they might remain operational and reconnect as part of the overcloud environment. After you power off the removed nodes, reprovision them to a base operating system configuration so that they do not unintentionally join the overcloud in the future Note Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The scale down process only removes the node from the overcloud stack and does not uninstall any packages. Removing a pre-provisioned overcloud To remove an entire overcloud that uses pre-provisioned nodes, see Section 12.6, "Removing the overcloud" for the standard overcloud remove procedure. After you remove the overcloud, power off all nodes and reprovision them to a base operating system configuration. Note Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The removal process only deletes the overcloud stack and does not uninstall any packages. | [
"useradd stack passwd stack # specify a password",
"echo \"stack ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/stack chmod 0440 /etc/sudoers.d/stack",
"[stack@director ~]USD ssh-copy-id [email protected]",
"[heat-admin@controller-0 ~]USD sudo subscription-manager register",
"[heat-admin@controller-0 ~]USD sudo subscription-manager list --available --all --matches=\"Red Hat OpenStack\"",
"[heat-admin@controller-0 ~]USD sudo subscription-manager attach --pool=pool_id",
"[heat-admin@controller-0 ~]USD sudo subscription-manager repos --disable=*",
"[heat-admin@controller-0 ~]USD sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=openstack-16.2-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms",
"[heat-admin@controller-0 ~]USD sudo subscription-manager repos --enable=rhel-8-for-ppc64le-baseos-rpms --enable=rhel-8-for-ppc64le-appstream-rpms --enable=rhel-8-for-ppc64le-highavailability-rpms --enable=ansible-2.8-for-rhel-8-ppc64le-rpms --enable=openstack-16-for-rhel-8-ppc64le-rpms --enable=fast-datapath-for-rhel-8-ppc64le-rpms",
"[heat-admin@controller-0 ~]USD sudo dnf module disable -y container-tools:rhel8 [heat-admin@controller-0 ~]USD sudo dnf module enable -y container-tools:3.0",
"[heat-admin@cephstorage-0 ~]USD sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=openstack-16.2-deployment-tools-for-rhel-8-x86_64-rpms",
"[heat-admin@controller-0 ~]USD sudo subscription-manager release --set=8.4",
"[heat-admin@controller-0 ~]USD sudo dnf update -y [heat-admin@controller-0 ~]USD sudo reboot",
"sudo update-ca-trust extract",
"resource_registry: OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml parameter_defaults: DeployedServerPortMap: controller-0-ctlplane: fixed_ips: - ip_address: 192.168.24.2 subnets: - cidr: 192.168.24.0/24 network: tags: 192.168.24.0/24 compute-0-ctlplane: fixed_ips: - ip_address: 192.168.24.3 subnets: - cidr: 192.168.24.0/24 network: tags: - 192.168.24.0/24",
"resource_registry: OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml OS::TripleO::Network::Ports::OVNDBsVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml 1 parameter_defaults: NeutronPublicInterface: eth1 DeployedServerPortMap: control_virtual_ip: fixed_ips: - ip_address: 192.168.100.1 subnets: - cidr: 24 controller-0-ctlplane: fixed_ips: - ip_address: 192.168.100.2 subnets: - cidr: 24 compute-0-ctlplane: fixed_ips: - ip_address: 192.168.100.3 subnets: - cidr: 24",
"parameter_defaults: HostnameMap: [HEAT HOSTNAME]: [ACTUAL HOSTNAME] [HEAT HOSTNAME]: [ACTUAL HOSTNAME]",
"parameter_defaults: HostnameMap: overcloud-controller-0: controller-00-rack01 overcloud-controller-1: controller-01-rack02 overcloud-controller-2: controller-02-rack03 overcloud-novacompute-0: compute-00-rack01 overcloud-novacompute-1: compute-01-rack01 overcloud-novacompute-2: compute-02-rack01",
"export OVERCLOUD_HOSTS=\"192.168.1.8 192.168.1.42\"",
"export OVERCLOUD_PLAN=\"<custom-stack-name>\"",
"bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh",
"source ~/stackrc (undercloud) USD openstack overcloud deploy --disable-validations -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml -e /home/stack/templates/hostname-map.yaml --overcloud-ssh-user stack --overcloud-ssh-key ~/.ssh/id_rsa <OTHER OPTIONS>",
"PLAY RECAP ************************************************************* overcloud-compute-0 : ok=160 changed=67 unreachable=0 failed=0 overcloud-controller-0 : ok=210 changed=93 unreachable=0 failed=0 undercloud : ok=10 changed=7 unreachable=0 failed=0 Tuesday 15 October 2018 18:30:57 +1000 (0:00:00.107) 1:06:37.514 ****** ========================================================================",
"Ansible passed. Overcloud configuration completed. Overcloud Endpoint: http://192.168.24.113:5000 Overcloud Horizon Dashboard URL: http://192.168.24.113:80/dashboard Overcloud rc file: /home/stack/overcloudrc Overcloud Deployed",
"(undercloud) USD source ~/overcloudrc",
"(overcloud) USD",
"(overcloud) USD source ~/stackrc (undercloud) USD",
"openstack stack resource list overcloud -n5 --filter type=OS::TripleO::ComputeDeployedServerServer",
"openstack overcloud node delete --stack <overcloud> <stack>",
"openstack stack list"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_configuring-a-basic-overcloud-with-pre-provisioned-nodes |
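Section 9.1 above requires the hacluster service user and the haclient service group to use UID/GID 189 on any pre-provisioned node that runs Pacemaker resources. The commands below are a generic verification sketch, not part of the original procedure; run them on each node before deployment and follow the referenced knowledge base article if the values differ.

# On each pre-provisioned node that will run Pacemaker resources:
id hacluster            # expect: uid=189(hacluster) gid=189(haclient) ...
getent group haclient   # expect: haclient:x:189: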
Chapter 35. Configuring TLS for Identity Management | Chapter 35. Configuring TLS for Identity Management This document describes how to configure an Identity Management server to require the TLS protocol version 1.2 in Red Hat Enterprise Linux 7.3 and later. TLS 1.2 is considered more secure than earlier versions of TLS. If your IdM server is deployed in an environment with high security requirements, you can configure it to forbid communication using protocols that are less secure than TLS 1.2. Important Repeat these steps on every IdM server where you want to use TLS 1.2. 35.1. Configuring the httpd Daemon Open the /etc/httpd/conf.d/nss.conf file, and set the following values for the NSSProtocol and NSSCipherSuite entries: Alternatively, use the following commands to set the values for you: Restart the httpd daemon: | [
"NSSProtocol TLSv1.2 NSSCipherSuite +ecdhe_ecdsa_aes_128_sha,+ecdhe_ecdsa_aes_256_sha,+ecdhe_rsa_aes_128_sha,+ecdhe_rsa_aes_256_sha,+rsa_aes_128_sha,+rsa_aes_256_sha",
"sed -i 's/^NSSProtocol .*/NSSProtocol TLSv1.2/' /etc/httpd/conf.d/nss.conf sed -i 's/^NSSCipherSuite .*/NSSCipherSuite +ecdhe_ecdsa_aes_128_sha,+ecdhe_ecdsa_aes_256_sha,+ecdhe_rsa_aes_128_sha,+ecdhe_rsa_aes_256_sha,+rsa_aes_128_sha,+rsa_aes_256_sha/' /etc/httpd/conf.d/nss.conf",
"systemctl restart httpd"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/configuring-tls |
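After restarting httpd you can confirm that the server now negotiates only TLS 1.2. The openssl commands below are a generic verification sketch that is not part of the original chapter; the host name is a placeholder and the exact failure message depends on the openssl version in use.

# Should complete the handshake and report the protocol as TLSv1.2:
echo | openssl s_client -connect ipaserver.example.com:443 -tls1_2 2>/dev/null | grep "Protocol"

# Should fail to complete the handshake, because protocols older than TLS 1.2
# are no longer offered by the server:
echo | openssl s_client -connect ipaserver.example.com:443 -tls1_1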
Chapter 8. Kubernetes | Chapter 8. Kubernetes This guide describes different ways to configure and deploy a Camel Quarkus application on Kubernetes. It also describes some specific use cases for Knative and Service Binding. 8.1. Kubernetes Quarkus supports generating resources for vanilla Kubernetes, OpenShift and Knative. Furthermore, Quarkus can deploy the application to a target Kubernetes cluster by applying the generated manifests to the target cluster's API Server. For more information, see the Quarkus Kubernetes guide . 8.2. Knative The Camel Quarkus extensions whose consumers support Knative deployment are: camel-quarkus-grpc camel-quarkus-knative camel-quarkus-platform-http camel-quarkus-rest camel-quarkus-servlet camel-quarkus-telegram with webhook camel-quarkus-vertx-websocket 8.3. Service binding Quarkus also supports the Service Binding Specification for Kubernetes to bind services to applications. The following Camel Quarkus extensions can be used with Service Binding: camel-quarkus-kafka | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-extensions-reference-kubernetes
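As a concrete illustration of the deployment options above, the application.properties fragment below shows how a Camel Quarkus project might target Knative and build a container image. This is a sketch rather than configuration taken from the guide: the property names come from the Quarkus Kubernetes and container-image extensions, and the registry and group values are placeholders.

# application.properties (fragment)
# Generate Knative manifests instead of vanilla Kubernetes resources.
quarkus.kubernetes.deployment-target=knative
# Build and push a container image as part of the build.
quarkus.container-image.build=true
quarkus.container-image.push=true
quarkus.container-image.registry=quay.io
quarkus.container-image.group=my-org

Running the build with -Dquarkus.kubernetes.deploy=true would then apply the generated manifests to the currently configured cluster, assuming the matching Quarkus extensions are present in the project.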
Installing and configuring central authentication for the Ansible Automation Platform | Installing and configuring central authentication for the Ansible Automation Platform Red Hat Ansible Automation Platform 2.4 Enable central authentication functions for your Ansible Automation Platform Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_and_configuring_central_authentication_for_the_ansible_automation_platform/index |
9. Devices and Device Drivers | 9. Devices and Device Drivers PCI Device Ordering In Red Hat Enterprise Linux 6, the PCI device ordering is based on the PCI device enumeration. PCI device enumeration is based on the PCI enumeration algorithm (depth first then breadth) and is constant per system type. Additionally, once the devices are discovered, the module loading process is sequentialized, providing persistent naming of the interfaces. 9.1. Technology Previews Brocade BFA Driver The Brocade BFA driver is considered a Technology Preview feature in Red Hat Enterprise Linux 6. The BFA driver supports Brocade FibreChannel and FCoE mass storage adapters. SR-IOV on the be2net driver The SR-IOV functionality of the Emulex be2net driver is considered a Technology Preview in Red Hat Enterprise Linux 6. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/devices |
Chapter 2. Creating and managing a service account | Chapter 2. Creating and managing a service account Use service accounts to securely and automatically connect and authenticate services or applications without requiring an end user's credentials or direct interaction. When you create a Red Hat service account, you generate a client ID and a secret . The service account uses the ID and secret to access services on the Red Hat Hybrid Cloud Console . Client ID The client ID identifies the service account to the resource, much like a username identifies a user. Secret The secret serves a similar function to a password. The secret appears once when you create the service account. Copy and save the secret and protect it as you would any password. After you create a service account, you add it to the applicable User Access group. (User Access is the Red Hat implementation of role-based access control.) The roles assigned to a User Access group determine the level of access the service account has to applications and services on the Red Hat Hybrid Cloud Console . The following tasks show you how to create service accounts and add them to a User Access group: Section 2.1, "Creating a service account" Section 2.2, "Adding service accounts to a User Access group" Section 2.3, "Deleting service accounts from a User Access group" You can perform the following tasks after you generate a client ID and a secret for a service account: Section 2.4, "Resetting a service account secret" Section 2.5, "Deleting a service account" You must be the owner of a service account if you want to reset it or delete it. The Organization Administrator can reset or delete any service account. Additional resources User Access Configuration Guide for Role-based Access Control (RBAC) 2.1. Creating a service account You can create a service account and generate the client ID and secret to use with that account. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console . Procedure From the Red Hat Hybrid Cloud Console , click the settings icon (⚙) and click Service Accounts . Click Create service account to set up the account. Enter a Service account name and a Short description and click Create . Copy the generated Client ID and Client secret values to a secure location. You'll specify these credentials when configuring a connection to a service. Important The Client secret is displayed only once, so ensure that you've successfully and securely saved the copied credentials before closing the credentials window. After you save the Client ID and secret to a secure location, select the confirmation check box in the credentials window and close the window. The service account and its Client ID appear on the Service Accounts page. 2.2. Adding service accounts to a User Access group The Organization Administrator adds a service account to a User Access group that has the permissions that allow a service account to access services and applications on the Red Hat Hybrid Cloud Console . Any user can create a service account but only the Organization Administrator or a User Access administrator can add service accounts to groups. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as the Organization Administrator or as a user with User Access administrator permissions. One or more service accounts are associated with your Red Hat organization account. Section 2.1, "Creating a service account" Procedure From the Red Hat Hybrid Cloud Console , click the settings icon (⚙) and click User Access .
To add the service account to a preexisting group, click the Groups tab and click the name of the group that you want to add the service account to. When the group name window appears, click the Service accounts tab. Click Add service account . A list of all service accounts associated with your Red Hat organization account appears. Click the service accounts you want to add to the User Access group and click Add to group . The service accounts appear on the Service accounts tab. Additional resources User Access Configuration Guide for Role-based Access Control (RBAC) Section 2.3, "Deleting service accounts from a User Access group" 2.3. Deleting service accounts from a User Access group The Organization Administrator can delete a service account from a User Access group on the Red Hat Hybrid Cloud Console . Any user can create a service account, but only the Organization Administrator or a User Access administrator can delete service accounts from groups. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as the Organization Administrator or as a user with User Access administrator permissions. One or more service accounts are associated with your Red Hat organization account. Section 2.1, "Creating a service account" Procedure From the Red Hat Hybrid Cloud Console , click the settings icon (⚙) and click User Access . To delete the service account from a group, click the Groups tab and click the name of the group that includes the service account. When the group name window appears, click the Service accounts tab. All service accounts in that group appear. Remove a single service account. Click the options icon (...) in the Name row and click Remove . Acknowledge the Remove service account? message and click Remove service account . Remove multiple service accounts. Select the check box next to each account that you want to remove. Click the options icon (...) in any Name row of the selected service accounts and click Remove . Acknowledge the Remove service account? message and click Remove service account . Verify that the selected service account does not appear on the Service accounts tab. Additional resources Section 2.2, "Adding service accounts to a User Access group" 2.4. Resetting a service account secret You can reset the secret for a service account. When you do so, the client ID does not change. You must be the owner of a service account if you want to reset it or delete it. The Organization Administrator user can reset or delete any service account. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console . Procedure From the Red Hat Hybrid Cloud Console , click the settings icon (⚙) and click Service Accounts . On the list of existing service accounts, select the service account you want to reset and click the options icon (...). Verify that you want to reset this account and click Reset credentials . Copy the updated Client secret values to a secure location. You'll specify these credentials when configuring a connection to a service. Important The generated credentials are displayed only once, so ensure that you've successfully and securely saved the copied credentials before closing the credentials window. After you save the generated credentials to a secure location, select the confirmation check box in the credentials window and close the window. 2.5. Deleting a service account You can delete a service account. You must be the owner of a service account if you want to reset it or delete it.
The Organization Administrator user can reset or delete any service account. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console . Procedure From the Red Hat Hybrid Cloud Console , click the settings icon (⚙) and click Service Accounts . Identify the service account you want to delete and click the options icon (...). Verify that you want to delete this account and click Delete service account . | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/creating_and_managing_service_accounts/proc-ciam-svc-acct-overview-creating-service-acct
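In practice, a service or script exchanges the client ID and secret for a short-lived access token before calling Hybrid Cloud Console APIs. The following sketch is illustrative only and is not part of this chapter; the token endpoint URL, the example API path, and the jq dependency are assumptions, so substitute the values documented for the specific service you are connecting.

# Exchange the service account credentials for an access token
# (endpoint is an assumption: the Red Hat external SSO token endpoint).
TOKEN=$(curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
  -d grant_type=client_credentials \
  -d client_id="$CLIENT_ID" \
  -d client_secret="$CLIENT_SECRET" | jq -r '.access_token')

# Use the token as a Bearer credential when calling a console API
# (the inventory path is only an example).
curl -s -H "Authorization: Bearer $TOKEN" https://console.redhat.com/api/inventory/v1/hosts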
Chapter 6. Bug fixes | Chapter 6. Bug fixes This section describes the notable bug fixes introduced in Red Hat OpenShift Data Foundation 4.16. 6.1. Disaster recovery FailOver of applications hangs in FailingOver state Previously, applications were not DR protected successfully because of errors in protecting the required resources to the provided S3 stores. As a result, failing over such applications left them in the FailingOver state. With this fix, a metric and a related alert are added to the application DR protection health, which alerts you to rectify protection issues after DR protects the applications. As a result, applications that are successfully protected fail over as expected. ( BZ#2248723 ) Post hub recovery, applications which were in FailedOver state consistently report FailingOver Previously, the Ramen hub operator on a recovered hub cluster reported the status of a managed cluster that survived a loss of both the hub and its peer managed cluster as Ready for future failover actions, without verifying that the surviving cluster was actually reporting that status. With this fix, the Ramen hub operator verifies that the target cluster is ready for a failover operation before initiating the action. As a result, any failover initiated is successful, or, if stale resources still exist on the failover target cluster, the operator stalls the failover until the stale resources are cleaned up. ( BZ#2270259 ) 6.2. Multicloud Object Gateway Multicloud Object Gateway (MCG) DB PVC consumption more than 400GB Previously, the Multicloud Object Gateway (MCG) Database (DB) showed an increased DB size when it was not necessary, because activity logs were being saved to the DB. With this fix, the object activity logs are converted to regular debug logs. As a result, the NooBaa DB no longer shows an increased DB size. ( BZ#2141422 ) Log based replication works even after removing the replication policy from the OBC Previously, it was not possible to remove a log-based replication policy from the object bucket claims (OBCs) because the replication policy evaluation resulted in an error when presented with an empty string. With this fix, the replication policy evaluation method is modified to enable removal of the replication policy from the OBCs. ( BZ#2266805 ) Multicloud Object Gateway (MCG) component security context fix Previously, when the default security context constraint (SCC) for the Multicloud Object Gateway (MCG) pods was updated to avoid the defaults set by OpenShift, the security scanning process failed. With this fix, when the SCC is updated to override the defaults, MCG's behavior does not change, so the security scan passes. ( BZ#2273670 ) NooBaa operator logs expose AWS secret Previously, the NooBaa operator logs exposed the AWS secret as plain text, which posed the risk that anyone with access to the logs could access the buckets. With this fix, the noobaa-operator logs no longer expose the AWS secret. ( BZ#2277186 ) AWS S3 list takes a long time Previously, AWS S3 took a long time to list the objects because two database queries were used instead of one. With this fix, the queries are restructured into a single one, thereby reducing the calls to the database and the time needed to complete the list objects operation. ( BZ#2277990 ) After upgrade to OpenShift Data Foundation the standalone MCG backing store gets rejected Previously, when trying to use a persistent volume (PV) pool, xattr was used to save object metadata. However, updates to that metadata failed because the filesystem on the PV did not support xattr .
With this fix, there is a fallback if the filesystem does not support xattr , and the metadata is saved in a file. ( BZ#2278389 ) Multicloud Object Gateway database persistent volume claim (PVC) consumption rising continuously Previously, object bucket claim (OBC) deletion, which resulted in the deletion of all its objects, took time to free up space in the database. This was because of the limited work done by MCG's database cleaner, which caused a slow and limited deletion of entries from the database. With this fix, updating the DB Cleaner configuration for MCG is possible. The DB Cleaner is a process that removes old deleted entries from the MCG Database. The exposed configurations are the frequency of runs and the age of entries to be deleted. ( BZ#2279742 ) Multicloud Object Gateway bucket lifecycle policy does not delete all objects Previously, the rate of deletion of expired objects was very low. With this fix, the batch size and the number of runs per day to delete expired objects are increased. ( BZ#2279964 ) ( BZ#2283753 ) HEAD-request returns the HTTP 200 Code for the prefix path instead of 404 from the API Previously, when trying to read or head an object which is a directory on the NamespaceStore Filesystem bucket of Multicloud Object Gateway, if the trailing / character was missing, the request returned the HTTP 200 code for the prefix path instead of 404. With this fix, ENOENT is returned when the object is a directory but the key is missing the trailing / . ( BZ#2280664 ) Multicloud Object Gateway Backingstore In Phase: "Connecting" with "Invalid URL" Previously, the operator failed to get the system information in the reconciliation loop, which prevented the successful completion of the reconciliation. This was due to a bug in the URL parsing that caused the parsing to fail when the address was IPv6. With this fix, an IPv6 address is handled correctly as the URL host. As a result, the operator successfully completes the system reconciliation. ( BZ#2284652 ) 6.3. Ceph container storage interface (CSI) driver PVC cloning failed with an error "RBD image not found" Previously, restoring a volume snapshot failed when the parent of the snapshot did not exist, because a bug in the CephCSI driver falsely identified an RBD image in the trash as existing. With this fix, the CephCSI driver bug is fixed to identify images in the trash appropriately, and as a result, the volume snapshot is restored successfully even when the parent of the snapshot does not exist. ( BZ#2264900 ) Incorrect warning logs from fuserecovery.go even when FUSE mount is not used Previously, the warning logs from the FUSE recovery functions, such as those in fuserecovery.go , were logged even when the kernel mounter was chosen, which was misleading. With this fix, the FUSE recovery functions are attempted or called only when the FUSE mounter is chosen, and as a result, the logs from fuserecovery.go are not logged when the kernel mounter is chosen. ( BZ#2266237 ) 6.4. OCS Operator StorageClasses are not created if the RGW endpoint is not reachable Previously, storage classes were dependent on RADOS Gateway (RGW) storage class creation, as RADOS Block Device (RBD) and CephFS storage classes were not created if the RGW endpoint was not reachable. With this fix, storage class creation is made independent, and as a result, storage classes are no longer dependent on RGW storage class creation. ( BZ#2213757 ) 6.5.
OpenShift Data Foundation console Status card reflects the status of standalone MCG deployment Previously, Multicloud Object Gateway (MCG) standalone mode was not showing any health status in the OpenShift cluster Overview dashboard, and an unknown icon was seen for Storage. With this fix, MCG health metrics are pushed when the cluster is deployed in standalone mode, and as a result, the storage health is shown in the cluster Overview dashboard. ( BZ#2256563 ) Create StorageSystem wizard overlaps Project dropdown Previously, the Project dropdown at the top of the Create StorageSystem page was not used in any scenario and caused confusion. With this fix, the Project dropdown is removed, and as a result, the StorageSystem creation namespace is populated in the header of the page. ( BZ#2271593 ) Capacity and Utilization cards do not include custom storage classes Previously, the Requested capacity and Utilization cards displayed data only for the default storage classes created by the OCS operator as part of the storage system creation. The cards did not include any custom storage classes that were created later. This was due to the refactoring of the Prometheus queries to support multiple storage clusters. With this fix, the queries are updated, and the cards now report capacity for both default and custom-created storage classes. ( BZ#2284090 ) 6.6. Rook Rook-Ceph operator deployment fails when storage class device sets are deployed with duplicate names Previously, when StorageClassDeviceSets were added into the StorageCluster CR with duplicate names, the OSDs failed, leaving Rook confused about the OSD configuration. With this fix, if duplicate device set names are found in the CR, Rook refuses to reconcile the OSDs until the duplication is fixed. An error is seen in the Rook operator log about failing to reconcile the OSDs. ( BZ#2259209 ) Rook-ceph-mon pods listen on both the 3300 and 6789 ports Previously, when a cluster was deployed with MSGRv2, the mon pods were listening unnecessarily on port 6789 for MSGR1 traffic. With this fix, the mon daemons start with flags to suppress listening on the v1 port 6789 and listen exclusively on the v2 port 3300, thereby reducing the attack surface. ( BZ#2262134 ) Legacy LVM-based OSDs are in crashloop state Previously, starting from OpenShift Data Foundation 4.14, the legacy OSDs were crashing in the init container that resized the OSD. This affected legacy OSDs that were created in OpenShift Container Storage 4.3 and since upgraded to a later version. With this fix, the crashing resize init container was removed from the OSD pod spec. As a result, the legacy OSDs start; however, it is recommended that the legacy OSDs are replaced soon. ( BZ#2273398 ) ( BZ#2274757 ) 6.7. Ceph monitoring Quota alerts overlapping Previously, redundant alerts were fired when the object bucket claim (OBC) quota limit was reached. This was because, when the OBC quota reached 100%, both the ObcQuotaObjectsAlert (fired when the OBC object quota crosses 80% of its limit) and ObcQuotaObjectsExhausedAlert (fired when the quota reaches 100%) alerts were fired. With this fix, the queries of the alerts were changed to make sure that only one alert is triggered at a time to indicate the issue. As a result, when the quota crosses 80%, ObcQuotaObjectsAlert is triggered, and when the quota is at 100%, ObcQuotaObjectsExhausedAlert is triggered.
( BZ#2257949 ) PrometheusRule evaluation failing for pool-quota rule Previously, none of the Ceph pool quota alerts were displayed because, in a multi-cluster setup, the PrometheusRuleFailures alert was fired due to the pool-quota rules. The queries in the pool-quota section were unable to distinguish the cluster from which the alert was fired in a multi-cluster setup. With this fix, a managedBy label was added to all the queries in the pool-quota rules to generate unique results from each cluster. As a result, the PrometheusRuleFailures alert is no longer seen, and all the pool-quota alerts work as expected. ( BZ#2262943 ) Wrong help text shown in runbooks for some alerts Previously, wrong help text was shown in the runbooks for some alerts because the runbook markdown files for those alerts contained incorrect text. With this fix, the text in the runbook markdown files is corrected so that the alerts show the correct help text. ( BZ#2265492 ) PrometheusRuleFailures alert after installation or upgrade Previously, Ceph quorum-related alerts were not seen because the Prometheus failure alert PrometheusRuleFailures was fired; this alert is usually fired when queries produce ambiguous results. In a multi-cluster scenario, queries in the quorum-alert rules were giving indistinguishable results, as they could not identify from which cluster the quorum alerts were fired. With this fix, a unique managedBy label was added to each query in the quorum rules so that the query results contain the name of the cluster from which the result was received. As a result, the Prometheus failure alert is not fired, and the clusters are able to trigger all the Ceph mon quorum-related alerts. ( BZ#2266316 ) Low default interval duration for two ServiceMonitors, rook-ceph-exporter and rook-ceph-mgr Previously, the exporter data collected by Prometheus added load to the system because the Prometheus scrape interval provided for the service monitors rook-ceph-exporter and rook-ceph-mgr was only 5 seconds. With this fix, the interval is increased to 30 seconds to balance the Prometheus scraping, thereby reducing the system load. ( BZ#2269354 ) Alert when there are LVM backed legacy OSDs during upgrade Previously, when OpenShift Data Foundation with legacy OSDs was upgraded from version 4.12 to 4.14, it was noticed that all the OSDs were stuck in a crash loop and down. This led to potential data unavailability and service disruption. With this fix, a check is included to detect legacy OSDs based on Logical Volume Manager (LVM) and to alert if such OSDs are present during the upgrade process. As a result, a warning is displayed during the upgrade to indicate the presence of legacy OSDs so that appropriate actions can be taken. ( BZ#2279928 ) | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/4.16_release_notes/bug_fixes
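As a quick follow-up to the ServiceMonitor interval change described above, the applied scrape interval can be checked from a shell. This is a hypothetical verification sketch: the openshift-storage namespace and the ServiceMonitor object names are assumptions based on a default deployment, not values stated in this chapter.

# Confirm the scrape interval now configured for the rook-ceph service monitors.
oc -n openshift-storage get servicemonitor rook-ceph-exporter -o yaml | grep 'interval:'
oc -n openshift-storage get servicemonitor rook-ceph-mgr -o yaml | grep 'interval:'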
Chapter 70. stack | Chapter 70. stack This chapter describes the commands under the stack command. 70.1. stack abandon Abandon stack and output results. Usage: Table 70.1. Positional arguments Value Summary <stack> Name or id of stack to abandon Table 70.2. Command arguments Value Summary -h, --help Show this help message and exit --output-file <output-file> File to output abandon results Table 70.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.2. stack adopt Adopt a stack. Usage: Table 70.7. Positional arguments Value Summary <stack-name> Name of the stack to adopt Table 70.8. Command arguments Value Summary -h, --help Show this help message and exit -e <environment>, --environment <environment> Path to the environment. can be specified multiple times --timeout <timeout> Stack creation timeout in minutes --enable-rollback Enable rollback on create/update failure --parameter <key=value> Parameter values used to create the stack. can be specified multiple times --wait Wait until stack adopt completes --adopt-file <adopt-file> Path to adopt stack data file Table 70.9. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.11. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.3. stack cancel Cancel current task for a stack. Supported tasks for cancellation: * update * create Usage: Table 70.13. Positional arguments Value Summary <stack> Stack(s) to cancel (name or id) Table 70.14. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for cancel to complete --no-rollback Cancel without rollback Table 70.15. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.16. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.4. stack check Check a stack. Usage: Table 70.19. Positional arguments Value Summary <stack> Stack(s) to check update (name or id) Table 70.20. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for check to complete Table 70.21. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.22. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.24. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.5. stack create Create a stack. Usage: Table 70.25. Positional arguments Value Summary <stack-name> Name of the stack to create Table 70.26. Command arguments Value Summary -h, --help Show this help message and exit -e <environment>, --environment <environment> Path to the environment. can be specified multiple times -s <files-container>, --files-container <files-container> Swift files container name. local files other than root template would be ignored. If other files are not found in swift, heat engine would raise an error. --timeout <timeout> Stack creating timeout in minutes --pre-create <resource> Name of a resource to set a pre-create hook to. Resources in nested stacks can be set using slash as a separator: ``nested_stack/another/my_resource``. You can use wildcards to match multiple stacks or resources: ``nested_stack/an*/*_resource``. 
This can be specified multiple times --enable-rollback Enable rollback on create/update failure --parameter <key=value> Parameter values used to create the stack. this can be specified multiple times --parameter-file <key=file> Parameter values from file used to create the stack. This can be specified multiple times. Parameter values would be the content of the file --wait Wait until stack goes to create_complete or CREATE_FAILED --poll SECONDS Poll interval in seconds for use with --wait, defaults to 5. --tags <tag1,tag2... > A list of tags to associate with the stack --dry-run Do not actually perform the stack create, but show what would be created -t <template>, --template <template> Path to the template Table 70.27. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.28. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.29. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.30. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.6. stack delete Delete stack(s). Usage: Table 70.31. Positional arguments Value Summary <stack> Stack(s) to delete (name or id) Table 70.32. Command arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes) --wait Wait for stack delete to complete 70.7. stack environment show Show a stack's environment. Usage: Table 70.33. Positional arguments Value Summary <NAME or ID> Name or id of stack to query Table 70.34. Command arguments Value Summary -h, --help Show this help message and exit Table 70.35. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.37. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.8. stack event list List events. Usage: Table 70.39. Positional arguments Value Summary <stack> Name or id of stack to show events for Table 70.40. Command arguments Value Summary -h, --help Show this help message and exit --resource <resource> Name of resource to show events for. 
note: this cannot be specified with --nested-depth --filter <key=value> Filter parameters to apply on returned events --limit <limit> Limit the number of events returned --marker <id> Only return events that appear after the given id --nested-depth <depth> Depth of nested stacks from which to display events. Note: this cannot be specified with --resource --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc). Specify multiple times to sort on multiple keys. Sort key can be: "event_time" (default), "resource_name", "links", "logical_resource_id", "resource_status", "resource_status_reason", "physical_resource_id", or "id". You can leave the key empty and specify ":desc" for sorting by reverse time. --follow Print events until process is halted Table 70.41. Output formatter options Value Summary -f {csv,json,log,table,value,yaml}, --format {csv,json,log,table,value,yaml} The output format, defaults to log -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.42. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.43. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.9. stack event show Show event details. Usage: Table 70.45. Positional arguments Value Summary <stack> Name or id of stack to show events for <resource> Name of the resource event belongs to <event> Id of event to display details for Table 70.46. Command arguments Value Summary -h, --help Show this help message and exit Table 70.47. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.49. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.50. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.10. stack export Export stack data json. Usage: Table 70.51. Positional arguments Value Summary <stack> Name or id of stack to export Table 70.52. Command arguments Value Summary -h, --help Show this help message and exit --output-file <output-file> File to output export data Table 70.53. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.54. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.55. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.56. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.11. stack failures list Show information about failed stack resources. Usage: Table 70.57. Positional arguments Value Summary <stack> Stack to display (name or id) Table 70.58. Command arguments Value Summary -h, --help Show this help message and exit --long Show full deployment logs in output 70.12. stack file list Show a stack's files map. Usage: Table 70.59. Positional arguments Value Summary <NAME or ID> Name or id of stack to query Table 70.60. Command arguments Value Summary -h, --help Show this help message and exit Table 70.61. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.62. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.63. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.64. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.13. stack hook clear Clear resource hooks on a given stack. Usage: Table 70.65. Positional arguments Value Summary <stack> Stack to display (name or id) <resource> Resource names with hooks to clear. resources in nested stacks can be set using slash as a separator: ``nested_stack/another/my_resource``. You can use wildcards to match multiple stacks or resources: ``nested_stack/an*/*_resource`` Table 70.66. Command arguments Value Summary -h, --help Show this help message and exit --pre-create Clear the pre-create hooks --pre-update Clear the pre-update hooks --pre-delete Clear the pre-delete hooks 70.14. stack hook poll List resources with pending hook for a stack. Usage: Table 70.67. Positional arguments Value Summary <stack> Stack to display (name or id) Table 70.68. Command arguments Value Summary -h, --help Show this help message and exit --nested-depth <nested-depth> Depth of nested stacks from which to display hooks Table 70.69. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.70. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.71. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.72. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.15. stack list List stacks. Usage: Table 70.73. Command arguments Value Summary -h, --help Show this help message and exit --deleted Include soft-deleted stacks in the stack listing --nested Include nested stacks in the stack listing --hidden Include hidden stacks in the stack listing --property <key=value> Filter properties to apply on returned stacks (repeat to filter on multiple properties) --tags <tag1,tag2... > List of tags to filter by. can be combined with --tag- mode to specify how to filter tags --tag-mode <mode> Method of filtering tags. must be one of "any", "not", or "not-any". If not specified, multiple tags will be combined with the boolean AND expression --limit <limit> The number of stacks returned --marker <id> Only return stacks that appear after the given id --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc). Specify multiple times to sort on multiple properties --all-projects Include all projects (admin only) --short List fewer fields in output --long List additional fields in output, this is implied by --all-projects Table 70.74. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.75. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.76. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.77. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.16. stack output list List stack outputs. Usage: Table 70.78. 
Positional arguments Value Summary <stack> Name or id of stack to query Table 70.79. Command arguments Value Summary -h, --help Show this help message and exit Table 70.80. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.81. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.82. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.83. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.17. stack output show Show stack output. Usage: Table 70.84. Positional arguments Value Summary <stack> Name or id of stack to query <output> Name of an output to display Table 70.85. Command arguments Value Summary -h, --help Show this help message and exit --all Display all stack outputs Table 70.86. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.87. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.88. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.89. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.18. stack resource list List stack resources. Usage: Table 70.90. Positional arguments Value Summary <stack> Name or id of stack to query Table 70.91. Command arguments Value Summary -h, --help Show this help message and exit --long Enable detailed information presented for each resource in resource list -n <nested-depth>, --nested-depth <nested-depth> Depth of nested stacks from which to display resources --filter <key=value> Filter parameters to apply on returned resources based on their name, status, type, action, id and physical_resource_id Table 70.92. 
Output formatter options Value Summary -f {csv,dot,json,table,value,yaml}, --format {csv,dot,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.93. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.94. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.95. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.19. stack resource mark unhealthy Set resource's health. Usage: Table 70.96. Positional arguments Value Summary <stack> Name or id of stack the resource belongs to <resource> Name of the resource reason Reason for state change Table 70.97. Command arguments Value Summary -h, --help Show this help message and exit --reset Set the resource as healthy 70.20. stack resource metadata Show resource metadata Usage: Table 70.98. Positional arguments Value Summary <stack> Stack to display (name or id) <resource> Name of the resource to show the metadata for Table 70.99. Command arguments Value Summary -h, --help Show this help message and exit Table 70.100. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.101. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.102. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.103. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.21. stack resource show Display stack resource. Usage: Table 70.104. Positional arguments Value Summary <stack> Name or id of stack to query <resource> Name of resource Table 70.105. Command arguments Value Summary -h, --help Show this help message and exit --with-attr <attribute> Attribute to show, can be specified multiple times Table 70.106. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.107. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.108. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.109. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.22. stack resource signal Signal a resource with optional data. Usage: Table 70.110. Positional arguments Value Summary <stack> Name or id of stack the resource belongs to <resource> Name of the resoure to signal Table 70.111. Command arguments Value Summary -h, --help Show this help message and exit --data <data> Json data to send to the signal handler --data-file <data-file> File containing json data to send to the signal handler 70.23. stack resume Resume a stack. Usage: Table 70.112. Positional arguments Value Summary <stack> Stack(s) to resume (name or id) Table 70.113. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for resume to complete Table 70.114. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.115. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.116. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.117. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.24. stack show Show stack details. Usage: Table 70.118. Positional arguments Value Summary <stack> Stack to display (name or id) Table 70.119. Command arguments Value Summary -h, --help Show this help message and exit --no-resolve-outputs Do not resolve outputs of the stack. Table 70.120. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.121. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.122. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.123. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.25. stack snapshot create Create stack snapshot. Usage: Table 70.124. 
Positional arguments Value Summary <stack> Name or id of stack Table 70.125. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of snapshot Table 70.126. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.127. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.128. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.129. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.26. stack snapshot delete Delete stack snapshot. Usage: Table 70.130. Positional arguments Value Summary <stack> Name or id of stack <snapshot> Id of stack snapshot Table 70.131. Command arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes) 70.27. stack snapshot list List stack snapshots. Usage: Table 70.132. Positional arguments Value Summary <stack> Name or id of stack containing the snapshots Table 70.133. Command arguments Value Summary -h, --help Show this help message and exit Table 70.134. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.135. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.136. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.137. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.28. stack snapshot restore Restore stack snapshot Usage: Table 70.138. Positional arguments Value Summary <stack> Name or id of stack containing the snapshot <snapshot> Id of the snapshot to restore Table 70.139. Command arguments Value Summary -h, --help Show this help message and exit 70.29. stack snapshot show Show stack snapshot. Usage: Table 70.140. Positional arguments Value Summary <stack> Name or id of stack containing the snapshot <snapshot> Id of the snapshot to show Table 70.141. Command arguments Value Summary -h, --help Show this help message and exit Table 70.142. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.143. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.144. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.145. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.30. stack suspend Suspend a stack. Usage: Table 70.146. Positional arguments Value Summary <stack> Stack(s) to suspend (name or id) Table 70.147. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for suspend to complete Table 70.148. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 70.149. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 70.150. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.151. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.31. stack template show Display stack template. Usage: Table 70.152. Positional arguments Value Summary <stack> Name or id of stack to query Table 70.153. Command arguments Value Summary -h, --help Show this help message and exit Table 70.154. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.155. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.156. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.157. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 70.32. stack update Update a stack. Usage: Table 70.158. 
Positional arguments Value Summary <stack> Name or id of stack to update Table 70.159. Command arguments Value Summary -h, --help Show this help message and exit -t <template>, --template <template> Path to the template -s <files-container>, --files-container <files-container> Swift files container name. local files other than root template would be ignored. If other files are not found in swift, heat engine would raise an error. -e <environment>, --environment <environment> Path to the environment. can be specified multiple times --pre-update <resource> Name of a resource to set a pre-update hook to. Resources in nested stacks can be set using slash as a separator: ``nested_stack/another/my_resource``. You can use wildcards to match multiple stacks or resources: ``nested_stack/an*/*_resource``. This can be specified multiple times --timeout <timeout> Stack update timeout in minutes --rollback <value> Set rollback on update failure. value "enabled" sets rollback to enabled. Value "disabled" sets rollback to disabled. Value "keep" uses the value of existing stack to be updated (default) --dry-run Do not actually perform the stack update, but show what would be changed --show-nested Show nested stacks when performing --dry-run --parameter <key=value> Parameter values used to create the stack. this can be specified multiple times --parameter-file <key=file> Parameter values from file used to create the stack. This can be specified multiple times. Parameter value would be the content of the file --existing Re-use the template, parameters and environment of the current stack. If the template argument is omitted then the existing template is used. If no --environment is specified then the existing environment is used. Parameters specified in --parameter will patch over the existing values in the current stack. Parameters omitted will keep the existing values --clear-parameter <parameter> Remove the parameters from the set of parameters of current stack for the stack-update. The default value in the template will be used. This can be specified multiple times --tags <tag1,tag2... > An updated list of tags to associate with the stack --wait Wait until stack goes to update_complete or UPDATE_FAILED --converge Stack update with observe on reality. Table 70.160. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 70.161. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 70.162. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 70.163. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack stack abandon [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--output-file <output-file>] <stack>",
"openstack stack adopt [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-e <environment>] [--timeout <timeout>] [--enable-rollback] [--parameter <key=value>] [--wait] --adopt-file <adopt-file> <stack-name>",
"openstack stack cancel [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--wait] [--no-rollback] <stack> [<stack> ...]",
"openstack stack check [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--wait] <stack> [<stack> ...]",
"openstack stack create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-e <environment>] [-s <files-container>] [--timeout <timeout>] [--pre-create <resource>] [--enable-rollback] [--parameter <key=value>] [--parameter-file <key=file>] [--wait] [--poll SECONDS] [--tags <tag1,tag2...>] [--dry-run] -t <template> <stack-name>",
"openstack stack delete [-h] [-y] [--wait] <stack> [<stack> ...]",
"openstack stack environment show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <NAME or ID>",
"openstack stack event list [-h] [-f {csv,json,log,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--resource <resource>] [--filter <key=value>] [--limit <limit>] [--marker <id>] [--nested-depth <depth>] [--sort <key>[:<direction>]] [--follow] <stack>",
"openstack stack event show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack> <resource> <event>",
"openstack stack export [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--output-file <output-file>] <stack>",
"openstack stack failures list [-h] [--long] <stack>",
"openstack stack file list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <NAME or ID>",
"openstack stack hook clear [-h] [--pre-create] [--pre-update] [--pre-delete] <stack> <resource> [<resource> ...]",
"openstack stack hook poll [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--nested-depth <nested-depth>] <stack>",
"openstack stack list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--deleted] [--nested] [--hidden] [--property <key=value>] [--tags <tag1,tag2...>] [--tag-mode <mode>] [--limit <limit>] [--marker <id>] [--sort <key>[:<direction>]] [--all-projects] [--short] [--long]",
"openstack stack output list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <stack>",
"openstack stack output show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all] <stack> [<output>]",
"openstack stack resource list [-h] [-f {csv,dot,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [-n <nested-depth>] [--filter <key=value>] <stack>",
"openstack stack resource mark unhealthy [-h] [--reset] <stack> <resource> [reason]",
"openstack stack resource metadata [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack> <resource>",
"openstack stack resource show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--with-attr <attribute>] <stack> <resource>",
"openstack stack resource signal [-h] [--data <data>] [--data-file <data-file>] <stack> <resource>",
"openstack stack resume [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--wait] <stack> [<stack> ...]",
"openstack stack show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--no-resolve-outputs] <stack>",
"openstack stack snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] <stack>",
"openstack stack snapshot delete [-h] [-y] <stack> <snapshot>",
"openstack stack snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <stack>",
"openstack stack snapshot restore [-h] <stack> <snapshot>",
"openstack stack snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack> <snapshot>",
"openstack stack suspend [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--wait] <stack> [<stack> ...]",
"openstack stack template show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <stack>",
"openstack stack update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-t <template>] [-s <files-container>] [-e <environment>] [--pre-update <resource>] [--timeout <timeout>] [--rollback <value>] [--dry-run] [--show-nested] [--parameter <key=value>] [--parameter-file <key=file>] [--existing] [--clear-parameter <parameter>] [--tags <tag1,tag2...>] [--wait] [--converge] <stack>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/stack |
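As a quick illustration of the reference above, the following commands exercise some of the documented stack suspend and stack update options; the stack name mystack, the template file name, and the parameter key/value are placeholders rather than values taken from the reference.

```
# Suspend a stack and wait until it reaches SUSPEND_COMPLETE (stack name is a placeholder)
openstack stack suspend --wait mystack

# Update the stack in place, reusing its existing template and environment
# while patching a single parameter, and print the result as JSON
openstack stack update --existing --parameter flavor=m1.small --wait -f json mystack

# Preview what an update with a new template would change, without applying it
openstack stack update --dry-run --show-nested -t new-template.yaml mystack
```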
Chapter 2. New features and enhancements | Chapter 2. New features and enhancements This section lists new features and feature enhancements available in Red Hat Trusted Application Pipeline 1.4. You can now install RHTAP using a single container image Starting from RHTAP 1.4, we recommend that you install RHTAP via the rhtap-cli container image available through Red Hat Ecosystem Catalog . Compared to the binary-based installation, using the RHTAP container image simplifies the installation process and is fully supported on all operating systems. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/release_notes_for_red_hat_trusted_application_pipeline_1.4/new-features-enhancements_default |
Chapter 2. Key features | Chapter 2. Key features An open standard protocol - AMQP 1.0 Industry-standard APIs - JMS 1.1 and 2.0 New event-driven APIs for fast, efficient messaging Adaptors for integrating with other platforms and components Broad language support - C++, Java, JavaScript, Python, Ruby, and .NET Wide availability - Linux, Windows, and JVM-based environments | null | https://docs.redhat.com/en/documentation/red_hat_amq_clients/2023.q4/html/amq_clients_overview/key_features |
Chapter 223. OPC UA Server Component | Chapter 223. OPC UA Server Component Available as of Camel version 2.19 The Milo Server component provides an OPC UA server using the Eclipse MiloTM implementation. Java 8 : This component requires Java 8 at runtime. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-milo</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> Messages sent to the endpoint from Camel will be available from the OPC UA server to OPC UA Clients. Value write requests from OPC UA Client will trigger messages which are sent into Apache Camel. The OPC UA Server component supports 20 options, which are listed below. Name Description Default Type namespaceUri (common) The URI of the namespace, defaults to urn:org:apache:camel String applicationName (common) The application name String applicationUri (common) The application URI String productUri (common) The product URI String bindPort (common) The TCP port the server binds to int strictEndpointUrls Enabled (common) Set whether strict endpoint URLs are enforced false boolean serverName (common) Server name String hostname (common) Server hostname String securityPolicies (common) Security policies Set securityPoliciesById (common) Security policies by URI or name Collection userAuthentication Credentials (common) Set user password combinations in the form of user1:pwd1,user2:pwd2 Usernames and passwords will be URL decoded String enableAnonymous Authentication (common) Enable anonymous authentication, disabled by default false boolean usernameSecurityPolicy Uri (common) Set the UserTokenPolicy used when SecurityPolicy bindAddresses (common) Set the addresses of the local addresses the server should bind to String buildInfo (common) Server build info BuildInfo serverCertificate (common) Server certificate Result certificateManager (common) Server certificate manager CertificateManager certificateValidator (common) Validator for client certificates Supplier defaultCertificate Validator (common) Validator for client certificates using default file based approach File resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean 223.1. URI format milo-server:itemId[?options] 223.2. URI options The OPC UA Server endpoint is configured using URI syntax: with the following path and query parameters: 223.2.1. Path Parameters (1 parameters): Name Description Default Type itemId Required ID of the item String 223.2.2. Query Parameters (4 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 223.3. Spring Boot Auto-Configuration The component supports 21 options, which are listed below. Name Description Default Type camel.component.milo-server.application-name The application name String camel.component.milo-server.application-uri The application URI String camel.component.milo-server.bind-addresses Set the addresses of the local addresses the server should bind to String camel.component.milo-server.bind-port The TCP port the server binds to Integer camel.component.milo-server.build-info Server build info. The option is a org.eclipse.milo.opcua.stack.core.types.structured.BuildInfo type. String camel.component.milo-server.certificate-manager Server certificate manager. The option is a org.eclipse.milo.opcua.stack.core.application.CertificateManager type. String camel.component.milo-server.certificate-validator Validator for client certificates. The option is a java.util.function.Supplier <org.eclipse.milo.opcua.stack.core.application.CertificateValidator> type. String camel.component.milo-server.default-certificate-validator Validator for client certificates using default file based approach File camel.component.milo-server.enable-anonymous-authentication Enable anonymous authentication, disabled by default false Boolean camel.component.milo-server.enabled Enable milo-server component true Boolean camel.component.milo-server.hostname Server hostname String camel.component.milo-server.namespace-uri The URI of the namespace, defaults to urn:org:apache:camel String camel.component.milo-server.product-uri The product URI String camel.component.milo-server.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.milo-server.security-policies Security policies Set camel.component.milo-server.security-policies-by-id Security policies by URI or name Collection camel.component.milo-server.server-certificate Server certificate. The option is a org.apache.camel.component.milo.KeyStoreLoader.Result type. String camel.component.milo-server.server-name Server name String camel.component.milo-server.strict-endpoint-urls-enabled Set whether strict endpoint URLs are enforced false Boolean camel.component.milo-server.user-authentication-credentials Set user password combinations in the form of user1:pwd1,user2:pwd2 Usernames and passwords will be URL decoded String camel.component.milo-server.username-security-policy-uri Set the UserTokenPolicy used when SecurityPolicy 223.4. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-milo</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"milo-server:itemId[?options]",
"milo-server:itemId"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/milo-server-component |
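A minimal sketch of how the Spring Boot options above could be set for a standalone OPC UA server; the bind address, port, and application name are assumptions, and messages sent to an endpoint such as milo-server:myitem would then be exposed to OPC UA clients as described above.

```
# Illustrative application.properties entries for the milo-server component
# (property names come from the auto-configuration table above; the values are assumptions)
camel.component.milo-server.bind-addresses=0.0.0.0
camel.component.milo-server.bind-port=12685
camel.component.milo-server.enable-anonymous-authentication=true
camel.component.milo-server.application-name=camel-opcua-demo
```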
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/proc-providing-feedback-on-redhat-documentation |
Chapter 19. Users and Roles | Chapter 19. Users and Roles 19.1. Introduction to Users In Red Hat Virtualization, there are two types of user domains: local domain and external domain. A default local domain called the internal domain and a default user admin are created during the Manager installation process. You can create additional users on the internal domain using ovirt-aaa-jdbc-tool . User accounts created on local domains are known as local users. You can also attach external directory servers such as Red Hat Directory Server, Active Directory, OpenLDAP, and many other supported options to your Red Hat Virtualization environment and use them as external domains. User accounts created on external domains are known as directory users. Both local users and directory users must be assigned appropriate roles and permissions through the Administration Portal before they can function in the environment. There are two main types of user roles: end user and administrator. An end user role uses and manages virtual resources from the VM Portal. An administrator role maintains the system infrastructure using the Administration Portal. Roles can be assigned to users for individual resources such as virtual machines and hosts, or on a hierarchy of objects such as clusters and data centers. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-users_and_roles
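The introduction above mentions ovirt-aaa-jdbc-tool for creating additional users on the internal domain; a hedged sketch of that workflow is shown below, where the user name and attribute values are examples and the exact options should be confirmed with the tool's built-in help.

```
# Create a local user on the internal domain (user name and attributes are examples)
ovirt-aaa-jdbc-tool user add test_user --attribute=firstName=Test --attribute=lastName=User

# Give the new account an initial password
ovirt-aaa-jdbc-tool user password-reset test_user --password-valid-to="2030-01-01 00:00:00Z"
```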
Chapter 2. Enhancements | Chapter 2. Enhancements The enhancements added in this release are outlined below. 2.1. Kafka 2.7.0 enhancements For an overview of the enhancements introduced with Kafka 2.7.0, refer to the Kafka 2.7.0 Release Notes . 2.2. OAuth 2.0 authentication and authorization This release includes the following enhancements to OAuth 2.0 token-based authentication and authorization. Checks on JWT access tokens You can now configure two additional checks on JWT access tokens. Both of these checks are configured in the OAuth 2.0 authentication listener configuration. Custom claim checks Custom claim checks impose custom rules on the validation of JWT access tokens by Kafka brokers. They are defined using JsonPath filter queries. If an access token does not contain the necessary data, it is rejected. When using introspection endpoint token validation, the custom check is applied to the introspection endpoint response JSON. To configure custom claim checks, add the oauth.custom.claim.check option to the server.properties file and define a JsonPath filter query. Custom claim checks are disabled by default. See Configuring OAuth 2.0 support for Kafka brokers Audience checks Your authorization server might provide aud (audience) claims in JWT access tokens. When audience checks are enabled, the Kafka broker rejects tokens that do not contain the broker's clientId in their aud claims. To enable audience checks, set the oauth.check.audience option to true . Audience checks are disabled by default. See Configuring OAuth 2.0 support for Kafka brokers Support for OAuth 2.0 over SASL PLAIN authentication You can now configure the PLAIN mechanism for OAuth 2.0 authentication between Kafka clients and Kafka brokers. Previously, the only supported authentication mechanism was OAUTHBEARER. PLAIN is a simple authentication mechanism supported by all Kafka client tools (including developer tools such as kafkacat). AMQ Streams includes server-side callbacks that enable PLAIN to be used with OAuth 2.0 authentication. These capabilities are referred to as OAuth 2.0 over PLAIN . Note Red Hat recommends using OAUTHBEARER authentication for clients whenever possible. OAUTHBEARER provides a higher level of security than PLAIN because client credentials are never shared with Kafka brokers. Consider using PLAIN only with Kafka clients that do not support OAUTHBEARER. When used with the provided OAuth 2.0 over PLAIN callbacks, Kafka clients can authenticate with Kafka brokers using either of the following methods: Client ID and secret (by using the OAuth 2.0 client credentials mechanism) A long-lived access token, obtained manually at configuration time To use PLAIN, you must enable it in the server.properties file, in the OAuth authentication listener configuration. See OAuth 2.0 authentication mechanisms and Configuring OAuth 2.0 support for Kafka brokers | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/release_notes_for_amq_streams_1.7_on_rhel/enhancements-str |
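A hedged sketch of how the options named above might appear in a broker listener configuration in server.properties; the listener name, issuer and JWKS URIs, client ID, and JsonPath query are assumptions, and only oauth.custom.claim.check and oauth.check.audience are taken directly from the text above.

```
# Illustrative OAUTHBEARER listener configuration (values are assumptions)
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.valid.issuer.uri="https://auth.example.com/realms/kafka" \
  oauth.jwks.endpoint.uri="https://auth.example.com/realms/kafka/protocol/openid-connect/certs" \
  oauth.client.id="kafka-broker" \
  oauth.custom.claim.check="@.typ && @.typ == 'Bearer'" \
  oauth.check.audience="true" ;
```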
Installing on Azure | Installing on Azure OpenShift Container Platform 4.18 Installing OpenShift Container Platform on Azure Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_azure/index |
Installing with an RPM package | Installing with an RPM package Red Hat build of MicroShift 4.18 Installing MicroShift with RPMs Red Hat OpenShift Documentation Team | [
"sudo vgs",
"VG #PV #LV #SN Attr VSize VFree rhel 1 2 0 wz--n- <127.00g 54.94g",
"sudo subscription-manager repos --enable rhocp-4.18-for-rhel-9-USD(uname -m)-rpms --enable fast-datapath-for-rhel-9-USD(uname -m)-rpms",
"sudo subscription-manager repos --enable rhel-9-for-USD(uname -m)-appstream-eus-rpms --enable rhel-9-for-USD(uname -m)-baseos-eus-rpms",
"sudo subscription-manager release --set=9.4",
"sudo dnf install -y microshift",
"sudo cp USDHOME/openshift-pull-secret /etc/crio/openshift-pull-secret",
"sudo chown root:root /etc/crio/openshift-pull-secret",
"sudo chmod 600 /etc/crio/openshift-pull-secret",
"sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16",
"sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1",
"sudo firewall-cmd --reload",
"sudo systemctl start microshift",
"sudo systemctl enable microshift",
"sudo systemctl disable microshift",
"sudo systemctl stop microshift",
"sudo crictl ps -a",
"sudo systemctl stop kubepods.slice",
"mkdir -p ~/.kube/",
"sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config",
"chmod go-r ~/.kube/config",
"oc get all -A",
"[user@microshift]USD sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload",
"[user@microshift]USD oc get all -A",
"[user@workstation]USD mkdir -p ~/.kube/",
"[user@workstation]USD MICROSHIFT_MACHINE=<name or IP address of MicroShift machine>",
"[user@workstation]USD ssh <user>@USDMICROSHIFT_MACHINE \"sudo cat /var/lib/microshift/resources/kubeadmin/USDMICROSHIFT_MACHINE/kubeconfig\" > ~/.kube/config",
"chmod go-r ~/.kube/config",
"[user@workstation]USD oc get all -A"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html-single/installing_with_an_rpm_package/index |
14.8.2. Suspending a Guest Virtual Machine | 14.8.2. Suspending a Guest Virtual Machine Suspend a guest virtual machine with virsh : When a guest virtual machine is in a suspended state, it consumes system RAM but not processor resources. Disk and network I/O does not occur while the guest virtual machine is suspended. This operation is immediate and the guest virtual machine can be restarted with the resume ( Section 14.8.6, "Resuming a Guest Virtual Machine" ) option. | [
"virsh suspend {domain-id, domain-name or domain-uuid}"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-starting_suspending_resuming_saving_and_restoring_a_guest_virtual_machine-suspending_a_guest_virtual_machine |
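For example, assuming a guest named guest1, a suspend/resume cycle might look like this; the domain name is a placeholder.

```
# Suspend the guest; it keeps its RAM but receives no CPU time
virsh suspend guest1

# The guest now appears as "paused" in the domain list
virsh list --all

# Resume it when needed (see the resume section referenced above)
virsh resume guest1
```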
Appendix F. Additional Replication High Availability Configuration Elements | Appendix F. Additional Replication High Availability Configuration Elements The following table lists additional ha-policy configuration elements that are not described in the Configuring replication high availability section. These elements have default settings that are sufficient for most common use cases. Table F.1. Additional Configuration Elements Available when Using Replication High Availability Name Used in Description check-for-live-server Embedded broker coordination Applies only to brokers configured as master brokers. Specifies whether the original master broker checks the cluster for another live broker using its own server ID when starting up. Set to true to fail back to the original master broker and avoid a "split brain" situation in which two brokers become live at the same time. The default value of this property is false . cluster-name Embedded broker and ZooKeeper coordination Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured, the cluster configuration with this name will be used when connecting to the cluster. If unset, the first cluster connection defined in the configuration is used. initial-replication-sync-timeout Embedded broker and ZooKeeper coordination The amount of time the replicating broker will wait upon completion of the initial replication process for the replica to acknowledge that it has received all the necessary data. The default value of this property is 30,000 milliseconds. NOTE: During this interval, any other journal-related operations are blocked. max-saved-replicated-journals-size Embedded broker and ZooKeeper coordination Applies to backup brokers only. Specifies how many backup journal files the backup broker retains. Once this value has been reached, the broker makes space for each new backup journal file by deleting the oldest journal file. The default value of this property is 2 . | null | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/configuring_amq_broker/replication_elements |
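A hedged sketch of where these elements sit in a broker's broker.xml when using embedded broker coordination; the cluster name is an example, the values shown are the defaults quoted in the table, and max-saved-replicated-journals-size would appear in the corresponding backup (slave) policy rather than on the master.

```
<!-- Illustrative <ha-policy> fragment for a replicating master broker -->
<ha-policy>
   <replication>
      <master>
         <check-for-live-server>true</check-for-live-server>
         <cluster-name>my-cluster</cluster-name>
         <initial-replication-sync-timeout>30000</initial-replication-sync-timeout>
      </master>
   </replication>
</ha-policy>
```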
Chapter 22. Directory-entry (dentry) Tapset | Chapter 22. Directory-entry (dentry) Tapset This family of functions is used to map kernel VFS directory entry pointers to file or full path names. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/dentry-dot-stp |
Chapter 12. Using a service account as an OAuth client | Chapter 12. Using a service account as an OAuth client 12.1. Service accounts as OAuth clients You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account's own namespace: user:info user:check-access role:<any_role>:<service_account_namespace> role:<any_role>:<service_account_namespace>:! When using a service account as an OAuth client: client_id is system:serviceaccount:<service_account_namespace>:<service_account_name> . client_secret can be any of the API tokens for that service account. For example: USD oc sa get-token <service_account_name> To get WWW-Authenticate challenges, set an serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true . redirect_uri must match an annotation on the service account. 12.1.1. Redirect URIs for service accounts as OAuth clients Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference. such as: In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example: The first and second postfixes in the above example are used to separate the two valid redirect URIs. In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play. For example: Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } } Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins . Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } } 1 kind refers to the type of the object being referenced. Currently, only route is supported. 2 name refers to the name of the object. The object must be in the same namespace as the service account. 3 group refers to the group of the object. Leave this blank, as the group for a route is the empty string. Both annotation prefixes can be combined to override the data provided by the reference object. For example: The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com , now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows: Type Syntax Scheme "https://" Hostname "//website.com" Port "//:8000" Path "examplepath" Note Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior. Any combination of the above syntax can be combined using the following format: <scheme:>//<hostname><:port>/<path> The same object can be referenced more than once for more flexibility: Assuming that the route named jenkins has an Ingress of https://example.com , then both https://example.com:8000 and https://example.com/custompath are considered valid. 
Static and dynamic annotations can be used at the same time to achieve the desired behavior: | [
"oc sa get-token <service_account_name>",
"serviceaccounts.openshift.io/oauth-redirecturi.<name>",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/authentication_and_authorization/using-service-accounts-as-oauth-client |
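Putting the pieces above together, a service account configured as an OAuth client might look like the following; the namespace, service account name, and referenced route name are examples.

```
# Service account annotated as an OAuth client (names are examples)
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-client
  namespace: my-project
  annotations:
    serviceaccounts.openshift.io/oauth-want-challenges: "true"
    serviceaccounts.openshift.io/oauth-redirectreference.first: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"jenkins"}}'
EOF

# The resulting OAuth client_id is:
#   system:serviceaccount:my-project:jenkins-client
# and any API token of the service account can be used as client_secret:
oc sa get-token jenkins-client -n my-project
```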
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_cpp_client_guide/rhdg-docs_datagrid |
19.3.4. Mail Transport Agent (MTA) Configuration | 19.3.4. Mail Transport Agent (MTA) Configuration A Mail Transport Agent (MTA) is essential for sending email. A Mail User Agent (MUA) such as Evolution , Thunderbird , or Mutt , is used to read and compose email. When a user sends an email from an MUA, the message is handed off to the MTA, which sends the message through a series of MTAs until it reaches its destination. Even if a user does not plan to send email from the system, some automated tasks or system programs might use the /bin/mail command to send email containing log messages to the root user of the local system. Red Hat Enterprise Linux 6 provides two MTAs: Postfix and Sendmail. If both are installed, Postfix is the default MTA. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-email-switchmail |
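For example, an automated task could hand a short message to the local root user through the default MTA like this; the subject and body are placeholders.

```
# Send a brief message to root via the configured MTA (Postfix by default)
echo "Backup completed at $(date)" | /bin/mail -s "nightly backup report" root
```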
Chapter 1. January 2025 | Chapter 1. January 2025 1.1. Basic authentication is no longer supported If you manually configured the cost management metrics operator to use basic authentication, you need to set up service account authentication. For more information, see Configuring service account authentication for the cost management metrics operator and Transition of Red Hat Hybrid Cloud Console APIs from basic authentication to token-based authentication via service accounts . | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/whats_new_in_cost_management/january_2025 |
Chapter 9. StorageVersionMigration [migration.k8s.io/v1alpha1] | Chapter 9. StorageVersionMigration [migration.k8s.io/v1alpha1] Description StorageVersionMigration represents a migration of stored data to the latest storage version. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the migration. status object Status of the migration. 9.1.1. .spec Description Specification of the migration. Type object Required resource Property Type Description continueToken string The token used in the list options to get the chunk of objects to migrate. When the .status.conditions indicates the migration is "Running", users can use this token to check the progress of the migration. resource object The resource that is being migrated. The migrator sends requests to the endpoint serving the resource. Immutable. 9.1.2. .spec.resource Description The resource that is being migrated. The migrator sends requests to the endpoint serving the resource. Immutable. Type object Property Type Description group string The name of the group. resource string The name of the resource. version string The name of the version. 9.1.3. .status Description Status of the migration. Type object Property Type Description conditions array The latest available observations of the migration's current state. conditions[] object Describes the state of a migration at a certain point. 9.1.4. .status.conditions Description The latest available observations of the migration's current state. Type array 9.1.5. .status.conditions[] Description Describes the state of a migration at a certain point. Type object Required status type Property Type Description lastUpdateTime string The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of the condition. 9.2. API endpoints The following API endpoints are available: /apis/migration.k8s.io/v1alpha1/storageversionmigrations DELETE : delete collection of StorageVersionMigration GET : list objects of kind StorageVersionMigration POST : create a StorageVersionMigration /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name} DELETE : delete a StorageVersionMigration GET : read the specified StorageVersionMigration PATCH : partially update the specified StorageVersionMigration PUT : replace the specified StorageVersionMigration /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name}/status GET : read status of the specified StorageVersionMigration PATCH : partially update status of the specified StorageVersionMigration PUT : replace status of the specified StorageVersionMigration 9.2.1. 
/apis/migration.k8s.io/v1alpha1/storageversionmigrations HTTP method DELETE Description delete collection of StorageVersionMigration Table 9.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind StorageVersionMigration Table 9.2. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigrationList schema 401 - Unauthorized Empty HTTP method POST Description create a StorageVersionMigration Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body StorageVersionMigration schema Table 9.5. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 201 - Created StorageVersionMigration schema 202 - Accepted StorageVersionMigration schema 401 - Unauthorized Empty 9.2.2. /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the StorageVersionMigration HTTP method DELETE Description delete a StorageVersionMigration Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified StorageVersionMigration Table 9.9. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified StorageVersionMigration Table 9.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.11. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified StorageVersionMigration Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body StorageVersionMigration schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 201 - Created StorageVersionMigration schema 401 - Unauthorized Empty 9.2.3. /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name}/status Table 9.15. Global path parameters Parameter Type Description name string name of the StorageVersionMigration HTTP method GET Description read status of the specified StorageVersionMigration Table 9.16. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified StorageVersionMigration Table 9.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.18. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified StorageVersionMigration Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body StorageVersionMigration schema Table 9.21. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 201 - Created StorageVersionMigration schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/storage_apis/storageversionmigration-migration-k8s-io-v1alpha1 |
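A minimal sketch of a StorageVersionMigration object assembled from the spec fields documented above; the choice of resource to migrate (secrets in the core group) is an example.

```
# Create a StorageVersionMigration for the core "secrets" resource (example only)
cat <<'EOF' | oc create -f -
apiVersion: migration.k8s.io/v1alpha1
kind: StorageVersionMigration
metadata:
  name: secrets-storage-migration
spec:
  resource:
    group: ""
    resource: secrets
    version: v1
EOF
```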
Chapter 7. Job slicing | Chapter 7. Job slicing A sliced job refers to the concept of a distributed job. Use distributed jobs for running a job across a large number of hosts. You can then run many ansible-playbooks, each on a subset of an inventory that you can be schedule in parallel across a cluster. By default, Ansible runs jobs from a single control instance. For jobs that do not require cross-host orchestration, job slicing takes advantage of automation controller's ability to distribute work to many nodes in a cluster. Job slicing works by adding a Job Template field job_slice_count , which specifies the number of jobs into which to slice the Ansible run. When this number is greater than 1 , automation controller generates a workflow from a job template instead of a job. The inventory is distributed evenly among the slice jobs. The workflow job is then started, and proceeds as though it were a normal workflow. When launching a job, the API returns either a job resource (if job_slice_count = 1) or a workflow job resource. The corresponding User Interface (UI) redirects to the appropriate screen to display the status of the run. 7.1. Job slice considerations When setting up job slices, consider the following: A sliced job creates a workflow job, which then creates jobs. A job slice consists of a job template, an inventory, and a slice count. When executed, a sliced job splits each inventory into several "slice size" chunks. It then queues jobs of ansible-playbook runs on each chunk of the appropriate inventory. The inventory fed into ansible-playbook is a shortened version of the original inventory that only has the hosts in that particular slice. The completed sliced job that displays on the Jobs list are labeled accordingly, with the number of sliced jobs that have run: These sliced jobs follow normal scheduling behavior (number of forks, queuing due to capacity, assignation to instance groups based on inventory mapping). Note Job slicing is intended to scale job executions horizontally. Enabling job slicing on a job template divides an inventory to be acted upon in the number of slices configured at launch time and then starts a job for each slice. Normally, the number of slices is equal to or less than the number of automation controller nodes. You can set an extremely high number of job slices but it can cause performance degradation. The job scheduler is not designed to simultaneously schedule thousands of workflow nodes, which are what the sliced jobs become. Sliced job templates with prompts or extra variables behave the same as standard job templates, applying all variables and limits to the entire set of slice jobs in the resulting workflow job. However, when passing a limit to a sliced job, if the limit causes slices to have no hosts assigned, those slices will fail, causing the overall job to fail. A job slice job status of a distributed job is calculated in the same manner as workflow jobs. It fails if there are any unhandled failures in its sub-jobs. Any job that intends to orchestrate across hosts (rather than just applying changes to individual hosts) must not be configured as a slice job. Any job that does, can fail, and automation controller does not try to discover or account for playbooks that fail when run as slice jobs. 7.2. Job slice execution behavior When jobs are sliced, they can run on any node. Insufficient capacity in the system can cause some to run at a different time. 
When slice jobs are running, job details display the workflow and job slices currently running, and a link to view their details individually. By default, job templates are not normally configured to execute simultaneously (you must check allow_simultaneous in the API or Concurrent jobs in the UI). Slicing overrides this behavior and implies allow_simultaneous even if that setting is clear. See Job templates for information about how to specify this, and the number of job slices on your job template configuration. The Job templates section provides additional detail on performing the following operations in the UI: Launch workflow jobs with a job template that has a slice number greater than one. Cancel the whole workflow or individual jobs after launching a slice job template. Relaunch the whole workflow or individual jobs after slice jobs finish running. View the details about the workflow and slice jobs after launching a job template. Search slice jobs specifically after you create them, according to the section, "Searching job slices"). 7.3. Searching job slices To make it easier to find slice jobs, use the search functionality to apply a search filter to: Job lists to show only slice jobs Job lists to show only parent workflow jobs of job slices Job template lists to only show job templates that produce slice jobs Procedure Search for slice jobs by using one of the following methods: To show only slice jobs in job lists, as with most cases, you can filter either on the type (jobs here) or unified_jobs : /api/v2/jobs/?job_slice_count__gt=1 To show only parent workflow jobs of job slices: /api/v2/workflow_jobs/?job_template__isnull=false To show only job templates that produce slice jobs: /api/v2/job_templates/?job_slice_count__gt=1 | [
"/api/v2/jobs/?job_slice_count__gt=1",
"/api/v2/workflow_jobs/?job_template__isnull=false",
"/api/v2/job_templates/?job_slice_count__gt=1"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/controller-job-slicing |
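The filters above can be exercised directly against the API, for example with curl; the controller hostname and token are placeholders.

```
# List only slice jobs (host and token are placeholders)
curl -s -H "Authorization: Bearer $CONTROLLER_TOKEN" \
  "https://controller.example.com/api/v2/jobs/?job_slice_count__gt=1"

# List only the parent workflow jobs produced by sliced job templates
curl -s -H "Authorization: Bearer $CONTROLLER_TOKEN" \
  "https://controller.example.com/api/v2/workflow_jobs/?job_template__isnull=false"

# List only job templates configured to produce slice jobs
curl -s -H "Authorization: Bearer $CONTROLLER_TOKEN" \
  "https://controller.example.com/api/v2/job_templates/?job_slice_count__gt=1"
```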
Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices | Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Amazon Web Services (AWS) EBS (type, gp2-csi or gp3-csi ) that provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling key rotation when using KMS Security common practices require periodic encryption key rotation. You can enable key rotation when using KMS using this procedure. To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to either Namespace , StorageClass , or PersistentVolumeClaims (in order of precedence). <value> can be either @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. 
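The Vault procedures in sections 2.2 and 2.3 above refer to CLI steps whose exact commands are not reproduced in this excerpt; a hedged sketch, assuming a backend path and policy both named odf and a namespace named my-namespace for the key-rotation annotation, might look like the following.

```
# Enable the KV backend path ("odf" is an assumed path name); use kv-v2 for API version 2
vault secrets enable -path=odf kv
# vault secrets enable -path=odf kv-v2

# Restrict operations on the secret path with a policy (policy body is an assumption)
echo 'path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}' | vault policy write odf -

# Token authentication method: create a token tied to the policy
vault token create -policy=odf -format json

# Key rotation (section 2.3.1): annotate a namespace, StorageClass, or PVC with a schedule
oc annotate namespace my-namespace keyrotation.csiaddons.openshift.io/schedule='@weekly'
```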
Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . As of OpenShift Data Foundation version 4.12, you can choose gp2-csi or gp3-csi as the storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). 
StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. 
To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server- * (1 pod on any storage node) * ocs-client-operator -* (1 pod on any storage node) ocs-client-operator-console -* (1 pod on any storage node) ocs-provider-server -* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. 
In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.5.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_amazon_web_services/deploy-using-dynamic-storage-devices-aws |
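The verification steps in this chapter are described through the OpenShift Web Console. As a rough command-line cross-check, the following oc commands inspect the same resources; this is a sketch only, and the exact output columns can vary between OpenShift Data Foundation versions:

# Operator and storage cluster status (expect Succeeded / Ready)
$ oc get csv -n openshift-storage
$ oc get storagecluster -n openshift-storage
# Pods should be Running or Completed, as listed in section 2.5.1
$ oc get pods -n openshift-storage
# Storage classes created by the deployment, as listed in section 2.5.4
$ oc get storageclass | grep -E 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs|openshift-storage.noobaa.io'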
Chapter 11. Reviewing monitoring dashboards | Chapter 11. Reviewing monitoring dashboards OpenShift Container Platform 4.12 provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components and user-defined workloads. Use the Administrator perspective to access dashboards for the core OpenShift Container Platform components, including the following items: API performance etcd Kubernetes compute resources Kubernetes network resources Prometheus USE method dashboards relating to cluster and node performance Figure 11.1. Example dashboard in the Administrator perspective Use the Developer perspective to access Kubernetes compute resources dashboards that provide the following application metrics for a selected project: CPU usage Memory usage Bandwidth information Packet rate information Figure 11.2. Example dashboard in the Developer perspective Note In the Developer perspective, you can view dashboards for only one project at a time. 11.1. Reviewing monitoring dashboards as a cluster administrator In the Administrator perspective, you can view dashboards relating to core OpenShift Container Platform cluster components. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as etcd and Prometheus dashboards, produce additional sub-menus when selected. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. 11.2. Reviewing monitoring dashboards as a developer Use the Developer perspective to view Kubernetes compute resources dashboards of a selected project. Prerequisites You have access to the cluster as a developer or as a user. You have view permissions for the project that you are viewing the dashboard for. Procedure In the Developer perspective in the OpenShift Container Platform web console, navigate to Observe Dashboard . Select a project from the Project: drop-down list. Select a dashboard from the Dashboard drop-down list to see the filtered metrics. Note All dashboards produce additional sub-menus when selected, except Kubernetes / Compute Resources / Namespace (Pods) . Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources Monitoring project and application metrics using the Developer perspective | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/reviewing-monitoring-dashboards |
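The dashboards above are reached through the web console. For a quick command-line spot check of the per-project resource usage that the Developer perspective dashboards visualize, the oc adm top commands can be used; the project name my-project below is a placeholder, and these commands report point-in-time usage rather than the historical graphs shown in the dashboards:

# Current CPU and memory usage of pods in a project
$ oc adm top pods -n my-project
# Current CPU and memory usage per node
$ oc adm top nodes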
Chapter 1. Overview of machine management | Chapter 1. Overview of machine management You can use machine management to flexibly work with underlying infrastructure such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere to manage the OpenShift Container Platform cluster. You can control the cluster and perform auto-scaling, such as scaling up and down the cluster based on specific workload policies. It is important to have a cluster that adapts to changing workloads. The OpenShift Container Platform cluster can horizontally scale up and down when the load increases or decreases. Machine management is implemented as a custom resource definition (CRD). A CRD object defines a new unique object Kind in the cluster and enables the Kubernetes API server to handle the object's entire lifecycle. The Machine API Operator provisions the following resources: MachineSet Machine ClusterAutoscaler MachineAutoscaler MachineHealthCheck 1.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.18 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.18 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. 
You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. Additional resources Machine phases and lifecycle 1.2. Managing compute machines As a cluster administrator, you can perform the following actions: Create a compute machine set for the following cloud providers: AWS Azure Azure Stack Hub GCP IBM Cloud IBM Power Virtual Server Nutanix RHOSP vSphere Create a machine set for a bare metal deployment: Creating a compute machine set on bare metal Manually scale a compute machine set by adding or removing a machine from the compute machine set. Modify a compute machine set through the MachineSet YAML configuration file. Delete a machine. Create infrastructure compute machine sets . Configure and deploy a machine health check to automatically fix damaged machines in a machine pool. 1.3. Managing control plane machines As a cluster administrator, you can perform the following actions: Update your control plane configuration with a control plane machine set for the following cloud providers: Amazon Web Services Google Cloud Platform Microsoft Azure Nutanix Red Hat OpenStack Platform (RHOSP) VMware vSphere Configure and deploy a machine health check to automatically recover unhealthy control plane machines. 1.4. Applying autoscaling to an OpenShift Container Platform cluster You can automatically scale your OpenShift Container Platform cluster to ensure flexibility for changing workloads. To autoscale your cluster, you must first deploy a cluster autoscaler, and then deploy a machine autoscaler for each compute machine set. The cluster autoscaler increases and decreases the size of the cluster based on deployment needs. The machine autoscaler adjusts the number of machines in the compute machine sets that you deploy in your OpenShift Container Platform cluster. 1.5. Adding compute machines on user-provisioned infrastructure User-provisioned infrastructure is an environment where you can deploy infrastructure such as compute, network, and storage resources that host the OpenShift Container Platform. You can add compute machines to a cluster on user-provisioned infrastructure during or after the installation process. 1.6. 
Adding RHEL compute machines to your cluster As a cluster administrator, you can perform the following actions: Add Red Hat Enterprise Linux (RHEL) compute machines , also known as worker machines, to a user-provisioned infrastructure cluster or an installation-provisioned infrastructure cluster. Add more Red Hat Enterprise Linux (RHEL) compute machines to an existing cluster. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_management/overview-of-machine-management |
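To make the overview more concrete, the following is a minimal sketch of a MachineAutoscaler that bounds an existing compute machine set, followed by the manual scaling alternative. The machine set name worker-us-east-1a is an assumption and must be replaced with the name of a MachineSet that exists in the openshift-machine-api namespace of your cluster:

apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a

Remember that a MachineAutoscaler only takes effect once a ClusterAutoscaler resource exists. To scale the same compute machine set manually instead:

$ oc scale machineset worker-us-east-1a --replicas=3 -n openshift-machine-api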
Preface | Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Power clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Both internal and external OpenShift Data Foundation clusters are supported on IBM Power. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal-Attached Devices mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode using Red Hat Ceph Storage External mode | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_power/preface-ibm-power-systems |
Chapter 4. Configuring Streams for Apache Kafka Proxy | Chapter 4. Configuring Streams for Apache Kafka Proxy Fine-tune your deployment by configuring Streams for Apache Kafka Proxy resources to include additional features according to your specific requirements. 4.1. Example Streams for Apache Kafka Proxy configuration Streams for Apache Kafka Proxy configuration is defined in a ConfigMap resource. Use the data properties of the ConfigMap resource to configure the following: Virtual clusters that represent the Kafka clusters Network addresses for broker communication in a Kafka cluster Filters to introduce additional functionality to the Kafka deployment In this example, configuration for the Record Encryption filter is shown. Example Streams for Apache Kafka Proxy configuration apiVersion: v1 kind: ConfigMap metadata: name: proxy-config data: config.yaml: | adminHttp: 1 endpoints: prometheus: {} virtualClusters: 2 my-cluster-proxy: 3 targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 4 tls: 5 trust: storeFile: /opt/proxy/trust/ca.p12 storePassword: passwordFile: /opt/proxy/trust/ca.password clusterNetworkAddressConfigProvider: 6 type: SniRoutingClusterNetworkAddressConfigProvider Config: bootstrapAddress: mycluster-proxy.kafka:9092 brokerAddressPattern: brokerUSD(nodeId).mycluster-proxy.kafka logNetwork: false 7 logFrames: false tls: 8 key: storeFile: /opt/proxy/server/key-material/keystore.p12 storePassword: passwordFile: /opt/proxy/server/keystore-password/storePassword filters: 9 - type: EnvelopeEncryption 10 config: 11 kms: VaultKmsService kmsConfig: vaultTransitEngineUrl: https://vault.vault.svc.cluster.local:8200/v1/transit vaultToken: passwordFile: /opt/proxy/server/token.txt tls: 12 key: storeFile: /opt/cert/server.p12 storePassword: passwordFile: /opt/cert/store.password keyPassword: passwordFile: /opt/cert/key.password storeType: PKCS12 selector: TemplateKekSelector selectorConfig: template: "USD{topicName}" 1 Enables metrics for the proxy. 2 Virtual cluster configuration. 3 The name of the virtual cluster. 4 The bootstrap address of the target physical Kafka Cluster being proxied. 5 TLS configuration for the connection to the target cluster. 6 The configuration for the cluster network address configuration provider that controls how the virtual cluster is presented to the network. 7 Logging is disabled by default. Enable logging related to network activity ( logNetwork ) and messages ( logFrames ) by setting the logging properties to true . 8 TLS encryption for securing connections with the clients. 9 Filter configuration. 10 The type of filter, which is the Record Encryption filter in this example. 11 The configuration specific to the type of filter. 12 The Record Encryption filter requires a connection to Vault. If required, you can also specify the credentials for TLS authentication with Vault, with key names under which TLS certificates are stored. 4.2. Configuring virtual clusters A Kafka cluster is represented by the proxy as a virtual cluster. Clients connect to the virtual cluster rather than the actual cluster. When Streams for Apache Kafka Proxy is deployed, it includes configuration to create virtual clusters. A virtual cluster has exactly one target cluster, but many virtual clusters can target the same cluster. Each virtual cluster targets a single listener on the target cluster, so multiple listeners on the Kafka side are represented as multiple virtual clusters by the proxy. 
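For example, a Kafka cluster that exposes a plaintext listener on port 9092 and a TLS listener on port 9093 could be presented as two virtual clusters. The following sketch is illustrative only; the proxy bootstrap hostnames and port numbers are assumptions and must be adapted to your environment:

virtualClusters:
  my-cluster-plain:
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092
    clusterNetworkAddressConfigProvider:
      type: PortPerBrokerClusterNetworkAddressConfigProvider
      config:
        bootstrapAddress: plain.proxy.example.com:9192
  my-cluster-tls:
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
      tls: {}
    clusterNetworkAddressConfigProvider:
      type: PortPerBrokerClusterNetworkAddressConfigProvider
      config:
        bootstrapAddress: tls.proxy.example.com:9292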
Clients connect to a virtual cluster using a bootstrap_servers address. The virtual cluster has a bootstrap address that maps to each broker in the target cluster. When a client connects to the proxy, communication is proxied to the target broker by rewriting the address. Responses back to clients are rewritten to reflect the appropriate network addresses of the virtual clusters. You can secure virtual cluster connections from clients and to target clusters. Streams for Apache Kafka Proxy accepts keys and certificates in PEM (Privacy Enhanced Mail), PKCS #12 (Public-Key Cryptography Standards), or JKS (Java KeyStore) keystore format. 4.2.1. Securing connections from clients To secure client connections to virtual clusters, configure TLS on the virtual cluster by doing the following: Obtain a CA (Certificate Authority) certificate for the virtual cluster from a Certificate Authority. When requesting the certificate, ensure it matches the names of the virtual cluster's bootstrap and broker addresses. This might require wildcard certificates and Subject Alternative Names (SANs). Specify TLS credentials in the virtual cluster configuration using tls properties. Example PKCS #12 configuration virtualClusters: my-cluster-proxy: tls: key: storeFile: <path>/server.p12 1 storePassword: passwordFile: <path>/store.password 2 keyPassword: passwordFile: <path>/key.password 3 storeType: PKCS12 4 # ... 1 PKCS #12 store for the public CA certificate of the virtual cluster. 2 Password to protect the PKCS #12 store. 3 (Optional) Password for the key. If a password is not specified, the keystore's password is used to decrypt the key too. 4 (Optional) Keystore type. If a keystore type is not specified, the default JKS (Java Keystore) type is used. Note TLS is recommended on Kafka clients and virtual clusters for production configurations. Example PEM configuration virtualClusters: my-cluster-proxy: tls: key: privateKeyFile: <path>/server.key 1 certificateFile: <path>/server.crt 2 keyPassword: passwordFile: <path>/key.password 3 # ... 1 Private key of the virtual cluster. 2 Public CA certificate of the virtual cluster. 3 (Optional) Password for the key. If required, configure the insecure property to disable trust and establish insecure connections with any Kafka Cluster, irrespective of certificate validity. However, this option is not recommended for production use. Example to enable insecure TLS virtualClusters: demo: targetCluster: bootstrap_servers: myprivatecluster:9092 tls: trust: insecure: true 1 #... # ... 1 Enables insecure TLS. 4.2.2. Securing connections to target clusters To secure a virtual cluster connection to a target cluster, configure TLS on the virtual cluster. The target cluster must already be configured to use TLS. Specify TLS for the virtual cluster configuration using targetCluster.tls properties Use an empty object ( {} ) to inherit trust from the OpenShift platform. This option is suitable if the target cluster is using a TLS certificate signed by a public CA. Example target cluster configuration for TLS virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 tls: {} #... If it is using a TLS certificate signed by a private CA, you must add truststore configuration for the target cluster. 
Example truststore configuration for a target cluster virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 tls: trust: storeFile: <path>/trust.p12 1 storePassword: passwordFile: <path>/store.password 2 storeType: PKCS12 3 #... 1 PKCS #12 store for the public CA certificate of the Kafka cluster. 2 Password to access the public Kafka cluster CA certificate. 3 (Optional) Keystore type. If a keystore type is not specified, the default JKS (Java Keystore) type is used. For mTLS, you can add keystore configuration for the virtual cluster too. Example keystore and truststore configuration for mTLS virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093:9092 tls: key: privateKeyFile: <path>/client.key 1 certificateFile: <path>/client.crt 2 trust: storeFile: <path>/server.crt storeType: PEM # ... 1 Private key of the virtual cluster. 2 Public CA certificate of the virtual cluster. For the purposes of testing outside of a production environment, you can set the insecure property to true to turn off TLS so that the Streams for Apache Kafka Proxy can connect to any Kafka cluster. Example configuration to turn off TLS virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: myprivatecluster:9092 tls: trust: insecure: true #... 4.3. Configuring network addresses Virtual cluster configuration requires a network address configuration provider that manages network communication and provides broker address information to clients. Streams for Apache Kafka Proxy has two built-in providers: Broker address provider ( PortPerBrokerClusterNetworkAddressConfigProvider ) The per-broker network address configuration provider opens one port for a virtual cluster's bootstrap address and one port for each broker in the target Kafka cluster. The ports are maintained dynamically. For example, if a broker is removed from the cluster, the port assigned to it is closed. SNI routing address provider ( SniRoutingClusterNetworkAddressConfigProvider ) The SNI routing provider opens a single port for all virtual clusters or a port for each. For the Kafka cluster, you can open a port for the whole cluster or each broker. The SNI routing provider uses SNI information to determine where to route the traffic. Example broker address provider configuration clusterNetworkAddressConfigProvider: type: PortPerBrokerClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 1 brokerAddressPattern: mybroker-USD(nodeId).mycluster.kafka.com 2 brokerStartPort: 9193 3 numberOfBrokerPorts: 3 4 bindAddress: 192.168.0.1 5 1 The hostname and port of the bootstrap address used by Kafka clients. 2 (Optional) The broker address pattern used to form broker addresses. If not defined, it defaults to the hostname part of the bootstrap address and the port number allocated to the broker. The USD(nodeId) token is replaced by the broker's node.id (or broker.id if node.id is not set). 3 (Optional) The starting number for broker port range. Defaults to the port of the bootstrap address plus 1. 4 (Optional) The maximum number of broker ports that are permitted. Defaults to 3. 5 (Optional) The bind address used when binding the ports. If undefined, all network interfaces are bound. 
The example broker address configuration creates the following broker addresses: mybroker-0.mycluster.kafka.com:9193 mybroker-1.mycluster.kafka.com:9194 mybroker-2.mycluster.kafka.com:9195 Note For a configuration with multiple physical clusters, ensure that the numberOfBrokerPorts is set to (number of brokers * number of listeners per broker) + number of bootstrap listeners across all clusters. For instance, if there are two physical clusters with 3 nodes each, and each broker has one listener, the configuration requires a value of 8 (comprising 3 ports for broker listeners + 1 port for the bootstrap listener in each cluster). Example SNI routing address provider configuration clusterNetworkAddressConfigProvider: type: SniRoutingClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 1 brokerAddressPattern: mybroker-$(nodeId).mycluster.kafka.com bindAddress: 192.168.0.1 1 A single address for all traffic, including bootstrap address and brokers. In the SNI routing address configuration, the brokerAddressPattern specification is mandatory, as it is required to generate routes for each broker. | [
"apiVersion: v1 kind: ConfigMap metadata: name: proxy-config data: config.yaml: | adminHttp: 1 endpoints: prometheus: {} virtualClusters: 2 my-cluster-proxy: 3 targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 4 tls: 5 trust: storeFile: /opt/proxy/trust/ca.p12 storePassword: passwordFile: /opt/proxy/trust/ca.password clusterNetworkAddressConfigProvider: 6 type: SniRoutingClusterNetworkAddressConfigProvider Config: bootstrapAddress: mycluster-proxy.kafka:9092 brokerAddressPattern: brokerUSD(nodeId).mycluster-proxy.kafka logNetwork: false 7 logFrames: false tls: 8 key: storeFile: /opt/proxy/server/key-material/keystore.p12 storePassword: passwordFile: /opt/proxy/server/keystore-password/storePassword filters: 9 - type: EnvelopeEncryption 10 config: 11 kms: VaultKmsService kmsConfig: vaultTransitEngineUrl: https://vault.vault.svc.cluster.local:8200/v1/transit vaultToken: passwordFile: /opt/proxy/server/token.txt tls: 12 key: storeFile: /opt/cert/server.p12 storePassword: passwordFile: /opt/cert/store.password keyPassword: passwordFile: /opt/cert/key.password storeType: PKCS12 selector: TemplateKekSelector selectorConfig: template: \"USD{topicName}\"",
"virtualClusters: my-cluster-proxy: tls: key: storeFile: <path>/server.p12 1 storePassword: passwordFile: <path>/store.password 2 keyPassword: passwordFile: <path>/key.password 3 storeType: PKCS12 4 #",
"virtualClusters: my-cluster-proxy: tls: key: privateKeyFile: <path>/server.key 1 certificateFile: <path>/server.crt 2 keyPassword: passwordFile: <path>/key.password 3 ...",
"virtualClusters: demo: targetCluster: bootstrap_servers: myprivatecluster:9092 tls: trust: insecure: true 1 # ...",
"virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 tls: {} #",
"virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 tls: trust: storeFile: <path>/trust.p12 1 storePassword: passwordFile: <path>/store.password 2 storeType: PKCS12 3 #",
"virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093:9092 tls: key: privateKeyFile: <path>/client.key 1 certificateFile: <path>/client.crt 2 trust: storeFile: <path>/server.crt storeType: PEM",
"virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: myprivatecluster:9092 tls: trust: insecure: true #",
"clusterNetworkAddressConfigProvider: type: PortPerBrokerClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 1 brokerAddressPattern: mybroker-USD(nodeId).mycluster.kafka.com 2 brokerStartPort: 9193 3 numberOfBrokerPorts: 3 4 bindAddress: 192.168.0.1 5",
"mybroker-0.mycluster.kafka.com:9193 mybroker-1.mycluster.kafka.com:9194 mybroker-2.mycluster.kafka.com:9194",
"clusterNetworkAddressConfigProvider: type: SniRoutingClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 1 brokerAddressPattern: mybroker-USD(nodeId).mycluster.kafka.com bindAddress: 192.168.0.1"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_the_streams_for_apache_kafka_proxy/assembly-configuring-proxy-str |
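Once a virtual cluster is exposed, Kafka clients are configured with the virtual cluster's bootstrap address instead of the addresses of the underlying brokers. The following client properties sketch targets the SNI routing example above and assumes the client already trusts the certificate presented by the proxy; the truststore path and password are placeholders:

bootstrap.servers=mycluster.kafka.com:9192
security.protocol=SSL
ssl.truststore.location=/opt/client/truststore.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=<truststore_password>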
33.6. Authentication | 33.6. Authentication Figure 33.9. Authentication In the Authentication section, select whether to use shadow passwords and MD5 encryption for user passwords. These options are highly recommended and chosen by default. The Authentication Configuration options allow you to configure the following methods of authentication: NIS LDAP Kerberos 5 Hesiod SMB Name Switch Cache These methods are not enabled by default. To enable one or more of these methods, click the appropriate tab, click the checkbox to Enable , and enter the appropriate information for the authentication method. Refer to the Red Hat Enterprise Linux Deployment Guide for more information about the options. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-redhat-config-kickstart-auth |
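The same selections can also be expressed directly in a kickstart file with the auth (or authconfig ) command rather than through the graphical options. The following line is a sketch only; the NIS domain and server names are placeholders, and the full list of supported options should be checked against the kickstart options documentation for your release:

auth --enableshadow --enablemd5 --enablenis --nisdomain=example.com --nisserver=nis.example.com --enablecache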
Chapter 7. Annobin | Chapter 7. Annobin The Annobin project consists of the annobin plugin and the annocheck program. The annobin plugin scans the GNU Compiler Collection (GCC) command line, the compilation state, and the compilation process, and generates the ELF notes. The ELF notes record how the binary was built and provide information for the annocheck program to perform security hardening checks. The security hardening checker is part of the annocheck program and is enabled by default. It checks the binary files to determine whether the program was built with necessary security hardening options and compiled correctly. annocheck is able to recursively scan directories, archives, and RPM packages for ELF object files. Note The files must be in ELF format. annocheck does not handle any other binary file types. 7.1. Installing Annobin In Red Hat Developer Toolset, the annobin plugin and the annocheck program are provided by the devtoolset-12-gcc package and are installed as described in Section 1.5.3, "Installing Optional Packages" . 7.2. Using Annobin Plugin To pass options to the annobin plugin with gcc , use: Note that you can execute any command using the scl utility, causing it to be run with the Red Hat Developer Toolset binaries used in preference to the Red Hat Enterprise Linux system equivalent. This allows you to run a shell session with Red Hat Developer Toolset as the default: 7.3. Using Annocheck To scan files, directories or RPM packages with the annocheck program: Note annocheck only looks for the ELF files. Other file types are ignored. Note that you can execute any command using the scl utility, causing it to be run with the Red Hat Developer Toolset binaries used in preference to the Red Hat Enterprise Linux system equivalent. This allows you to run a shell session with Red Hat Developer Toolset as the default: Note To verify the version of annocheck you are using at any point: Red Hat Developer Toolset's annocheck executable path will begin with /opt . Alternatively, you can use the following command to confirm that the version number matches that for Red Hat Developer Toolset annocheck : 7.4. Additional Resources For more information about annocheck , annobin and its features, see the resources listed below. Installed Documentation annocheck (1) - The manual page for the annocheck utility provides detailed information on its usage. To display the manual page for the version included in Red Hat Developer Toolset: annobin (1) - The manual page for the annobin utility provides detailed information on its usage. To display the manual page for the version included in Red Hat Developer Toolset: | [
"scl enable devtoolset-12 'gcc -fplugin=annobin -fplugin-arg-annobin- option file-name '",
"scl enable devtoolset-12 'bash'",
"scl enable devtoolset-12 'annocheck file-name '",
"scl enable devtoolset-12 'bash'",
"which annocheck",
"annocheck --version",
"scl enable devtoolset-12 'man annocheck'",
"scl enable devtoolset-12 'man annobin'"
] | https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/chap-annobin |
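Putting the plugin and the checker together, a typical workflow is to build a binary with annobin enabled and then run annocheck on the result. The source file hello.c and the extra hardening flags below are illustrative and can be replaced with the flags your build already uses:

# Build with the annobin plugin so that the ELF notes are generated
$ scl enable devtoolset-12 'gcc -fplugin=annobin -O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong -o hello hello.c'
# Check the resulting binary for the expected hardening options
$ scl enable devtoolset-12 'annocheck hello'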
Chapter 7. Postinstallation node tasks | Chapter 7. Postinstallation node tasks After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements through certain node tasks. 7.1. Adding RHEL compute machines to an OpenShift Container Platform cluster Understand and work with RHEL compute nodes. 7.1.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.13, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster. If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. 7.1.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 8.6 and later with "Minimal" installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported. If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. 
Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. Additional resources Deleting nodes 7.1.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 7.1.3. Preparing the machine to run the playbook Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.13 cluster, you must prepare a RHEL 8 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it. Prerequisites Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Log in as a user with cluster-admin permission. Procedure Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the RHEL 8 machine. One way to accomplish this is to use the same machine that you used to install the cluster. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. Important If you use SSH key-based authentication, you must manage the key with an SSH agent. If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it: Register the machine with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.13: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.13-for-rhel-8-x86_64-rpms" Install the required packages, including openshift-ansible : # yum install openshift-ansible openshift-clients jq The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. 
The openshift-clients provides the oc CLI, and the jq package improves the display of JSON output on your command line. 7.1.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories Enable only the repositories required by OpenShift Container Platform 4.13: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.13-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 7.1.5. Adding a RHEL compute machine to your cluster You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.13 cluster. Prerequisites You installed the required packages and performed the necessary configuration on the machine that you run the playbook on. You prepared the RHEL hosts for installation. Procedure Perform the following steps on the machine that you prepared to run the playbook: Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables: 1 Specify the user name that runs the Ansible tasks on the remote compute machines. 2 If you do not specify root for the ansible_user , you must set ansible_become to True and assign the user sudo permissions. 3 Specify the path and file name of the kubeconfig file for your cluster. 4 List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 7.1.6. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. 
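The inventory file referred to by callouts 1 to 4 in the procedure above is not reproduced in this excerpt. As a minimal sketch, assuming two hypothetical RHEL hosts and a kubeconfig file in the current user's home directory, it could look like the following; the variables it sets are the ones described in the table below:

[all:vars]
ansible_user=root
#ansible_become=True
openshift_kubeconfig_path="~/.kube/config"

[new_workers]
mycluster-rhel8-0.example.com
mycluster-rhel8-1.example.com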
Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the values of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify and define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. 7.1.7. Optional: Removing RHCOS compute machines from a cluster After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources. Prerequisites You have added RHEL compute machines to your cluster. Procedure View the list of machines and record the node names of the RHCOS compute machines: USD oc get nodes -o wide For each RHCOS compute machine, delete the node: Mark the node as unschedulable by running the oc adm cordon command: USD oc adm cordon <node_name> 1 1 Specify the node name of one of the RHCOS compute machines. Drain all the pods from the node: USD oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1 1 Specify the node name of the RHCOS compute machine that you isolated. Delete the node: USD oc delete nodes <node_name> 1 1 Specify the node name of the RHCOS compute machine that you drained. Review the list of compute machines to ensure that only the RHEL nodes remain: USD oc get nodes -o wide Remove the RHCOS machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines. 7.2. Adding RHCOS compute machines to an OpenShift Container Platform cluster You can add more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to your OpenShift Container Platform cluster on bare metal. Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. You can either use an ISO image or network PXE booting to create the machines. 7.2.1. Prerequisites You installed a cluster on bare metal. You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . 7.2.2. Creating RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. You must have the OpenShift CLI ( oc ) installed. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the ignition files are available on the URLs. 
The following example gets the Ignition config files for the compute node: USD curl -k http://<HTTP_server>/worker.ign You can access the ISO image for booting your new machine by running to following command: RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location') Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 7.2.3. Creating RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. 
You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and GRUB as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. 
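The full boot-loader listings for these variants are environment specific and are not reproduced here. Purely as an illustration of how the parameters in the BIOS PXE variant fit together, a pxelinux menu entry might be assembled like the following sketch; every file name, URL, and device in it is a placeholder, and you should confirm the exact kernel arguments against the RHCOS boot artifacts for your release.

cat <<'EOF' > /var/lib/tftpboot/pxelinux.cfg/default
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-live-kernel-x86_64
    APPEND initrd=http://<HTTP_server>/rhcos-live-initramfs.x86_64.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-live-rootfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign
EOF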
Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 7.2.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. 
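When you add several machines at once, new CSRs can continue to appear for a few minutes as each kubelet starts and then requests its serving certificate. One convenient pattern is to poll for pending requests and approve them in a short loop; the following is only a sketch, and you should be comfortable approving every pending CSR in the cluster before running it.

# Approve any pending CSRs every 30 seconds for roughly five minutes.
for i in $(seq 1 10); do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 30
done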
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 7.2.5. Adding a new RHCOS worker node with a custom /var partition in AWS OpenShift Container Platform supports partitioning devices during installation by using machine configs that are processed during the bootstrap. However, if you use /var partitioning, the device name must be determined at installation and cannot be changed. You cannot add different instance types as nodes if they have a different device naming schema. For example, if you configured the /var partition with the default AWS device name for m4.large instances, dev/xvdb , you cannot directly add an AWS m5.large instance, as m5.large instances use a /dev/nvme1n1 device by default. The device might fail to partition due to the different naming schema. The procedure in this section shows how to add a new Red Hat Enterprise Linux CoreOS (RHCOS) compute node with an instance that uses a different device name from what was configured at installation. You create a custom user data secret and configure a new compute machine set. These steps are specific to an AWS cluster. The principles apply to other cloud deployments also. However, the device naming schema is different for other deployments and should be determined on a per-case basis. Procedure On a command line, change to the openshift-machine-api namespace: USD oc project openshift-machine-api Create a new secret from the worker-user-data secret: Export the userData section of the secret to a text file: USD oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt Edit the text file to add the storage , filesystems , and systemd stanzas for the partitions you want to use for the new node. You can specify any Ignition configuration parameters as needed. Note Do not change the values in the ignition stanza. { "ignition": { "config": { "merge": [ { "source": "https:...." 
} ] }, "security": { "tls": { "certificateAuthorities": [ { "source": "data:text/plain;charset=utf-8;base64,.....==" } ] } }, "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/nvme1n1", 1 "partitions": [ { "label": "var", "sizeMiB": 50000, 2 "startMiB": 0 3 } ] } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var", 4 "format": "xfs", 5 "path": "/var" 6 } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var\nWhat=/dev/disk/by-partlabel/var\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", "enabled": true, "name": "var.mount" } ] } } 1 Specifies an absolute path to the AWS block device. 2 Specifies the size of the data partition in Mebibytes. 3 Specifies the start of the partition in Mebibytes. When adding a data partition to the boot disk, a minimum value of 25000 MB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 4 Specifies an absolute path to the /var partition. 5 Specifies the filesystem format. 6 Specifies the mount-point of the filesystem while Ignition is running relative to where the root filesystem will be mounted. This is not necessarily the same as where it should be mounted in the real root, but it is encouraged to make it the same. 7 Defines a systemd mount unit that mounts the /dev/disk/by-partlabel/var device to the /var partition. Extract the disableTemplating section from the work-user-data secret to a text file: USD oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt Create the new user data secret file from the two text files. This user data secret passes the additional node partition information in the userData.txt file to the newly created node. USD oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt Create a new compute machine set for the new node: Create a new compute machine set YAML file, similar to the following, which is configured for AWS. Add the required partitions and the newly-created user data secret: Tip Use an existing compute machine set as a template and change the parameters as needed for the new node. 
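One way to obtain such a template is to export an existing compute machine set and edit the copy; the machine set name in the following sketch is a placeholder:

$ oc get machineset <existing_machineset_name> -n openshift-machine-api -o yaml > new-machineset.yaml

Remove the status stanza and server-generated metadata fields such as uid, resourceVersion, and creationTimestamp from the exported file, and update the machine set name and the userDataSecret reference before you apply it.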
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4 1 Specifies a name for the new node. 2 Specifies an absolute path to the AWS block device, here an encrypted EBS volume. 3 Optional. Specifies an additional EBS volume. 4 Specifies the user data secret file. Create the compute machine set: USD oc create -f <file-name>.yaml The machines might take a few moments to become available. Verify that the new partition and nodes are created: Verify that the compute machine set is created: USD oc get machineset Example output NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1 1 This is the new compute machine set. Verify that the new node is created: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.26.0 ip-10-0-146-113.ec2.internal Ready master 127m v1.26.0 ip-10-0-153-35.ec2.internal Ready worker 118m v1.26.0 ip-10-0-176-58.ec2.internal Ready master 126m v1.26.0 ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.26.0 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.26.0 ip-10-0-245-59.ec2.internal Ready worker 116m v1.26.0 1 This is new new node. Verify that the custom /var partition is created on the new node: USD oc debug node/<node-name> -- chroot /host lsblk For example: USD oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1 1 The nvme1n1 device is mounted to the /var partition. Additional resources For more information on how OpenShift Container Platform uses disk partitioning, see Disk partitioning . 7.3. Deploying machine health checks Understand and deploy machine health checks. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. 
Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 7.3.1. About machine health checks Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 7.3.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . Additional resources About control plane machine sets 7.3.2. Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types, and other than bare metal, resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. 
If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 7.3.2.1. Short-circuiting machine health check remediation Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple compute machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish. The maxUnhealthy field can be set as either an integer or percentage. There are different remediation implementations depending on the maxUnhealthy value. 7.3.2.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 7.3.2.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 
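You can see how these thresholds work out for a running check by inspecting the resource status, which reports the expected machine count, the number currently considered healthy, and how many remediations are still allowed. The resource name in the following sketch is a placeholder, and you should confirm the status field names against the MachineHealthCheck CRD in your cluster:

$ oc get machinehealthcheck <name> -n openshift-machine-api \
    -o jsonpath='{.status.expectedMachines}{" expected, "}{.status.currentHealthy}{" healthy, "}{.status.remediationsAllowed}{" remediations allowed"}{"\n"}'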
7.3.3. Creating a machine health check resource You can create a MachineHealthCheck resource for machine sets in your cluster. Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: USD oc apply -f healthcheck.yml 7.3.4. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machine -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines 7.3.5. Understanding the difference between compute machine sets and the machine config pool MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider. The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades. The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool. The NodeSelector object can be replaced with a reference to the MachineSet object. 7.4. Recommended node host practices The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . 
When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . The value of the podsPerCore parameter cannot exceed the value of the maxPods parameter. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 7.4.1. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . Note If you are applying a kubelet or container runtime config to a custom machine config pool, the custom role in the machineConfigSelector must match the name of the custom machine config pool. 
For example, because the following custom machine config pool is named infra , the custom role must also be infra : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} # ... If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-kubelet-config 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example to show how to configure the maximum number of pods per node, the maximum PIDs per node, and the maximum container log size size on the worker nodes. Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1 1 If a label has been added it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-kubelet-config Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Configure the worker nodes as needed: Create a YAML file similar to the following that contains the kubelet configuration: Important Kubelet configurations that target a specific machine config pool also affect any dependent pools. For example, creating a kubelet configuration for the pool containing worker nodes will also apply to any subset pools, including the pool containing infrastructure nodes. To avoid this, you must create a new machine config pool with a selection expression that only includes worker nodes, and have your kubelet configuration target this new pool. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. For example: Use podPidsLimit to set the maximum number of PIDs in any pod. Use containerLogMaxSize to set the maximum size of the container log file before it is rotated. Use maxPods to set the maximum pods per node. 
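If you want to confirm the values a node's kubelet is currently running with, before or after applying a change, one option is to query the kubelet configz endpoint through the API server; in the following sketch the node name is a placeholder and jq is used only for readability:

$ oc get --raw "/api/v1/nodes/<node_name>/proxy/configz" \
    | jq '.kubeletconfig | {maxPods, podPidsLimit, containerLogMaxSize}'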
Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=set-kubelet-config Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verification Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-kubelet-config 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object. Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-kubelet-config -o yaml This should show a status of True and type:Success , as shown in the following example: spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 7.4.2. Modifying the number of unavailable worker nodes By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process. Procedure Edit the worker machine config pool: USD oc edit machineconfigpool worker Add the maxUnavailable field and set the value: spec: maxUnavailable: <node_count> Important When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster. 7.4.3. Control plane node sizing The control plane node resource requirements depend on the number and type of nodes and objects in the cluster. The following control plane node size recommendations are based on the results of a control plane density focused testing, or Cluster-density . 
This test creates the following objects across a given number of namespaces: 1 image stream 1 build 5 deployments, with 2 pod replicas in a sleep state, mounting 4 secrets, 4 config maps, and 1 downward API volume each 5 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the deployments 1 route pointing to the first of the services 10 secrets containing 2048 random string characters 10 config maps containing 2048 random string characters Number of worker nodes Cluster-density (namespaces) CPU cores Memory (GB) 24 500 4 16 120 1000 8 32 252 4000 16, but 24 if using the OVN-Kubernetes network plug-in 64, but 128 if using the OVN-Kubernetes network plug-in 501, but untested with the OVN-Kubernetes network plug-in 4000 16 96 The data from the table above is based on an OpenShift Container Platform running on top of AWS, using r5.4xlarge instances as control-plane nodes and m5.2xlarge instances as worker nodes. On a large and dense cluster with three control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, underlying infrastructure, or intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to increase in the resource usage. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase. Operator Lifecycle Manager (OLM ) runs on the control plane nodes and its memory footprint depends on the number of namespaces and user installed operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. Following data points are based on the results from cluster maximums testing. Number of namespaces OLM memory at idle state (GB) OLM memory with 5 user operators installed (GB) 500 0.823 1.7 1000 1.2 2.5 1500 1.7 3.2 2000 2 4.4 3000 2.7 5.6 4000 3.8 7.6 5000 4.2 9.02 6000 5.8 11.3 7000 6.6 12.9 8000 6.9 14.8 9000 8 17.7 10,000 9.9 21.6 Important You can modify the control plane node size in a running OpenShift Container Platform 4.13 cluster for the following configurations only: Clusters installed with a user-provisioned installation method. AWS clusters installed with an installer-provisioned infrastructure installation method. Clusters that use a control plane machine set to manage control plane machines. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation. Important The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShift SDN as the network plugin. 
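To compare your own control plane consumption against the 60% guidance above, you can sample current usage on the control plane nodes; a minimal sketch, assuming the cluster metrics stack is available:

$ oc adm top nodes -l node-role.kubernetes.io/master=

Sustained usage near the recommended ceiling on any of the three control plane nodes is a signal to move to larger instances before an upgrade or a node outage amplifies the load on the remaining nodes.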
Note In OpenShift Container Platform 4.13, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and versions. The sizes are determined taking that into consideration. 7.4.4. Setting up CPU Manager Procedure Optional: Label a node: # oc label node perf-node.example.com cpumanager=true Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled: # oc edit machineconfigpool worker Add a label to the worker machine config pool: metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. Check for the merged kubelet config: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the worker for the updated kubelet.conf : # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verify that the pod is scheduled to the node that you labeled: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that the cgroups are set up correctly. 
Get the process ID (PID) of the pause process: # ββinit.scope β ββ1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 ββkubepods.slice ββkubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice β ββcrio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope β ββ32706 /pause Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice . Pods of other QoS tiers end up in child cgroups of kubepods : # cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus tasks` ; do echo -n "USDi "; cat USDi ; done Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod: # cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 7.5. Huge pages Understand and configure huge pages. 7.5.1. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. 
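You can check which huge page sizes a node's kernel exposes, and how many pages are currently allocated, from a debug shell on the node; the node name in the following sketch is a placeholder:

$ oc debug node/<node_name> -- chroot /host \
    sh -c 'ls /sys/kernel/mm/hugepages/ && grep -i huge /proc/meminfo'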
Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. 7.5.2. How huge pages are consumed by apps Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size. Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size> , where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi . Unlike CPU or memory, huge pages do not support over-commitment. apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: "1Gi" cpu: "1" volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly. Allocating huge pages of a specific size Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size> . The <size> value must be specified in bytes with an optional scale suffix [ kKmMgG ]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter. Huge page requirements Huge page requests must equal the limits. This is the default if limits are specified, but requests are not. Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration. EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request. Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group . 7.5.3. Configuring huge pages at boot time Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes. Procedure To minimize node reboots, the order of the steps below needs to be followed: Label all nodes that need the same huge pages setting by a label. 
USD oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp= Create a file with the following content and name it hugepages-tuned-boottime.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: "worker-hp" priority: 30 profile: openshift-node-hugepages 1 Set the name of the Tuned resource to hugepages . 2 Set the profile section to allocate huge pages. 3 Note the order of parameters is important as some platforms support huge pages of various sizes. 4 Enable machine config pool based matching. Create the Tuned hugepages object USD oc create -f hugepages-tuned-boottime.yaml Create a file with the following content and name it hugepages-mcp.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: "" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: "" Create the machine config pool: USD oc create -f hugepages-mcp.yaml Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated. USD oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}" 100Mi Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. 7.6. Understanding device plugins The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. A device plugin is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources. Any device plugin must support following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. 
Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} } Example device plugins Nvidia GPU device plugin for COS-based operating system Nvidia official GPU device plugin Solarflare device plugin KubeVirt device plugins: vfio and kvm Kubernetes device plugin for IBM Crypto Express (CEX) cards Note For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 7.6.1. Methods for deploying a device plugin Daemon sets are the recommended approach for device plugin deployments. Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugin/ on the node to serve RPCs from Device Manager. Since device plugins must manage hardware resources, access to the host file system, as well as socket creation, they must be run in a privileged security context. More specific details regarding deployment steps can be found with each device plugin implementation. 7.6.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource . Upon start, the device plugin registers itself with Device Manager invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes ListAndWatch remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager will keep watching on the stream for new updates from the plugin. On the plugin side, the plugin will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection. While handling a new pod admission request, Kubelet passes requested Extended Resources to the Device Manager for device allocation. Device Manager checks in its database to verify if a corresponding plugin exists or not. If the plugin exists and there are free allocatable devices as well as per local cache, Allocate RPC is invoked at that particular device plugin. Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 7.6.3. Enabling Device Manager Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. 
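For context before enabling Device Manager, the following sketch shows how a pod consumes a device once a plugin advertises it: the device is requested through limits like any other extended resource. The resource name example.com/device and the pod details are illustrative placeholders, not values defined by the product:

apiVersion: v1
kind: Pod
metadata:
  name: device-consumer
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "inf"]
    resources:
      limits:
        example.com/device: 1 # one device advertised by a device plugin

Because extended resources do not support overcommitment, the request is implicitly equal to the limit.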
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. View the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to the CR. 2 Enter the label from the Machine Config Pool. 3 Set DevicePlugins to true . Create the Device Manager: USD oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled. 7.7. Taints and tolerations Understand and work with taints and tolerations. 7.7.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification apiVersion: v1 kind: Node metadata: name: my-node #... spec: taints: - effect: NoExecute key: key1 value: value1 #... Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Taints and tolerations consist of a key, value, and effect. Table 7.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default.
For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 7.7.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 #... 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . 
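To confirm that the taint was applied, you can inspect the node specification. A quick check, assuming the node is named node1:

USD oc get node node1 -o jsonpath='{.spec.taints}'

The command prints the taints array from the node specification, which should include the key1=value1:NoExecute entry added above.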
Note If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 7.7.3. Adding taints and tolerations using a compute machine set You can add taints to nodes using a compute machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a compute machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a compute machine set specification apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset #... spec: #... template: #... spec: taints: - effect: NoExecute key: key1 value: value1 #... This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. Scale down the compute machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the compute machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 7.7.4. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster. If you want ensure the pods are scheduled to only those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. 
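For illustration, a pod bound to such dedicated nodes combines a toleration for the taint with a node affinity rule for the matching label. The following is a minimal sketch; the pod name, container image, and the dedicated=groupName label are assumptions that mirror the example in the procedure below:

apiVersion: v1
kind: Pod
metadata:
  name: dedicated-pod
spec:
  tolerations:
  - key: "dedicated"        # tolerates the dedicated=groupName:NoSchedule taint
    operator: "Equal"
    value: "groupName"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated   # restricts scheduling to nodes labeled dedicated=groupName
            operator: In
            values:
            - groupName
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "inf"]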
Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my-node #... spec: taints: - key: dedicated value: groupName effect: NoSchedule #... Add a toleration to the pods by writing a custom admission controller. 7.7.5. Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 #... Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my_node #... spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #... 7.7.6. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... 7.8. Topology Manager Understand and work with Topology Manager. 7.8.1. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. 
restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure. 7.8.2. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 7.8.3. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage. 7.9. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. 
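For illustration, the scheduler places the following container based only on its requests (1Gi of memory and 500m of CPU), even though the container can consume up to its limits at run time. The pod name, image, and values are assumptions for the sketch:

apiVersion: v1
kind: Pod
metadata:
  name: overcommit-example
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "inf"]
    resources:
      requests:
        memory: "1Gi"  # scheduling is based on these values
        cpu: "500m"
      limits:
        memory: "2Gi"  # enforced on the node; the gap between request and limit is the overcommit
        cpu: "1"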
If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 100% overcommitted. 7.10. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. The Operator modifies the ratio between the requests and limits that are set on developer containers. In conjunction with a per-project limit range that specifies limits and defaults, you can achieve the desired level of overcommit. You must install the Cluster Resource Override Operator by using the OpenShift Container Platform console or CLI as shown in the following sections. After you deploy the Cluster Resource Override Operator, the Operator modifies all new pods in specific namespaces. The Operator does not edit pods that existed before you deployed the Operator. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, you can enable overrides on a per-project basis by applying the following label to the Namespace object for each project where you want the overrides to apply. For example, you can configure override so that infrastructure components are not subject to the overrides. 
apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" # ... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. For example, a pod has the following resources limits: apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: "512Mi" cpu: "2000m" # ... The Cluster Resource Override Operator intercepts the original pod request, then overrides the resources according to the configuration set in the ClusterResourceOverride object. apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: "1" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi # ... 1 The CPU limit has been overridden to 1 because the limitCPUToMemoryPercent parameter is set to 200 in the ClusterResourceOverride object. As such, 200% of the memory limit, 512Mi in CPU terms, is 1 CPU core. 2 The CPU request is now 250m because the cpuRequestToLimit is set to 25 in the ClusterResourceOverride object. As such, 25% of the 1 CPU core is 250m. 7.10.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create ClusterResourceOverride . On the Create ClusterResourceOverride page, click YAML view and edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . 
Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 7.10.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "stable" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. 
USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 7.10.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 # ... 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. 
Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 # ... 1 Add this label to each project. 7.11. Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects. 7.11.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 7.11.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specified a limit, it will be throttled not to use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 7.11.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 7.11.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. A pod is designated as one of three QoS classes with decreasing order of priority: Table 7.2. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the pod is classified as BestEffort . 
Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory. 7.11.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to exclude pods from lower QoS classes from using resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 7.11.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they requested in their scheduling request. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 7.11.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide the best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory.
To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 1 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 7.11.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel. If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: cpuCfsQuota: false 3 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Set the cpuCfsQuota parameter to false . Run the following command to create the CR: USD oc create -f <file_name>.yaml 7.11.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 7.11.7.
Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node, run the following command on that node: USD sysctl -w vm.overcommit_memory=0 7.12. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 7.12.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure To disable overcommitment in a project: Create or edit the namespace object file. Add the following annotation: apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: "false" 1 # ... 1 Setting this annotation to false disables overcommit for this namespace. 7.13. Freeing node resources using garbage collection Understand and use garbage collection. 7.13.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 7.3. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. 7.13.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods.
OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. The default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 7.4. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . This value must be greater than the imageGCLowThresholdPercent value. imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . This value must be less than the imageGCHighThresholdPercent value. Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the previous runs. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 7.13.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction.
Sample configuration for a container garbage collection CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard . 4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value. 10 For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). This value must be less than the imageGCHighThresholdPercent value. Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as 'true` until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 7.14. Using the Node Tuning Operator Understand and use the Node Tuning Operator. Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. 
It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 7.14.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 7.14.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. 
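To change the state, add the spec.managementState field to the default Tuned CR. One way to do this, shown here only as a sketch, is with oc patch:

USD oc patch tuned.tuned.openshift.io/default -n openshift-cluster-node-tuning-operator --type merge -p '{"spec":{"managementState":"Unmanaged"}}'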
Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as a logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as a logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered.
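Putting the profile: and recommend: sections together, a minimal custom CR might look like the following sketch. The profile name, sysctl setting, and node label are illustrative placeholders, not values shipped with the Operator:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: custom-sysctl
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: custom-sysctl
    data: |
      [main]
      summary=Raise the inotify watch limit on labeled nodes
      include=openshift-node
      [sysctl]
      fs.inotify.max_user_watches=65536
  recommend:
  - match:
    - label: example.com/tuning
      value: "inotify"
    priority: 20
    profile: custom-sysctl
Because this recommend entry uses node label matching rather than machineConfigLabels, the setting is applied by the TuneD daemon directly and no MachineConfig or node reboot is involved.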
Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, the openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set the openshift-node profile if no other profile with a higher priority matches on a given node. Example: machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on an OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load the provider-<cloud-provider> profile if such a profile exists.
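To see which provider prefix applies to your nodes, and therefore which provider-<cloud-provider> profile name TuneD would attempt to load, you can inspect the spec.providerID values; for example, values beginning with gce:// correspond to a provider-gce profile. This is a read-only sketch and the output format depends on your cloud provider:
USD oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'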
The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 7.14.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 7.14.4. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader Some of the dynamic tuning functionality provided by these plugins is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 7.15. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change.
Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, podsPerCore is set to 10 and maxPods is set to 250. This means that unless the node has 25 cores or more, podsPerCore is the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False | [
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.13-for-rhel-8-x86_64-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.13-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0",
"oc project openshift-machine-api",
"oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"https:....\" } ] }, \"security\": { \"tls\": { \"certificateAuthorities\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,.....==\" } ] } }, \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/nvme1n1\", 1 \"partitions\": [ { \"label\": \"var\", \"sizeMiB\": 50000, 2 \"startMiB\": 0 3 } ] } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var\", 4 \"format\": \"xfs\", 5 \"path\": \"/var\" 6 } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var\\nWhat=/dev/disk/by-partlabel/var\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", \"enabled\": true, \"name\": \"var.mount\" } ] } }",
"oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt",
"oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4",
"oc create -f <file-name>.yaml",
"oc get machineset",
"NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.26.0 ip-10-0-146-113.ec2.internal Ready master 127m v1.26.0 ip-10-0-153-35.ec2.internal Ready worker 118m v1.26.0 ip-10-0-176-58.ec2.internal Ready master 126m v1.26.0 ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.26.0 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.26.0 ip-10-0-245-59.ec2.internal Ready worker 116m v1.26.0",
"oc debug node/<node-name> -- chroot /host lsblk",
"oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=set-kubelet-config",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-kubelet-config 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-kubelet-config -o yaml",
"spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc edit machineconfigpool worker",
"spec: maxUnavailable: <node_count>",
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"ββinit.scope β ββ1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 ββkubepods.slice ββkubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice β ββcrio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope β ββ32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages",
"oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages",
"oc create -f hugepages-tuned-boottime.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"",
"oc create -f hugepages-mcp.yaml",
"oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit machineset <machineset>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"",
"apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3",
"oc create -f <file_name>.yaml",
"sysctl -w vm.overcommit_memory=0",
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" 1",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #",
"oc create -f <file_name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #",
"oc create -f <file_name>.yaml",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/post-installation_configuration/post-install-node-tasks |
Chapter 6. Managing image streams | Chapter 6. Managing image streams Image streams provide a means of creating and updating container images in an ongoing way. As improvements are made to an image, tags can be used to assign new version numbers and keep track of changes. This document describes how image streams are managed. 6.1. Why use imagestreams An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes. Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively. For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image. However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known-good, image. The source images can be stored in any of the following: OpenShift Container Platform's integrated registry. An external registry, for example registry.redhat.io or quay.io. Other image streams in the OpenShift Container Platform cluster. When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image. The image stream metadata is stored in the etcd instance along with other cluster information. Using image streams has several significant benefits: You can tag, roll back a tag, and quickly deal with images, without having to re-push using the command line. You can trigger builds and deployments when a new image is pushed to the registry. Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects. You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration. You can share images using fine-grained access control and quickly distribute images across your teams. If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application does not break unexpectedly. You can configure security around who can view and use the images through permissions on the image stream objects. Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams. 6.2. Configuring image streams An ImageStream object file contains the following elements.
ImageStream object definition apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5 1 The name of the image stream. 2 Docker repository path where new images can be pushed to add or update them in this image stream. 3 The SHA identifier that this image stream tag currently references. Resources that reference this image stream tag use this identifier. 4 The SHA identifier that this image stream tag previously referenced. Can be used to roll back to an older image. 5 The image stream tag name. 6.3. Image stream images An image stream image points from within an image stream to a particular image ID. Image stream images allow you to retrieve metadata about an image from a particular image stream where it is tagged. Image stream image objects are automatically created in OpenShift Container Platform whenever you import or tag an image into the image stream. You should never have to explicitly define an image stream image object in any image stream definition that you use to create image streams. The image stream image consists of the image stream name and image ID from the repository, delimited by an @ sign: <image-stream-name>@<image-id> To refer to the image in the ImageStream object example, the image stream image looks like: origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 6.4. Image stream tags An image stream tag is a named pointer to an image in an image stream. It is abbreviated as istag . An image stream tag is used to reference or retrieve an image for a given image stream and tag. Image stream tags can reference any local or externally managed image. It contains a history of images represented as a stack of all images the tag ever pointed to. Whenever a new or existing image is tagged under a particular image stream tag, it is placed at the first position in the history stack. The image previously occupying the top position is available at the second position. This allows for easy rollbacks to make tags point to historical images again. The following image stream tag is from an ImageStream object: Image stream tag with two images in its history kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: my-image-stream # ... tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest # ... Image stream tags can be permanent tags or tracking tags.
Permanent tags are version-specific tags that point to a particular version of an image, such as Python 3.5. Tracking tags are reference tags that follow another image stream tag and can be updated to change which image they follow, like a symlink. The new image levels that a tracking tag picks up in this way are not guaranteed to be backwards-compatible. For example, the latest image stream tags that ship with OpenShift Container Platform are tracking tags. This means consumers of the latest image stream tag are updated to the newest level of the framework provided by the image when a new level becomes available. A latest image stream tag pointing to v3.10 can be changed to v3.11 at any time. It is important to be aware that these latest image stream tags behave differently than the Docker latest tag. The latest image stream tag, in this case, does not point to the latest image in the Docker repository. It points to another image stream tag, which might not be the latest version of an image. For example, if the latest image stream tag points to v3.10 of an image, when the 3.11 version is released, the latest tag is not automatically updated to v3.11 , and remains at v3.10 until it is manually updated to point to a v3.11 image stream tag. Note Tracking tags are limited to a single image stream and cannot reference other image streams. You can create your own image stream tags for your own needs. The image stream tag is composed of the name of the image stream and a tag, separated by a colon: <image-stream-name>:<tag> For example, to refer to the sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d image in the ImageStream object example earlier, the image stream tag would be: origin-ruby-sample:latest 6.5. Image stream change triggers Image stream triggers allow your builds and deployments to be automatically invoked when a new version of an upstream image is available. For example, builds and deployments can be automatically started when an image stream tag is modified. This is achieved by monitoring that particular image stream tag and notifying the build or deployment when a change is detected. 6.6. Image stream mapping When the integrated registry receives a new image, it creates and sends an image stream mapping to OpenShift Container Platform, providing the image's project, name, tag, and image metadata. Note Configuring image stream mappings is an advanced feature. This information is used to create a new image, if it does not already exist, and to tag the image into the image stream. OpenShift Container Platform stores complete metadata about each image, such as commands, entry point, and environment variables. Images in OpenShift Container Platform are immutable and the maximum name length is 63 characters.
The following image stream mapping example results in an image being tagged as test/origin-ruby-sample:latest : Image stream mapping object definition apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. /opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: "1001" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: "1001" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: "1.0" kind: DockerImage dockerImageMetadataVersion: "1.0" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 6.7. Working with image streams The following sections describe how to use image streams and image stream tags. 6.7.1. Getting information about image streams You can get general information about the image stream and detailed information about all the tags it is pointing to. Procedure To get general information about the image stream and detailed information about all the tags it is pointing to, enter the following command: USD oc describe is/<image-name> For example: USD oc describe is/python Example output Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago To get all of the information available about a particular image stream tag, enter the following command: USD oc describe istag/<image-stream>:<tag-name> For example: USD oc describe istag/python:latest Example output Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801 Note More information is output than shown. Enter the following command to discover which architecture or operating system that an image stream tag supports: USD oc get istag <image-stream-tag> -ojsonpath="{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\n'}{end}" For example: USD oc get istag busybox:latest -ojsonpath="{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\n'}{end}" Example output linux/amd64 linux/arm linux/arm64 linux/386 linux/mips64le linux/ppc64le linux/riscv64 linux/s390x 6.7.2. Adding tags to an image stream You can add additional tags to image streams. Procedure Add a tag that points to one of the existing tags by using the `oc tag`command: USD oc tag <image-name:tag1> <image-name:tag2> For example: USD oc tag python:3.5 python:latest Example output Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25. Confirm the image stream has two tags, one, 3.5 , pointing at the external container image and another tag, latest , pointing to the same image because it was created based on the first tag. 
USD oc describe is/python Example output Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago 6.7.3. Adding tags for an external image You can add tags for external images. Procedure Add tags pointing to internal or external images, by using the oc tag command for all tag-related operations: USD oc tag <repository/image> <image-name:tag> For example, this command maps the docker.io/python:3.6.0 image to the 3.6 tag in the python image stream. USD oc tag docker.io/python:3.6.0 python:3.6 Example output Tag python:3.6 set to docker.io/python:3.6.0. If the external image is secured, you must create a secret with credentials for accessing that registry. 6.7.4. Updating image stream tags You can update a tag to reflect another tag in an image stream. Procedure Update a tag: USD oc tag <image-name:tag> <image-name:latest> For example, the following updates the latest tag to reflect the 3.6 tag in an image stream: USD oc tag python:3.6 python:latest Example output Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f. 6.7.5. Removing image stream tags You can remove old tags from an image stream. Procedure Remove old tags from an image stream: USD oc tag -d <image-name:tag> For example: USD oc tag -d python:3.6 Example output Deleted tag default/python:3.6 See Removing deprecated image stream tags from the Cluster Samples Operator for more information on how the Cluster Samples Operator handles deprecated image stream tags. 6.7.6. Configuring periodic importing of image stream tags When working with an external container image registry, to periodically re-import an image, for example to get the latest security updates, you can use the --scheduled flag. Procedure Schedule importing images: USD oc tag <repository/image> <image-name:tag> --scheduled For example: USD oc tag docker.io/python:3.6.0 python:3.6 --scheduled Example output Tag python:3.6 set to import docker.io/python:3.6.0 periodically. This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default. To remove the periodic check, re-run the above command but omit the --scheduled flag. This resets its behavior to the default. USD oc tag <repository/image> <image-name:tag> 6.8. Importing and working with images and image streams The following sections describe how to import, and work with, image streams. 6.8.1. Importing images and image streams from private registries An image stream can be configured to import tag and image metadata from private image registries requiring authentication. This procedure applies if you change the registry that the Cluster Samples Operator uses to pull content from to something other than registry.redhat.io . Note When importing from insecure or secure registries, the registry URL defined in the secret must include the :80 port suffix or the secret is not used when attempting to import from the registry.
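To illustrate the preceding note with a minimal, hypothetical example (the registry.example.com:80 host, the base64 credentials, the config.json file name, and the example-registry-secret name are placeholders, not values from this procedure), the registry entry in the credentials file carries the explicit port, and the secret is then created from that file:

{ "auths": { "registry.example.com:80": { "auth": "dXNlcjpwYXNzd29yZA==", "email": "user@example.com" } } }

oc create secret generic example-registry-secret --from-file=.dockerconfigjson=config.json --type=kubernetes.io/dockerconfigjson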
Procedure You must create a secret object that is used to store your credentials by entering the following command: USD oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson After the secret is configured, create the new image stream or enter the oc import-image command: USD oc import-image <imagestreamtag> --from=<image> --confirm During the import process, OpenShift Container Platform picks up the secrets and provides them to the remote party. 6.8.1.1. Allowing pods to reference images from other secured registries To pull a secured container from other private or secured registries, you must create a pull secret from your container client credentials, such as Docker or Podman, and add it to your service account. Both Docker and Podman use a configuration file to store authentication details to log in to secured or insecure registry: Docker : By default, Docker uses USDHOME/.docker/config.json . Podman : By default, Podman uses USDHOME/.config/containers/auth.json . These files store your authentication information if you have previously logged in to a secured or insecure registry. Note Both Docker and Podman credential files and the associated pull secret can contain multiple references to the same registry if they have unique paths, for example, quay.io and quay.io/<example_repository> . However, neither Docker nor Podman support multiple entries for the exact same registry path. Example config.json file { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io/repository-main":{ "auth":"b3Blb=", "email":"[email protected]" } } } Example pull secret apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: "2021-09-09T19:10:11Z" name: pull-secret namespace: default resourceVersion: "37676" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque Procedure Create a secret from an existing authentication file: For Docker clients using .docker/config.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson For Podman clients using .config/containers/auth.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=<path/to/.config/containers/auth.json> \ --type=kubernetes.io/podmanconfigjson If you do not already have a Docker credentials file for the secured registry, you can create a secret by running: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<user_name> \ --docker-password=<password> \ --docker-email=<email> To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default : USD oc secrets link default <pull_secret_name> --for=pull 6.8.2. Working with manifest lists You can import a single sub-manifest, or all manifests, of a manifest list when using oc import-image or oc tag CLI commands by adding the --import-mode flag. Refer to the commands below to create an image stream that includes a single sub-manifest or multi-architecture images. 
Procedure Create an image stream that includes multi-architecture images, and set the import mode to PreserveOriginal , by entering the following command: USD oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> \ --import-mode='PreserveOriginal' --reference-policy=local --confirm Example output --- Arch: <none> Manifests: linux/amd64 sha256:6e325b86566fafd3c4683a05a219c30c421fbccbf8d87ab9d20d4ec1131c3451 linux/arm64 sha256:d8fad562ffa75b96212c4a6dc81faf327d67714ed85475bf642729703a2b5bf6 linux/ppc64le sha256:7b7e25338e40d8bdeb1b28e37fef5e64f0afd412530b257f5b02b30851f416e1 --- Alternatively, enter the following command to import an image with the Legacy import mode, which discards manifest lists and imports a single sub-manifest: USD oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> \ --import-mode='Legacy' --confirm Note The --import-mode= default value is Legacy . Excluding this value, or failing to specify either Legacy or PreserveOriginal , imports a single sub-manifest. An invalid import mode returns the following error: error: valid ImportMode values are Legacy or PreserveOriginal . Limitations Working with manifest lists has the following limitations: In some cases, users might want to use sub-manifests directly. When oc adm prune images is run, or the CronJob pruner runs, they cannot detect when a sub-manifest list is used. As a result, an administrator using oc adm prune images , or the CronJob pruner, might delete entire manifest lists, including sub-manifests. To avoid this limitation, you can use the manifest list by tag or by digest instead. 6.8.2.1. Configuring periodic importing of manifest lists To periodically re-import a manifest list, you can use the --scheduled flag. Procedure Set the image stream to periodically update the manifest list by entering the following command: USD oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> \ --import-mode='PreserveOriginal' --scheduled=true 6.8.2.2. Configuring SSL/TLS when importing manifest lists To configure SSL/TLS when importing a manifest list, you can use the --insecure flag. Procedure Set --insecure=true so that importing a manifest list skips SSL/TLS verification. For example: USD oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> \ --import-mode='PreserveOriginal' --insecure=true 6.8.3. Specifying architecture for --import-mode You can swap your imported image stream between multi-architecture and single architecture by excluding or including the --import-mode= flag. Procedure Run the following command to update your image stream from multi-architecture to single architecture by excluding the --import-mode= flag: USD oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> Run the following command to update your image stream from single-architecture to multi-architecture: USD oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> \ --import-mode='PreserveOriginal' 6.8.4. Configuration fields for --import-mode The following table describes the options available for the --import-mode= flag: Parameter Description Legacy The default option for --import-mode . When specified, the manifest list is discarded, and a single sub-manifest is imported.
The platform is chosen in the following order of priority: Tag annotations, Control plane architecture, Linux/AMD64, and finally the first manifest in the list. PreserveOriginal When specified, the original manifest is preserved. For manifest lists, the manifest list and all of its sub-manifests are imported. | [
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5",
"<image-stream-name>@<image-id>",
"origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d",
"kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: my-image-stream tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest",
"<imagestream name>:<tag>",
"origin-ruby-sample:latest",
"apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. /opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: \"1001\" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: \"1001\" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: \"1.0\" kind: DockerImage dockerImageMetadataVersion: \"1.0\" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d",
"oc describe is/<image-name>",
"oc describe is/python",
"Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago",
"oc describe istag/<image-stream>:<tag-name>",
"oc describe istag/python:latest",
"Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801",
"oc get istag <image-stream-tag> -ojsonpath=\"{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\\n'}{end}\"",
"oc get istag busybox:latest -ojsonpath=\"{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\\n'}{end}\"",
"linux/amd64 linux/arm linux/arm64 linux/386 linux/mips64le linux/ppc64le linux/riscv64 linux/s390x",
"oc tag <image-name:tag1> <image-name:tag2>",
"oc tag python:3.5 python:latest",
"Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.",
"oc describe is/python",
"Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago",
"oc tag <repository/image> <image-name:tag>",
"oc tag docker.io/python:3.6.0 python:3.6",
"Tag python:3.6 set to docker.io/python:3.6.0.",
"oc tag <image-name:tag> <image-name:latest>",
"oc tag python:3.6 python:latest",
"Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.",
"oc tag -d <image-name:tag>",
"oc tag -d python:3.6",
"Deleted tag default/python:3.6",
"oc tag <repository/image> <image-name:tag> --scheduled",
"oc tag docker.io/python:3.6.0 python:3.6 --scheduled",
"Tag python:3.6 set to import docker.io/python:3.6.0 periodically.",
"oc tag <repositiory/image> <image-name:tag>",
"oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson",
"oc import-image <imagestreamtag> --from=<image> --confirm",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"oc create secret generic <pull_secret_name> --from-file=<path/to/.config/containers/auth.json> --type=kubernetes.io/podmanconfigjson",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>",
"oc secrets link default <pull_secret_name> --for=pull",
"oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --reference-policy=local --confirm",
"--- Arch: <none> Manifests: linux/amd64 sha256:6e325b86566fafd3c4683a05a219c30c421fbccbf8d87ab9d20d4ec1131c3451 linux/arm64 sha256:d8fad562ffa75b96212c4a6dc81faf327d67714ed85475bf642729703a2b5bf6 linux/ppc64le sha256:7b7e25338e40d8bdeb1b28e37fef5e64f0afd412530b257f5b02b30851f416e1 ---",
"oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='Legacy' --confirm",
"oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --scheduled=true",
"oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --insecure=true",
"oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name>",
"oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/images/managing-image-streams |
Chapter 3. Configuring certificates issued by ADCS for smart card authentication in IdM | Chapter 3. Configuring certificates issued by ADCS for smart card authentication in IdM To configure smart card authentication in IdM for users whose certificates are issued by Active Directory (AD) certificate services: Your deployment is based on cross-forest trust between Identity Management (IdM) and Active Directory (AD). You want to allow smart card authentication for users whose accounts are stored in AD. Certificates are created and stored in Active Directory Certificate Services (ADCS). For an overview of smart card authentication, see Understanding smart card authentication . Configuration is accomplished in the following steps: Copying CA and user certificates from Active Directory to the IdM server and client Configuring the IdM server and clients for smart card authentication using ADCS certificates Converting a PFX (PKCS#12) file to be able to store the certificate and private key into the smart card Configuring timeouts in the sssd.conf file Creating certificate mapping rules for smart card authentication Prerequisites Identity Management (IdM) and Active Directory (AD) trust is installed For details, see Installing trust between IdM and AD . Active Directory Certificate Services (ADCS) is installed and certificates for users are generated 3.1. Windows Server settings required for trust configuration and certificate usage You must configure the following on the Windows Server: Active Directory Certificate Services (ADCS) is installed Certificate Authority is created Optional: If you are using Certificate Authority Web Enrollment, the Internet Information Services (IIS) must be configured Export the certificate: Key must have 2048 bits or more Include a private key You will need a certificate in the following format: Personal Information Exchange - PKCS #12(.PFX) Enable certificate privacy 3.2. Copying certificates from Active Directory using sftp To be able to use smart card authentication, you need to copy the following certificate files: A root CA certificate in the CER format: adcs-winserver-ca.cer on your IdM server. A user certificate with a private key in the PFX format: aduser1.pfx on an IdM client. Note This procedure assumes that SSH access is allowed. If SSH is unavailable, the user must copy the file from the AD Server to the IdM server and client. Procedure Connect from the IdM server and copy the adcs-winserver-ca.cer root certificate to the IdM server: Connect from the IdM client and copy the aduser1.pfx user certificate to the client: Now the CA certificate is stored on the IdM server and the user certificate is stored on the client machine. 3.3. Configuring the IdM server and clients for smart card authentication using ADCS certificates You must configure the IdM (Identity Management) server and clients to be able to use smart card authentication in the IdM environment. IdM includes the ipa-advise scripts, which make all the necessary changes: Install necessary packages Configure IdM server and clients Copy the CA certificates into the expected locations You can run ipa-advise on your IdM server. Follow this procedure to configure your server and clients for smart card authentication: On an IdM server: Preparing the ipa-advise script to configure your IdM server for smart card authentication. On an IdM server: Preparing the ipa-advise script to configure your IdM client for smart card authentication.
On an IdM server: Applying the ipa-advise server script on the IdM server using the AD certificate. Moving the client script to the IdM client machine. On an IdM client: Applying the ipa-advise client script on the IdM client using the AD certificate. Prerequisites The certificate has been copied to the IdM server. Obtain the Kerberos ticket. Log in as a user with administration rights. Procedure On the IdM server, use the ipa-advise script for configuring a client: On the IdM server, use the ipa-advise script for configuring a server: On the IdM server, execute the script: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Copy the sc_client.sh script to the client system: Copy the Windows certificate to the client system: On the client system, run the client script: The CA certificate is installed in the correct format on the IdM server and client systems, and the next step is to copy the user certificates onto the smart card itself. 3.4. Converting the PFX file Before you store the PFX (PKCS#12) file into the smart card, you must: Convert the file to the PEM format Extract the private key and the certificate to two different files Prerequisites The PFX file is copied into the IdM client machine. Procedure On the IdM client, convert the PFX file into the PEM format: Extract the private key into a separate file: Extract the public certificate into a separate file: At this point, you can store the aduser1.key and aduser1.crt into the smart card. 3.5. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates and start the pcscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running 3.6. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pkcs15-init tool creates a new slot on the smart card.
Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 . Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the next step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 3.7. Configuring timeouts in sssd.conf Authentication with a smart card certificate might take longer than the default timeouts used by SSSD. Timeout expiration can be caused by: Slow reader Forwarding from a physical device into a virtual environment Too many certificates stored on the smart card Slow response from the OCSP (Online Certificate Status Protocol) responder if OCSP is used to verify the certificates In this case, you can prolong the following timeouts in the sssd.conf file, for example, to 60 seconds: p11_child_timeout krb5_auth_timeout Prerequisites You must be logged in as root. Procedure Open the sssd.conf file: Change the value of p11_child_timeout : Change the value of krb5_auth_timeout : Save the settings. Now, the interaction with the smart card is allowed to run for 1 minute (60 seconds) before authentication fails with a timeout. 3.8. Creating certificate mapping rules for smart card authentication If you want to use one certificate for a user who has accounts in AD (Active Directory) and in IdM (Identity Management), you can create a certificate mapping rule on the IdM server. After creating such a rule, the user is able to authenticate with their smart card in both domains. For details about certificate mapping rules, see Certificate mapping rules for configuring authentication . | [
"root@idmserver ~]# sftp [email protected] [email protected]'s password: Connected to [email protected]. sftp> cd <Path to certificates> sftp> ls adcs-winserver-ca.cer aduser1.pfx sftp> sftp> get adcs-winserver-ca.cer Fetching <Path to certificates>/adcs-winserver-ca.cer to adcs-winserver-ca.cer <Path to certificates>/adcs-winserver-ca.cer 100% 1254 15KB/s 00:00 sftp quit",
"sftp [email protected] [email protected]'s password: Connected to [email protected]. sftp> cd /<Path to certificates> sftp> get aduser1.pfx Fetching <Path to certificates>/aduser1.pfx to aduser1.pfx <Path to certificates>/aduser1.pfx 100% 1254 15KB/s 00:00 sftp quit",
"ipa-advise config-client-for-smart-card-auth > sc_client.sh",
"ipa-advise config-server-for-smart-card-auth > sc_server.sh",
"sh -x sc_server.sh adcs-winserver-ca.cer",
"scp sc_client.sh [email protected]:/root Password: sc_client.sh 100% 2857 1.6MB/s 00:00",
"scp adcs-winserver-ca.cer [email protected]:/root Password: adcs-winserver-ca.cer 100% 1254 952.0KB/s 00:00",
"sh -x sc_client.sh adcs-winserver-ca.cer",
"openssl pkcs12 -in aduser1.pfx -out aduser1_cert_only.pem -clcerts -nodes Enter Import Password:",
"openssl pkcs12 -in adduser1.pfx -nocerts -out adduser1.pem > aduser1.key",
"openssl pkcs12 -in adduser1.pfx -clcerts -nokeys -out aduser1_cert_only.pem > aduser1.crt",
"dnf -y install opensc gnutls-utils",
"systemctl start pcscd",
"systemctl status pcscd",
"pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. Please enter PIN [Security Officer PIN]:",
"pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name",
"pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name",
"pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init -F",
"vim /etc/sssd/sssd.conf",
"[pam] p11_child_timeout = 60",
"[domain/IDM.EXAMPLE.COM] krb5_auth_timeout = 60"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_smart_card_authentication/configuring-certificates-issued-by-adcs-for-smart-card-authentication-in-idm_managing-smart-card-authentication |
Chapter 27. Managing global DNS configuration in IdM using Ansible playbooks | Chapter 27. Managing global DNS configuration in IdM using Ansible playbooks Using the Red Hat Ansible Engine dnsconfig module, you can configure global configuration for Identity Management (IdM) DNS. Settings defined in global DNS configuration are applied to all IdM DNS servers. However, the global configuration has lower priority than the configuration for a specific IdM DNS zone. The dnsconfig module supports the following variables: The global forwarders, specifically their IP addresses and the port used for communication. The global forwarding policy: only, first, or none. For more details on these types of DNS forward policies, see DNS forward policies in IdM . The synchronization of forward lookup and reverse lookup zones. Prerequisites DNS service is installed on the IdM server. For more information about how to install an IdM server with integrated DNS, see one of the following links: Installing an IdM server: With integrated DNS, with an integrated CA as the root CA Installing an IdM server: With integrated DNS, with an external CA as the root CA Installing an IdM server: With integrated DNS, without a CA This chapter includes the following sections: How IdM ensures that global forwarders from /etc/resolv.conf are not removed by NetworkManager Ensuring the presence of a DNS global forwarder in IdM using Ansible Ensuring the absence of a DNS global forwarder in IdM using Ansible The action: member option in ipadnsconfig ansible-freeipa modules An introduction to DNS forward policies in IdM Using an Ansible playbook to ensure that the forward first policy is set in IdM DNS global configuration Using an Ansible playbook to ensure that global forwarders are disabled in IdM DNS Using an Ansible playbook to ensure that synchronization of forward and reverse lookup zones is disabled in IdM DNS 27.1. How IdM ensures that global forwarders from /etc/resolv.conf are not removed by NetworkManager Installing Identity Management (IdM) with integrated DNS configures the /etc/resolv.conf file to point to the 127.0.0.1 localhost address: In certain environments, such as networks that use Dynamic Host Configuration Protocol (DHCP), the NetworkManager service may revert changes to the /etc/resolv.conf file. To make the DNS configuration persistent, the IdM DNS installation process also configures the NetworkManager service in the following way: The DNS installation script creates an /etc/NetworkManager/conf.d/zzz-ipa.conf NetworkManager configuration file to control the search order and DNS server list: The NetworkManager service is reloaded, which always creates the /etc/resolv.conf file with the settings from the last file in the /etc/NetworkManager/conf.d/ directory. This is in this case the zzz-ipa.conf file. Important Do not modify the /etc/resolv.conf file manually. 27.2. Ensuring the presence of a DNS global forwarder in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the presence of a DNS global forwarder in IdM. In the example procedure below, the IdM administrator ensures the presence of a DNS global forwarder to a DNS server with an Internet Protocol (IP) v4 address of 7.7.9.9 and IP v6 address of 2001:db8::1:0 on port 53 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the forwarders-absent.yml Ansible playbook file. For example: Open the ensure-presence-of-a-global-forwarder.yml file for editing. Adapt the file by setting the following variables: Change the name variable for the playbook to Playbook to ensure the presence of a global forwarder in IdM DNS . In the tasks section, change the name of the task to Ensure the presence of a DNS global forwarder to 7.7.9.9 and 2001:db8::1:0 on port 53 . In the forwarders section of the ipadnsconfig portion: Change the first ip_address value to the IPv4 address of the global forwarder: 7.7.9.9 . Change the second ip_address value to the IPv6 address of the global forwarder: 2001:db8::1:0 . Verify the port value is set to 53 . Change the state to present . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory 27.3. Ensuring the absence of a DNS global forwarder in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure the absence of a DNS global forwarder in IdM. In the example procedure below, the IdM administrator ensures the absence of a DNS global forwarder with an Internet Protocol (IP) v4 address of 8.8.6.6 and IP v6 address of 2001:4860:4860::8800 on port 53 . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package.
In the forwarders section of the ipadnsconfig portion: Change the first ip_address value to the IPv4 address of the global forwarder: 8.8.6.6 . Change the second ip_address value to the IPv6 address of the global forwarder: 2001:4860:4860::8800 . Verify the port value is set to 53 . Set the action variable to member . Verify the state is set to absent . This is the modified Ansible playbook file for the current example: Important If you only use the state: absent option in your playbook without also using action: member , the playbook fails. Save the file. Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory The action: member option in ipadnsconfig ansible-freeipa modules 27.4. The action: member option in ipadnsconfig ansible-freeipa modules Excluding global forwarders in Identity Management (IdM) by using the ansible-freeipa ipadnsconfig module requires using the action: member option in addition to the state: absent option. If you only use state: absent in your playbook without also using action: member , the playbook fails. Consequently, to remove all global forwarders, you must specify all of them individually in the playbook. In contrast, the state: present option does not require action: member . The following table provides configuration examples for both adding and removing DNS global forwarders that demonstrate the correct use of the action: member option. The table shows, in each line: The global forwarders configured before executing a playbook An excerpt from the playbook The global forwarders configured after executing the playbook Table 27.1. ipadnsconfig management of global forwarders Forwarders before Playbook excerpt Forwarders after 8.8.6.6 8.8.6.7 8.8.6.6 8.8.6.6, 8.8.6.7 8.8.6.6, 8.8.6.7 Trying to execute the playbook results in an error. The original configuration - 8.8.6.6, 8.8.6.7 - is left unchanged. 8.8.6.6, 8.8.6.7 8.8.6.6 27.5. DNS forward policies in IdM IdM supports the standard BIND forward policies first and only , as well as the IdM-specific none forward policy. Forward first (default) The IdM BIND service forwards DNS queries to the configured forwarder. If a query fails because of a server error or timeout, BIND falls back to the recursive resolution using servers on the Internet. The forward first policy is the default policy, and it is suitable for optimizing DNS traffic. Forward only The IdM BIND service forwards DNS queries to the configured forwarder. If a query fails because of a server error or timeout, BIND returns an error to the client. The forward only policy is recommended for environments with split DNS configuration. None (forwarding disabled) DNS queries are not forwarded with the none forwarding policy. Disabling forwarding is only useful as a zone-specific override for global forwarding configuration. This option is the IdM equivalent of specifying an empty list of forwarders in BIND configuration. Note You cannot use forwarding to combine data in IdM with data from other DNS servers. You can only forward queries for specific subzones of the primary zone in IdM DNS. By default, the BIND service does not forward queries to another server if the queried DNS name belongs to a zone for which the IdM server is authoritative. In such a situation, if the queried DNS name cannot be found in the IdM database, the NXDOMAIN answer is returned. Forwarding is not used. Example 27.1. Example Scenario The IdM server is authoritative for the test.example. DNS zone.
BIND is configured to forward queries to the DNS server with the 192.0.2.254 IP address. When a client sends a query for the nonexistent.test.example. DNS name, BIND detects that the IdM server is authoritative for the test.example. zone and does not forward the query to the 192.0.2.254. server. As a result, the DNS client receives the NXDomain error message, informing the user that the queried domain does not exist. 27.6. Using an Ansible playbook to ensure that the forward first policy is set in IdM DNS global configuration Follow this procedure to use an Ansible playbook to ensure that global forwarding policy in IdM DNS is set to forward first . If you use the forward first DNS forwarding policy, DNS queries are forwarded to the configured forwarder. If a query fails because of a server error or timeout, BIND falls back to the recursive resolution using servers on the Internet. The forward first policy is the default policy. It is suitable for traffic optimization. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Your IdM environment contains an integrated DNS server. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the set-configuration.yml Ansible playbook file. For example: Open the set-forward-policy-to-first.yml file for editing. Adapt the file by setting the following variables in the ipadnsconfig task section: Set the ipaadmin_password variable to your IdM administrator password. Set the forward_policy variable to first . Delete all the other lines of the original playbook that are irrelevant. This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS forward policies in IdM The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory For more sample playbooks, see the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory. 27.7. Using an Ansible playbook to ensure that global forwarders are disabled in IdM DNS Follow this procedure to use an Ansible playbook to ensure that global forwarders are disabled in IdM DNS. The disabling is done by setting the forward_policy variable to none . Disabling global forwarders causes DNS queries not to be forwarded. Disabling forwarding is only useful as a zone-specific override for global forwarding configuration. This option is the IdM equivalent of specifying an empty list of forwarders in BIND configuration. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Your IdM environment contains an integrated DNS server. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the disable-global-forwarders.yml Ansible playbook file. For example: Open the disable-global-forwarders-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsconfig task section: Set the ipaadmin_password variable to your IdM administrator password. Set the forward_policy variable to none . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS forward policies in IdM The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory 27.8. Using an Ansible playbook to ensure that synchronization of forward and reverse lookup zones is disabled in IdM DNS Follow this procedure to use an Ansible playbook to ensure that forward and reverse lookup zones are not synchronized in IdM DNS. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. Your IdM environment contains an integrated DNS server. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the disallow-reverse-sync.yml Ansible playbook file. For example: Open the disallow-reverse-sync-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsconfig task section: Set the ipaadmin_password variable to your IdM administrator password. Set the allow_sync_ptr variable to no . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources The README-dnsconfig.md file in the /usr/share/doc/ansible-freeipa/ directory For more sample playbooks, see the /usr/share/doc/ansible-freeipa/playbooks/dnsconfig directory. | [
"Generated by NetworkManager search idm.example.com nameserver 127.0.0.1",
"auto-generated by IPA installer [main] dns=default [global-dns] searches=USDDOMAIN [global-dns-domain-*] servers=127.0.0.1",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp forwarders-absent.yml ensure-presence-of-a-global-forwarder.yml",
"--- - name: Playbook to ensure the presence of a global forwarder in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the presence of a DNS global forwarder to 7.7.9.9 and 2001:db8::1:0 on port 53 ipadnsconfig: forwarders: - ip_address: 7.7.9.9 - ip_address: 2001:db8::1:0 port: 53 state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-of-a-global-forwarder.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp forwarders-absent.yml ensure-absence-of-a-global-forwarder.yml",
"--- - name: Playbook to ensure the absence of a global forwarder in IdM DNS hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the absence of a DNS global forwarder to 8.8.6.6 and 2001:4860:4860::8800 on port 53 ipadnsconfig: forwarders: - ip_address: 8.8.6.6 - ip_address: 2001:4860:4860::8800 port: 53 action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-absence-of-a-global-forwarder.yml",
"[...] tasks: - name: Ensure the presence of DNS global forwarder 8.8.6.7 ipadnsconfig: forwarders: - ip_address: 8.8.6.7 state: present",
"[...] tasks: - name: Ensure the presence of DNS global forwarder 8.8.6.7 ipadnsconfig: forwarders: - ip_address: 8.8.6.7 action: member state: present",
"[...] tasks: - name: Ensure the absence of DNS global forwarder 8.8.6.7 ipadnsconfig: forwarders: - ip_address: 8.8.6.7 state: absent",
"[...] tasks: - name: Ensure the absence of DNS global forwarder 8.8.6.7 ipadnsconfig: forwarders: - ip_address: 8.8.6.7 action: member state: absent",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp set-configuration.yml set-forward-policy-to-first.yml",
"--- - name: Playbook to set global forwarding policy to first hosts: ipaserver become: true tasks: - name: Set global forwarding policy to first. ipadnsconfig: ipaadmin_password: \"{{ ipaadmin_password }}\" forward_policy: first",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file set-forward-policy-to-first.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp disable-global-forwarders.yml disable-global-forwarders-copy.yml",
"--- - name: Playbook to disable global DNS forwarders hosts: ipaserver become: true tasks: - name: Disable global forwarders. ipadnsconfig: ipaadmin_password: \"{{ ipaadmin_password }}\" forward_policy: none",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file disable-global-forwarders-copy.yml",
"cd /usr/share/doc/ansible-freeipa/playbooks/dnsconfig",
"[ipaserver] server.idm.example.com",
"cp disallow-reverse-sync.yml disallow-reverse-sync-copy.yml",
"--- - name: Playbook to disallow reverse record synchronization hosts: ipaserver become: true tasks: - name: Disallow reverse record synchronization. ipadnsconfig: ipaadmin_password: \"{{ ipaadmin_password }}\" allow_sync_ptr: no",
"ansible-playbook --vault-password-file=password_file -v -i inventory.file disallow-reverse-sync-copy.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/managing-global-dns-configuration-in-idm-using-ansible-playbooks_using-ansible-to-install-and-manage-idm |
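A quick way to confirm the effect of either playbook is to inspect the resulting global DNS configuration directly on the IdM server. The following sketch is not part of the official procedure; it assumes the ipa CLI is available on the server, that you hold a valid admin Kerberos ticket, and that the exact output field labels may vary between IdM versions:
# Obtain a ticket and display the global DNS configuration
kinit admin
ipa dnsconfig-show
# After disable-global-forwarders-copy.yml, expect the forward policy to be reported as "none";
# after disallow-reverse-sync-copy.yml, expect PTR record synchronization to be reported as disabled.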
Chapter 3. Adding ActiveDocs to 3scale | Chapter 3. Adding ActiveDocs to 3scale 3scale offers a framework to create interactive documentation for your API. With OpenAPI Specification (OAS) , you have functional documentation for your API, which will help your developers explore, test, and integrate with your API. 3.1. Setting up ActiveDocs in 3scale You can add ActiveDocs to your API in the 3scale user interface to obtain a framework for creating interactive documentation for your API. Prerequisites An OpenAPI document that defines your API. A 3scale 2.15 instance tenant's credentials ( token or provider_key ). Procedure Navigate to [your_API_name] ActiveDocs in your Admin Portal. 3scale displays the list of your service specifications for your API. This is initially empty. You can add as many service specifications as you want. Typically, each service specification corresponds to one of your APIs. For example, 3scale has specifications for each 3scale API , such as Service Management, Account Management, Analytics, and Billing. Click Create a new spec . When you add a new service specification, provide the following: Name System name This is required to reference the service specification from the Developer Portal. Choose whether you want the specification to be published or not. If you do not publish, the new specification will not be available in the Developer Portal. Note If you create, but do not publish your new specification, it will remain available to you for publication at a later time of your choosing. Add a description that is meant for only your consumption. Add the API JSON specification. Generate the specification of your API according to the specification proposed by OpenAPI Specification (OAS) . In this tutorial we assume that you already have a valid OAS-compliant specification of your API. Working with your first ActiveDoc After you add your first ActiveDoc, you can see it listed in [your_API_name] ActiveDocs . You can edit it as necessary, delete it, or switch it from public to private. You can detach it from your API or attach it to any other API. You can see all your ActiveDocs, whether or not they are attached to an API in Audience Developer Portal ActiveDocs . You can preview what your ActiveDocs looks like by clicking the name you gave the service specification, for example, Pet Store. You can do this even if the specification is not published yet. This is what an ActiveDoc looks like: | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/providing_apis_in_the_developer_portal/adding-activedocs-to-threescale_creating-a-new-service-based-on-oas |
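If you prefer to script this step instead of pasting the specification into the Admin Portal, the 3scale Account Management API is commonly used to manage ActiveDocs specs. The following curl sketch is illustrative only: the endpoint path and parameter names are assumptions to verify against your 3scale API reference, and the admin portal domain, access token, and file name are placeholders:
# Create a published ActiveDocs spec from a local OAS file (hypothetical endpoint and values)
curl -X POST "https://your-admin-portal.3scale.net/admin/api/active_docs.json" \
     -d access_token="<ACCESS_TOKEN>" \
     -d name="Pet Store" \
     -d system_name="pet_store" \
     -d published=true \
     --data-urlencode body@petstore-oas.json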
7.57. fence-virt | 7.57. fence-virt 7.57.1. RHBA-2013:0419 - fence-virt bug fix and enhancement update Updated fence-virt packages that fix two bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The fence-virt packages provide a fencing agent for virtual machines as well as a host agent, which processes fencing requests. Bug Fixes BZ# 761228 Previously, the fence_virt man page contained incorrect information in the "SERIAL/VMCHANNEL PARAMETERS" section. With this update, the man page has been corrected. BZ# 853927 Previously, the fence_virtd daemon returned an incorrect error code to the fence_virt agent when the virt domain did not exist. Consequently, the fence_node utility occasionally failed to detect fencing. With this update, the error codes have been changed and the described error no longer occurs. Enhancements BZ# 823542 The "delay" (-w) option has been added to the fence_virt and fence_xvm fencing agents. The delay option can be used, for example, as a method of preloading a winner in a fence race in a CMAN cluster. BZ# 843104 With this update, the documentation of the "hash" parameter in the fence_virt.conf file has been improved to notify that hash is the weakest hashing algorithm allowed for client requests. All users of fence-virt are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/fence-virt |
Chapter 10. OpenShift deployment options with the RHPAM Kogito Operator | Chapter 10. OpenShift deployment options with the RHPAM Kogito Operator After you create your Red Hat build of Kogito microservices as part of a business application, you can use the Red Hat OpenShift Container Platform web console to deploy your microservices. The RHPAM Kogito Operator page in the OpenShift web console guides you through the deployment process. The RHPAM Kogito Operator supports the following options for building and deploying Red Hat build of Kogito microservices on Red Hat OpenShift Container Platform: Git source build and deployment Binary build and deployment Custom image build and deployment File build and deployment 10.1. Deploying Red Hat build of Kogito microservices on OpenShift using Git source build and OpenShift web console The RHPAM Kogito Operator uses the following custom resources to deploy domain-specific microservices (the microservices that you develop): KogitoBuild builds an application using the Git URL or other sources and produces a runtime image. KogitoRuntime starts the runtime image and configures it as per your requirements. In most use cases, you can use the standard runtime build and deployment method to deploy Red Hat build of Kogito microservices on OpenShift from a Git repository source, as shown in the following procedure. Note If you are developing or testing your Red Hat build of Kogito microservice locally, you can use the binary build, custom image build, or file build option to build and deploy from a local source instead of from a Git repository. Prerequisites The RHPAM Kogito Operator is installed. The application with your Red Hat build of Kogito microservices is in a Git repository that is reachable from your OpenShift environment. You have access to the OpenShift web console with the necessary permissions to create and edit KogitoBuild and KogitoRuntime . (Red Hat build of Quarkus only) The pom.xml file of your project contains the following dependency for the quarkus-smallrye-health extension. This extension enables the liveness and readiness probes that are required for Red Hat build of Quarkus projects on OpenShift. SmallRye Health dependency for Red Hat build of Quarkus applications on OpenShift <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency> Procedure Go to Operators Installed Operators and select RHPAM Kogito Operator . To create the Red Hat build of Kogito build definition, on the operator page, select the Kogito Build tab and click Create KogitoBuild . In the application window, use Form View or YAML View to configure the build definition. 
At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: RemoteSource gitSource: uri: 'https://github.com/kiegroup/kogito-examples' # Git repository containing application (uses default branch) contextDir: dmn-quarkus-example # Git folder location of application Example YAML definition for a Spring Boot application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: RemoteSource gitSource: uri: 'https://github.com/kiegroup/kogito-examples' # Git repository containing application (uses default branch) contextDir: dmn-springboot-example # Git folder location of application Note If you configured an internal Maven repository, you can use it as a Maven mirror service and specify the Maven mirror URL in your Red Hat build of Kogito build definition to shorten build time substantially: spec: mavenMirrorURL: http://nexus3-nexus.apps-crc.testing/repository/maven-public/ For more information about internal Maven repositories, see the Apache Maven documentation. After you define your application data, click Create to generate the Red Hat build of Kogito build. Your application is listed in the Red Hat build of KogitoBuilds page. You can select the application name to view or modify application settings and YAML details. To create the Red Hat build of Kogito microservice definition, on the operator page, select the Kogito Runtime tab and click Create KogitoRuntime . In the application window, use Form View or YAML View to configure the microservice definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name Example YAML definition for a Spring Boot application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot Note In this case, the application is built from Git and deployed using KogitoRuntime. You must ensure that the application name is same in KogitoBuild and KogitoRuntime . After you define your application data, click Create to generate the Red Hat build of Kogito microservice. Your application is listed in the Red Hat build of Kogito microservice page. You can select the application name to view or modify application settings and the contents of the YAML file. In the left menu of the web console, go to Builds Builds to view the status of your application build. You can select a specific build to view build details. Note For every Red Hat build of Kogito microservice that you create for OpenShift deployment, two builds are generated and listed in the Builds page in the web console: a traditional runtime build and a Source-to-Image (S2I) build with the suffix -builder . 
The S2I mechanism builds the application in an OpenShift build and then passes the built application to the OpenShift build to be packaged into the runtime container image. The Red Hat build of Kogito S2I build configuration also enables you to build the project directly from a Git repository on the OpenShift platform. After the application build is complete, go to Workloads Deployments to view the application deployments, pod status, and other details. After your Red Hat build of Kogito microservice is deployed, in the left menu of the web console, go to Networking Routes to view the access link to the deployed application. You can select the application name to view or modify route settings. With the application route, you can integrate your Red Hat build of Kogito microservices with your business automation solutions as needed. 10.2. Deploying Red Hat build of Kogito microservices on OpenShift using binary build and OpenShift web console OpenShift builds can require extensive amounts of time. As a faster alternative for building and deploying your Red Hat build of Kogito microservices on OpenShift, you can use a binary build. The operator uses the following custom resources to deploy domain-specific microservices (the microservices that you develop): KogitoBuild processes an uploaded application and produces a runtime image. KogitoRuntime starts the runtime image and configures it as per your requirements. Prerequisites The RHPAM Kogito Operator is installed. The oc OpenShift CLI is installed and you are logged in to the relevant OpenShift cluster. For oc installation and login instructions, see the OpenShift documentation . You have access to the OpenShift web console with the necessary permissions to create and edit KogitoBuild and KogitoRuntime . (Red Hat build of Quarkus only) The pom.xml file of your project contains the following dependency for the quarkus-smallrye-health extension. This extension enables the liveness and readiness probes that are required for Red Hat build of Quarkus projects on OpenShift. SmallRye Health dependency for Red Hat build of Quarkus applications on OpenShift <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency> Procedure Build an application locally. Go to Operators Installed Operators and select RHPAM Kogito Operator . To create the Red Hat build of Kogito build definition, on the operator page, select the Kogito Build tab and click Create KogitoBuild . In the application window, use Form View or YAML View to configure the build definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: Binary Example YAML definition for a Spring Boot application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: Binary After you define your application data, click Create to generate the Red Hat build of Kogito build. Your application is listed in the Red Hat build of KogitoBuilds page. You can select the application name to view or modify application settings and YAML details. 
Upload the built binary using the following command: from-dir is equals to the target folder path of the built application. namespace is the namespace where KogitoBuild is created. To create the Red Hat build of Kogito microservice definition, on the operator page, select the Kogito Runtime tab and click Create KogitoRuntime . In the application window, use Form View or YAML View to configure the microservice definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name Example YAML definition for a Spring Boot application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot Note In this case, the application is built locally and deployed using KogitoRuntime. You must ensure that the application name is same in KogitoBuild and KogitoRuntime . After you define your application data, click Create to generate the Red Hat build of Kogito microservice. Your application is listed in the Red Hat build of Kogito microservice page. You can select the application name to view or modify application settings and the contents of the YAML file. In the left menu of the web console, go to Builds Builds to view the status of your application build. You can select a specific build to view build details. After the application build is complete, go to Workloads Deployments to view the application deployments, pod status, and other details. After your Red Hat build of Kogito microservice is deployed, in the left menu of the web console, go to Networking Routes to view the access link to the deployed application. You can select the application name to view or modify route settings. With the application route, you can integrate your Red Hat build of Kogito microservices with your business automation solutions as needed. 10.3. Deploying Red Hat build of Kogito microservices on OpenShift using custom image build and OpenShift web console You can use custom image build as an alternative for building and deploying your Red Hat build of Kogito microservices on OpenShift. The operator uses the following custom resources to deploy domain-specific microservices (the microservices that you develop): KogitoRuntime starts the runtime image and configures it as per your requirements. Note The Red Hat Decision Manager builder image does not supports native builds. However, you can perform a custom build and use Containerfile to build the container image as shown in the following example: FROM registry.redhat.io/rhpam-7-tech-preview/rhpam-kogito-runtime-native-rhel8:7.13.5 ENV RUNTIME_TYPE quarkus COPY --chown=1001:root target/*-runner USDKOGITO_HOME/bin This feature is Technology Preview only. To build the native binary with Mandrel, see Compiling your Quarkus applications to native executables . Prerequisites The RHPAM Kogito Operator is installed. You have access to the OpenShift web console with the necessary permissions to create and edit KogitoRuntime . (Red Hat build of Quarkus only) The pom.xml file of your project contains the following dependency for the quarkus-smallrye-health extension. 
This extension enables the liveness and readiness probes that are required for Red Hat build of Quarkus projects on OpenShift. SmallRye Health dependency for Red Hat build of Quarkus applications on OpenShift <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency> Procedure Build an application locally. Create Containerfile in the project root folder with the following content: Example Containerfile for a Red Hat build of Quarkus application Example Containerfile for a Spring Boot application application-jar-file is the name of the JAR file of the application. Build the Red Hat build of Kogito image using the following command: In the command, final-image-name is the name of the Red Hat build of Kogito image and Container-file is name of the Containerfile that you created in the step. Optionally, test the built image using the following command: Push the built Red Hat build of Kogito image to an image registry using the following command: Go to Operators Installed Operators and select RHPAM Kogito Operator . To create the Red Hat build of Kogito microservice definition, on the operator page, select the Kogito Runtime tab and click Create KogitoRuntime . In the application window, use Form View or YAML View to configure the microservice definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name spec: image: <final-image-name> # Kogito image name insecureImageRegistry: true # Can be omitted when image is pushed into secured registry with valid certificate Example YAML definition for a Spring Boot application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: image: <final-image-name> # Kogito image name insecureImageRegistry: true # Can be omitted when image is pushed into secured registry with valid certificate runtime: springboot After you define your application data, click Create to generate the Red Hat build of Kogito microservice. Your application is listed in the Red Hat build of Kogito microservice page. You can select the application name to view or modify application settings and the contents of the YAML file. After the application build is complete, go to Workloads Deployments to view the application deployments, pod status, and other details. After your Red Hat build of Kogito microservice is deployed, in the left menu of the web console, go to Networking Routes to view the access link to the deployed application. You can select the application name to view or modify route settings. With the application route, you can integrate your Red Hat build of Kogito microservices with your business automation solutions as needed. 10.4. Deploying Red Hat build of Kogito microservices on OpenShift using file build and OpenShift web console You can build and deploy your Red Hat build of Kogito microservices from a single file, such as a Decision Model and Notation (DMN), Drools Rule Language (DRL), or properties file, or from a directory with multiple files. 
You can specify a single file from your local file system path or specify a file directory from a local file system path only. When you upload the file or directory to an OpenShift cluster, a new Source-to-Image (S2I) build is automatically triggered. The operator uses the following custom resources to deploy domain-specific microservices (the microservices that you develop): KogitoBuild generates an application from a file and produces a runtime image. KogitoRuntime starts the runtime image and configures it as per your requirements. Prerequisites The RHPAM Kogito Operator is installed. The oc OpenShift CLI is installed and you are logged in to the relevant OpenShift cluster. For oc installation and login instructions, see the OpenShift documentation . You have access to the OpenShift web console with the necessary permissions to create and edit KogitoBuild and KogitoRuntime . Procedure Go to Operators Installed Operators and select RHPAM Kogito Operator . To create the Red Hat build of Kogito build definition, on the operator page, select the Kogito Build tab and click Create KogitoBuild . In the application window, use Form View or YAML View to configure the build definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: LocalSource Example YAML definition for a Spring Boot application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: LocalSource Note If you configured an internal Maven repository, you can use it as a Maven mirror service and specify the Maven mirror URL in your Red Hat build of Kogito build definition to shorten build time substantially: spec: mavenMirrorURL: http://nexus3-nexus.apps-crc.testing/repository/maven-public/ For more information about internal Maven repositories, see the Apache Maven documentation. After you define your application data, click Create to generate the Red Hat build of Kogito build. Your application is listed in the Red Hat build of KogitoBuilds page. You can select the application name to view or modify application settings and YAML details. Upload the file asset using the following command: file-asset-path is the path of the file asset that you want to upload. namespace is the namespace where KogitoBuild is created. To create the Red Hat build of Kogito microservice definition, on the operator page, select the Kogito Runtime tab and click Create KogitoRuntime . In the application window, use Form View or YAML View to configure the microservice definition. 
At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name Example YAML definition for a Spring Boot application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot Note In this case, the application is built from a file and deployed using KogitoRuntime. You must ensure that the application name is same in KogitoBuild and KogitoRuntime . After you define your application data, click Create to generate the Red Hat build of Kogito microservice. Your application is listed in the Red Hat build of Kogito microservice page. You can select the application name to view or modify application settings and the contents of the YAML file. In the left menu of the web console, go to Builds Builds to view the status of your application build. You can select a specific build to view build details. Note For every Red Hat build of Kogito microservice that you create for OpenShift deployment, two builds are generated and listed in the Builds page in the web console: a traditional runtime build and a Source-to-Image (S2I) build with the suffix -builder . The S2I mechanism builds the application in an OpenShift build and then passes the built application to the OpenShift build to be packaged into the runtime container image. After the application build is complete, go to Workloads Deployments to view the application deployments, pod status, and other details. After your Red Hat build of Kogito microservice is deployed, in the left menu of the web console, go to Networking Routes to view the access link to the deployed application. You can select the application name to view or modify route settings. With the application route, you can integrate your Red Hat build of Kogito microservices with your business automation solutions as needed. | [
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency>",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: RemoteSource gitSource: uri: 'https://github.com/kiegroup/kogito-examples' # Git repository containing application (uses default branch) contextDir: dmn-quarkus-example # Git folder location of application",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: RemoteSource gitSource: uri: 'https://github.com/kiegroup/kogito-examples' # Git repository containing application (uses default branch) contextDir: dmn-springboot-example # Git folder location of application",
"spec: mavenMirrorURL: http://nexus3-nexus.apps-crc.testing/repository/maven-public/",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency>",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: Binary",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: Binary",
"oc start-build example-quarkus --from-dir=target/ -n namespace",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency>",
"FROM registry.redhat.io/rhpam-7/rhpam-kogito-runtime-jvm-rhel8:7.13.5 ENV RUNTIME_TYPE quarkus COPY target/quarkus-app/lib/ USDKOGITO_HOME/bin/lib/ COPY target/quarkus-app/*.jar USDKOGITO_HOME/bin COPY target/quarkus-app/app/ USDKOGITO_HOME/bin/app/ COPY target/quarkus-app/quarkus/ USDKOGITO_HOME/bin/quarkus/",
"FROM registry.redhat.io/rhpam-7/rhpam-kogito-runtime-jvm-rhel8:7.13.5 ENV RUNTIME_TYPE springboot COPY target/<application-jar-file> USDKOGITO_HOME/bin",
"build --tag <final-image-name> -f <Container-file>",
"run --rm -it -p 8080:8080 <final-image-name>",
"push <final-image-name>",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name spec: image: <final-image-name> # Kogito image name insecureImageRegistry: true # Can be omitted when image is pushed into secured registry with valid certificate",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: image: <final-image-name> # Kogito image name insecureImageRegistry: true # Can be omitted when image is pushed into secured registry with valid certificate runtime: springboot",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: LocalSource",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: LocalSource",
"spec: mavenMirrorURL: http://nexus3-nexus.apps-crc.testing/repository/maven-public/",
"oc start-build example-quarkus-builder --from-file=<file-asset-path> -n namespace",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name",
"apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/con-kogito-operator-deployment-options_deploying-kogito-microservices-on-openshift |
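Whichever of the build options you use, the resources created by the operator can also be monitored from the command line once the oc CLI is logged in to the cluster. The resource and route names below are taken from the examples in this chapter, and the plural resource names are assumptions that you can confirm with oc api-resources:
# List the custom resources managed by the RHPAM Kogito Operator
oc get kogitobuilds
oc get kogitoruntimes
# Follow the logs of the S2I builder build for the Quarkus example
oc logs -f bc/example-quarkus-builder
# Print the host name of the route exposed for the deployed microservice
oc get route example-quarkus -o jsonpath='{.spec.host}'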
Chapter 28. Clustering | Chapter 28. Clustering resource-agents component, BZ# 1077888 The CTDB agent used to implement High Availability samba does not work as expected in Red Hat Enterprise Linux 7. If you wish to configure clustered Samba for Red Hat Enterprise Linux 7, follow the steps in this Knowledgebase article: https://access.redhat.com/site/articles/912273 | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/known-issues-clustering |
function::task_euid | function::task_euid Name function::task_euid - The effective user identifier of the task Synopsis Arguments task task_struct pointer Description This function returns the effective user id of the given task. | [
"task_euid:long(task:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-euid |
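A short SystemTap one-liner illustrates how task_euid is commonly combined with task_current to report the effective UID at a probe point. This is a sketch only; the probe point and output format are arbitrary choices, and it assumes the kernel debuginfo packages required by SystemTap are installed:
# Print the effective UID and executable name of every process that calls clone()
stap -e 'probe syscall.clone { printf("euid=%d exe=%s\n", task_euid(task_current()), execname()) }'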
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/proc_providing-feedback-on-red-hat-documentation_default |
2.3. Migration | 2.3. Migration Migration describes the process of moving a guest virtual machine from one host to another. This is possible because the virtual machines are running in a virtualized environment instead of directly on the hardware. There are two ways to migrate a virtual machine: live and offline. Migration Types Offline migration An offline migration suspends the guest virtual machine, and then moves an image of the virtual machine's memory to the destination host. The virtual machine is then resumed on the destination host and the memory used by the virtual machine on the source host is freed. Live migration Live migration is the process of migrating an active virtual machine from one physical host to another. Note that this is not possible between all Red Hat Enterprise Linux releases. Consult the Virtualization Administration Guide for details. 2.3.1. Benefits of Migrating Virtual Machines Migration is useful for: Load balancing When a host machine is overloaded, one or more of its virtual machines could be migrated to other hosts using live migration. Similarly, machines that are not running and tend to overload can be migrated using offline migration. Upgrading or making changes to the host When the need arises to upgrade, add, or remove hardware devices on a host, virtual machines can be safely relocated to other hosts. This means that guests do not experience any downtime due to changes that are made to hosts. Energy saving Virtual machines can be redistributed to other hosts and the unloaded host systems can be powered off to save energy and cut costs in low usage periods. Geographic migration Virtual machines can be moved to other physical locations for lower latency or for other reasons. When the migration process moves a virtual machine's memory, from Red Hat Enterprise Linux 6.3, the disk volume associated with the virtual machine is also migrated. This process is performed using live block migration. Shared, networked storage can be used to store guest images to be migrated. When migrating virtual machines, it is recommended to use libvirt -managed storage pools for shared storage. Note For more information on migration, refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/sec-migration |
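On a libvirt-managed KVM host, both migration types described above map onto the virsh migrate command. The sketch below is illustrative: guest1 and desthost are placeholders, shared storage visible to both hosts is assumed, and the options that apply to your environment should be checked against the Virtualization Administration Guide:
# Live migration: the guest keeps running while its memory is copied to the destination host
virsh migrate --live --verbose guest1 qemu+ssh://desthost/system
# Omitting --live performs the suspend-copy-resume style of migration described above as offline migration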
6.10. Enabling and Disabling Cluster Resources | 6.10. Enabling and Disabling Cluster Resources The following command enables the resource specified by resource_id . The following command disables the resource specified by resource_id . | [
"pcs resource enable resource_id",
"pcs resource disable resource_id"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-starting_stopping_resources-haar |
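For example, to take a resource named VirtualIP out of service for maintenance and later return it, substitute the resource name for resource_id. The resource name and the status check below are illustrative:
# Stop the resource and keep it stopped until it is explicitly re-enabled
pcs resource disable VirtualIP
# Confirm the current state of all resources
pcs status resources
# Let the cluster start the resource again
pcs resource enable VirtualIP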
High Availability Guide | High Availability Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services | [
"aws ec2 create-vpc --cidr-block 192.168.0.0/16 --tag-specifications \"ResourceType=vpc, Tags=[{Key=AuroraCluster,Value=keycloak-aurora}]\" \\ 1 --region eu-west-1",
"{ \"Vpc\": { \"CidrBlock\": \"192.168.0.0/16\", \"DhcpOptionsId\": \"dopt-0bae7798158bc344f\", \"State\": \"pending\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"InstanceTenancy\": \"default\", \"Ipv6CidrBlockAssociationSet\": [], \"CidrBlockAssociationSet\": [ { \"AssociationId\": \"vpc-cidr-assoc-09a02a83059ba5ab6\", \"CidrBlock\": \"192.168.0.0/16\", \"CidrBlockState\": { \"State\": \"associated\" } } ], \"IsDefault\": false } }",
"aws ec2 create-subnet --availability-zone \"eu-west-1a\" --vpc-id vpc-0b40bd7c59dbe4277 --cidr-block 192.168.0.0/19 --region eu-west-1",
"{ \"Subnet\": { \"AvailabilityZone\": \"eu-west-1a\", \"AvailabilityZoneId\": \"euw1-az3\", \"AvailableIpAddressCount\": 8187, \"CidrBlock\": \"192.168.0.0/19\", \"DefaultForAz\": false, \"MapPublicIpOnLaunch\": false, \"State\": \"available\", \"SubnetId\": \"subnet-0d491a1a798aa878d\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"AssignIpv6AddressOnCreation\": false, \"Ipv6CidrBlockAssociationSet\": [], \"SubnetArn\": \"arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-0d491a1a798aa878d\", \"EnableDns64\": false, \"Ipv6Native\": false, \"PrivateDnsNameOptionsOnLaunch\": { \"HostnameType\": \"ip-name\", \"EnableResourceNameDnsARecord\": false, \"EnableResourceNameDnsAAAARecord\": false } } }",
"aws ec2 create-subnet --availability-zone \"eu-west-1b\" --vpc-id vpc-0b40bd7c59dbe4277 --cidr-block 192.168.32.0/19 --region eu-west-1",
"{ \"Subnet\": { \"AvailabilityZone\": \"eu-west-1b\", \"AvailabilityZoneId\": \"euw1-az1\", \"AvailableIpAddressCount\": 8187, \"CidrBlock\": \"192.168.32.0/19\", \"DefaultForAz\": false, \"MapPublicIpOnLaunch\": false, \"State\": \"available\", \"SubnetId\": \"subnet-057181b1e3728530e\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"AssignIpv6AddressOnCreation\": false, \"Ipv6CidrBlockAssociationSet\": [], \"SubnetArn\": \"arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-057181b1e3728530e\", \"EnableDns64\": false, \"Ipv6Native\": false, \"PrivateDnsNameOptionsOnLaunch\": { \"HostnameType\": \"ip-name\", \"EnableResourceNameDnsARecord\": false, \"EnableResourceNameDnsAAAARecord\": false } } }",
"aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0b40bd7c59dbe4277 --region eu-west-1",
"{ \"RouteTables\": [ { \"Associations\": [ { \"Main\": true, \"RouteTableAssociationId\": \"rtbassoc-02dfa06f4c7b4f99a\", \"RouteTableId\": \"rtb-04a644ad3cd7de351\", \"AssociationState\": { \"State\": \"associated\" } } ], \"PropagatingVgws\": [], \"RouteTableId\": \"rtb-04a644ad3cd7de351\", \"Routes\": [ { \"DestinationCidrBlock\": \"192.168.0.0/16\", \"GatewayId\": \"local\", \"Origin\": \"CreateRouteTable\", \"State\": \"active\" } ], \"Tags\": [], \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\" } ] }",
"aws ec2 associate-route-table --route-table-id rtb-04a644ad3cd7de351 --subnet-id subnet-0d491a1a798aa878d --region eu-west-1",
"aws ec2 associate-route-table --route-table-id rtb-04a644ad3cd7de351 --subnet-id subnet-057181b1e3728530e --region eu-west-1",
"aws rds create-db-subnet-group --db-subnet-group-name keycloak-aurora-subnet-group --db-subnet-group-description \"Aurora DB Subnet Group\" --subnet-ids subnet-0d491a1a798aa878d subnet-057181b1e3728530e --region eu-west-1",
"aws ec2 create-security-group --group-name keycloak-aurora-security-group --description \"Aurora DB Security Group\" --vpc-id vpc-0b40bd7c59dbe4277 --region eu-west-1",
"{ \"GroupId\": \"sg-0d746cc8ad8d2e63b\" }",
"aws rds create-db-cluster --db-cluster-identifier keycloak-aurora --database-name keycloak --engine aurora-postgresql --engine-version USD{properties[\"aurora-postgresql.version\"]} --master-username keycloak --master-user-password secret99 --vpc-security-group-ids sg-0d746cc8ad8d2e63b --db-subnet-group-name keycloak-aurora-subnet-group --region eu-west-1",
"{ \"DBCluster\": { \"AllocatedStorage\": 1, \"AvailabilityZones\": [ \"eu-west-1b\", \"eu-west-1c\", \"eu-west-1a\" ], \"BackupRetentionPeriod\": 1, \"DatabaseName\": \"keycloak\", \"DBClusterIdentifier\": \"keycloak-aurora\", \"DBClusterParameterGroup\": \"default.aurora-postgresql15\", \"DBSubnetGroup\": \"keycloak-aurora-subnet-group\", \"Status\": \"creating\", \"Endpoint\": \"keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\", \"ReaderEndpoint\": \"keycloak-aurora.cluster-ro-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\", \"MultiAZ\": false, \"Engine\": \"aurora-postgresql\", \"EngineVersion\": \"15.3\", \"Port\": 5432, \"MasterUsername\": \"keycloak\", \"PreferredBackupWindow\": \"02:21-02:51\", \"PreferredMaintenanceWindow\": \"fri:03:34-fri:04:04\", \"ReadReplicaIdentifiers\": [], \"DBClusterMembers\": [], \"VpcSecurityGroups\": [ { \"VpcSecurityGroupId\": \"sg-0d746cc8ad8d2e63b\", \"Status\": \"active\" } ], \"HostedZoneId\": \"Z29XKXDKYMONMX\", \"StorageEncrypted\": false, \"DbClusterResourceId\": \"cluster-IBWXUWQYM3MS5BH557ZJ6ZQU4I\", \"DBClusterArn\": \"arn:aws:rds:eu-west-1:606671647913:cluster:keycloak-aurora\", \"AssociatedRoles\": [], \"IAMDatabaseAuthenticationEnabled\": false, \"ClusterCreateTime\": \"2023-11-01T10:40:45.964000+00:00\", \"EngineMode\": \"provisioned\", \"DeletionProtection\": false, \"HttpEndpointEnabled\": false, \"CopyTagsToSnapshot\": false, \"CrossAccountClone\": false, \"DomainMemberships\": [], \"TagList\": [], \"AutoMinorVersionUpgrade\": true, \"NetworkType\": \"IPV4\" } }",
"aws rds create-db-instance --db-cluster-identifier keycloak-aurora --db-instance-identifier \"keycloak-aurora-instance-1\" --db-instance-class db.t4g.large --engine aurora-postgresql --region eu-west-1",
"aws rds create-db-instance --db-cluster-identifier keycloak-aurora --db-instance-identifier \"keycloak-aurora-instance-2\" --db-instance-class db.t4g.large --engine aurora-postgresql --region eu-west-1",
"aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-1 --region eu-west-1 aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-2 --region eu-west-1",
"aws rds describe-db-clusters --db-cluster-identifier keycloak-aurora --query 'DBClusters[*].Endpoint' --region eu-west-1 --output text",
"[ \"keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\" ]",
"aws ec2 describe-vpcs --filters \"Name=tag:AuroraCluster,Values=keycloak-aurora\" --query 'Vpcs[*].VpcId' --region eu-west-1 --output text",
"vpc-0b40bd7c59dbe4277",
"NODE=USD(oc get nodes --selector=node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}') aws ec2 describe-instances --filters \"Name=private-dns-name,Values=USD{NODE}\" --query 'Reservations[0].Instances[0].VpcId' --region eu-west-1 --output text",
"vpc-0b721449398429559",
"aws ec2 create-vpc-peering-connection --vpc-id vpc-0b721449398429559 \\ 1 --peer-vpc-id vpc-0b40bd7c59dbe4277 \\ 2 --peer-region eu-west-1 --region eu-west-1",
"{ \"VpcPeeringConnection\": { \"AccepterVpcInfo\": { \"OwnerId\": \"606671647913\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"Region\": \"eu-west-1\" }, \"ExpirationTime\": \"2023-11-08T13:26:30+00:00\", \"RequesterVpcInfo\": { \"CidrBlock\": \"10.0.17.0/24\", \"CidrBlockSet\": [ { \"CidrBlock\": \"10.0.17.0/24\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b721449398429559\", \"Region\": \"eu-west-1\" }, \"Status\": { \"Code\": \"initiating-request\", \"Message\": \"Initiating Request to 606671647913\" }, \"Tags\": [], \"VpcPeeringConnectionId\": \"pcx-0cb23d66dea3dca9f\" } }",
"aws ec2 wait vpc-peering-connection-exists --vpc-peering-connection-ids pcx-0cb23d66dea3dca9f",
"aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0cb23d66dea3dca9f --region eu-west-1",
"{ \"VpcPeeringConnection\": { \"AccepterVpcInfo\": { \"CidrBlock\": \"192.168.0.0/16\", \"CidrBlockSet\": [ { \"CidrBlock\": \"192.168.0.0/16\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"Region\": \"eu-west-1\" }, \"RequesterVpcInfo\": { \"CidrBlock\": \"10.0.17.0/24\", \"CidrBlockSet\": [ { \"CidrBlock\": \"10.0.17.0/24\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b721449398429559\", \"Region\": \"eu-west-1\" }, \"Status\": { \"Code\": \"provisioning\", \"Message\": \"Provisioning\" }, \"Tags\": [], \"VpcPeeringConnectionId\": \"pcx-0cb23d66dea3dca9f\" } }",
"ROSA_PUBLIC_ROUTE_TABLE_ID=USD(aws ec2 describe-route-tables --filters \"Name=vpc-id,Values=vpc-0b721449398429559\" \"Name=association.main,Values=true\" \\ 1 --query \"RouteTables[*].RouteTableId\" --output text --region eu-west-1 ) aws ec2 create-route --route-table-id USD{ROSA_PUBLIC_ROUTE_TABLE_ID} --destination-cidr-block 192.168.0.0/16 \\ 2 --vpc-peering-connection-id pcx-0cb23d66dea3dca9f --region eu-west-1",
"AURORA_SECURITY_GROUP_ID=USD(aws ec2 describe-security-groups --filters \"Name=group-name,Values=keycloak-aurora-security-group\" --query \"SecurityGroups[*].GroupId\" --region eu-west-1 --output text ) aws ec2 authorize-security-group-ingress --group-id USD{AURORA_SECURITY_GROUP_ID} --protocol tcp --port 5432 --cidr 10.0.17.0/24 \\ 1 --region eu-west-1",
"{ \"Return\": true, \"SecurityGroupRules\": [ { \"SecurityGroupRuleId\": \"sgr-0785d2f04b9cec3f5\", \"GroupId\": \"sg-0d746cc8ad8d2e63b\", \"GroupOwnerId\": \"606671647913\", \"IsEgress\": false, \"IpProtocol\": \"tcp\", \"FromPort\": 5432, \"ToPort\": 5432, \"CidrIpv4\": \"10.0.17.0/24\" } ] }",
"USER=keycloak 1 PASSWORD=secret99 2 DATABASE=keycloak 3 HOST=USD(aws rds describe-db-clusters --db-cluster-identifier keycloak-aurora \\ 4 --query 'DBClusters[*].Endpoint' --region eu-west-1 --output text ) run -i --tty --rm debug --image=postgres:15 --restart=Never -- psql postgresql://USD{USER}:USD{PASSWORD}@USD{HOST}/USD{DATABASE}",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: labels: app: keycloak name: keycloak namespace: keycloak spec: hostname: hostname: <KEYCLOAK_URL_HERE> resources: requests: cpu: \"2\" memory: \"1250M\" limits: cpu: \"6\" memory: \"2250M\" db: vendor: postgres url: jdbc:aws-wrapper:postgresql://<AWS_AURORA_URL_HERE>:5432/keycloak poolMinSize: 30 1 poolInitialSize: 30 poolMaxSize: 30 usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password image: <KEYCLOAK_IMAGE_HERE> 2 startOptimized: false 3 features: enabled: - multi-site 4 transaction: xaEnabled: false 5 additionalOptions: - name: http-max-queued-requests value: \"1000\" - name: log-console-output value: json - name: metrics-enabled 6 value: 'true' - name: http-pool-max-threads 7 value: \"66\" - name: db-driver value: software.amazon.jdbc.Driver http: tlsSecret: keycloak-tls-secret instances: 3",
"wait --for=condition=Ready keycloaks.k8s.keycloak.org/keycloak wait --for=condition=RollingUpdate=False keycloaks.k8s.keycloak.org/keycloak",
"spec: additionalOptions: - name: http-max-queued-requests value: \"1000\"",
"spec: ingress: enabled: true annotations: # When running load tests, disable sticky sessions on the OpenShift HAProxy router # to avoid receiving all requests on a single Red Hat build of Keycloak Pod. haproxy.router.openshift.io/balance: roundrobin haproxy.router.openshift.io/disable_cookies: 'true'",
"credentials: - username: developer password: strong-password roles: - admin",
"apiVersion: v1 kind: Secret type: Opaque metadata: name: connect-secret namespace: keycloak data: identities.yaml: Y3JlZGVudGlhbHM6CiAgLSB1c2VybmFtZTogZGV2ZWxvcGVyCiAgICBwYXNzd29yZDogc3Ryb25nLXBhc3N3b3JkCiAgICByb2xlczoKICAgICAgLSBhZG1pbgo= 1",
"create secret generic connect-secret --from-file=identities.yaml",
"apiVersion: v1 kind: Secret metadata: name: ispn-xsite-sa-token 1 annotations: kubernetes.io/service-account.name: \"xsite-sa\" 2 type: kubernetes.io/service-account-token",
"create sa -n keycloak xsite-sa policy add-role-to-user view -n keycloak -z xsite-sa create -f xsite-sa-secret-token.yaml get secrets ispn-xsite-sa-token -o jsonpath=\"{.data.token}\" | base64 -d > Site-A-token.txt",
"create sa -n keycloak xsite-sa policy add-role-to-user view -n keycloak -z xsite-sa create -f xsite-sa-secret-token.yaml get secrets ispn-xsite-sa-token -o jsonpath=\"{.data.token}\" | base64 -d > Site-B-token.txt",
"create secret generic -n keycloak xsite-token-secret --from-literal=token=\"USD(cat Site-B-token.txt)\"",
"create secret generic -n keycloak xsite-token-secret --from-literal=token=\"USD(cat Site-A-token.txt)\"",
"-n keycloak create secret generic xsite-keystore-secret --from-file=keystore.p12=\"./certs/keystore.p12\" \\ 1 --from-literal=password=secret \\ 2 --from-literal=type=pkcs12 3",
"-n keycloak create secret generic xsite-truststore-secret --from-file=truststore.p12=\"./certs/truststore.p12\" \\ 1 --from-literal=password=caSecret \\ 2 --from-literal=type=pkcs12 3",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan 1 namespace: keycloak annotations: infinispan.org/monitoring: 'true' 2 spec: replicas: 3 security: endpointSecretName: connect-secret 3 service: type: DataGrid sites: local: name: site-a 4 expose: type: Route 5 maxRelayNodes: 128 encryption: transportKeyStore: secretName: xsite-keystore-secret 6 alias: xsite 7 filename: keystore.p12 8 routerKeyStore: secretName: xsite-keystore-secret 9 alias: xsite 10 filename: keystore.p12 11 trustStore: secretName: xsite-truststore-secret 12 filename: truststore.p12 13 locations: - name: site-b 14 clusterName: infinispan namespace: keycloak 15 url: openshift://api.site-b 16 secretName: xsite-token-secret 17",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan 1 namespace: keycloak annotations: infinispan.org/monitoring: 'true' 2 spec: replicas: 3 security: endpointSecretName: connect-secret 3 service: type: DataGrid sites: local: name: site-b 4 expose: type: Route 5 maxRelayNodes: 128 encryption: transportKeyStore: secretName: xsite-keystore-secret 6 alias: xsite 7 filename: keystore.p12 8 routerKeyStore: secretName: xsite-keystore-secret 9 alias: xsite 10 filename: keystore.p12 11 trustStore: secretName: xsite-truststore-secret 12 filename: truststore.p12 13 locations: - name: site-a 14 clusterName: infinispan namespace: keycloak 15 url: openshift://api.site-a 16 secretName: xsite-token-secret 17",
"apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: sessions namespace: keycloak spec: clusterName: infinispan name: sessions template: |- distributedCache: mode: \"SYNC\" owners: \"2\" statistics: \"true\" remoteTimeout: 14000 stateTransfer: chunkSize: 16 backups: mergePolicy: ALWAYS_REMOVE 1 site-b: 2 backup: strategy: \"SYNC\" 3 timeout: 13000 stateTransfer: chunkSize: 16",
"apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: sessions namespace: keycloak spec: clusterName: infinispan name: sessions template: |- distributedCache: mode: \"SYNC\" owners: \"2\" statistics: \"true\" remoteTimeout: 14000 stateTransfer: chunkSize: 16 backups: mergePolicy: ALWAYS_REMOVE 1 site-a: 2 backup: strategy: \"SYNC\" 3 timeout: 13000 stateTransfer: chunkSize: 16",
"wait --for condition=WellFormed --timeout=300s infinispans.infinispan.org -n keycloak infinispan",
"wait --for condition=CrossSiteViewFormed --timeout=300s infinispans.infinispan.org -n keycloak infinispan",
"apiVersion: v1 kind: Secret metadata: name: remote-store-secret namespace: keycloak type: Opaque data: username: ZGV2ZWxvcGVy # base64 encoding for 'developer' password: c2VjdXJlX3Bhc3N3b3Jk # base64 encoding for 'secure_password'",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: labels: app: keycloak name: keycloak namespace: keycloak spec: additionalOptions: - name: cache-remote-host 1 value: \"infinispan.keycloak.svc\" - name: cache-remote-port 2 value: \"11222\" - name: cache-remote-username 3 secret: name: remote-store-secret key: username - name: cache-remote-password 4 secret: name: remote-store-secret key: password - name: spi-connections-infinispan-quarkus-site-name 5 value: keycloak",
"HOSTNAME=USD(oc -n openshift-ingress get svc router-default -o jsonpath='{.status.loadBalancer.ingress[].hostname}' ) aws elbv2 describe-load-balancers --query \"LoadBalancers[?DNSName=='USD{HOSTNAME}'].{CanonicalHostedZoneId:CanonicalHostedZoneId,DNSName:DNSName}\" --region eu-west-1 \\ 1 --output json",
"[ { \"CanonicalHostedZoneId\": \"Z2IFOLAFXWLO4F\", \"DNSName\": \"ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com\" } ]",
"function createHealthCheck() { # Creating a hash of the caller reference to allow for names longer than 64 characters REF=(USD(echo USD1 | sha1sum )) aws route53 create-health-check --caller-reference \"USDREF\" --query \"HealthCheck.Id\" --no-cli-pager --output text --health-check-config ' { \"Type\": \"HTTPS\", \"ResourcePath\": \"/lb-check\", \"FullyQualifiedDomainName\": \"'USD1'\", \"Port\": 443, \"RequestInterval\": 30, \"FailureThreshold\": 1, \"EnableSNI\": true } ' } CLIENT_DOMAIN=\"client.keycloak-benchmark.com\" 1 PRIMARY_DOMAIN=\"primary.USD{CLIENT_DOMAIN}\" 2 BACKUP_DOMAIN=\"backup.USD{CLIENT_DOMAIN}\" 3 createHealthCheck USD{PRIMARY_DOMAIN} createHealthCheck USD{BACKUP_DOMAIN}",
"233e180f-f023-45a3-954e-415303f21eab 1 799e2cbb-43ae-4848-9b72-0d9173f04912 2",
"HOSTED_ZONE_ID=\"Z09084361B6LKQQRCVBEY\" 1 PRIMARY_LB_HOSTED_ZONE_ID=\"Z2IFOLAFXWLO4F\" PRIMARY_LB_DNS=ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com PRIMARY_HEALTH_ID=233e180f-f023-45a3-954e-415303f21eab BACKUP_LB_HOSTED_ZONE_ID=\"Z2IFOLAFXWLO4F\" BACKUP_LB_DNS=a184a0e02a5d44a9194e517c12c2b0ec-1203036292.elb.eu-west-1.amazonaws.com BACKUP_HEALTH_ID=799e2cbb-43ae-4848-9b72-0d9173f04912 aws route53 change-resource-record-sets --hosted-zone-id Z09084361B6LKQQRCVBEY --query \"ChangeInfo.Id\" --output text --change-batch ' { \"Comment\": \"Creating Record Set for 'USD{CLIENT_DOMAIN}'\", \"Changes\": [{ \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{PRIMARY_DOMAIN}'\", \"Type\": \"A\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{PRIMARY_LB_HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{PRIMARY_LB_DNS}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{BACKUP_DOMAIN}'\", \"Type\": \"A\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{BACKUP_LB_HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{BACKUP_LB_DNS}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{CLIENT_DOMAIN}'\", \"Type\": \"A\", \"SetIdentifier\": \"client-failover-primary-'USD{SUBDOMAIN}'\", \"Failover\": \"PRIMARY\", \"HealthCheckId\": \"'USD{PRIMARY_HEALTH_ID}'\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{PRIMARY_DOMAIN}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{CLIENT_DOMAIN}'\", \"Type\": \"A\", \"SetIdentifier\": \"client-failover-backup-'USD{SUBDOMAIN}'\", \"Failover\": \"SECONDARY\", \"HealthCheckId\": \"'USD{BACKUP_HEALTH_ID}'\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{BACKUP_DOMAIN}'\", \"EvaluateTargetHealth\": true } } }] } '",
"/change/C053410633T95FR9WN3YI",
"aws route53 wait resource-record-sets-changed --id /change/C053410633T95FR9WN3YI",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: keycloak spec: hostname: hostname: USD{CLIENT_DOMAIN} 1",
"cat <<EOF | oc apply -n USDNAMESPACE -f - 1 apiVersion: route.openshift.io/v1 kind: Route metadata: name: aws-health-route spec: host: USDDOMAIN 2 port: targetPort: https tls: insecureEdgeTerminationPolicy: Redirect termination: passthrough to: kind: Service name: keycloak-service weight: 100 wildcardPolicy: None EOF",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"offline\" }",
"aws rds failover-db-cluster --db-cluster-identifier",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"offline\" }",
"clearcache actionTokens clearcache authenticationSessions clearcache clientSessions clearcache loginFailures clearcache offlineClientSessions clearcache offlineSessions clearcache sessions clearcache work",
"site bring-online --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"online\" }",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site push-site-state --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"online\" }",
"site push-site-status --cache=actionTokens site push-site-status --cache=authenticationSessions site push-site-status --cache=clientSessions site push-site-status --cache=loginFailures site push-site-status --cache=offlineClientSessions site push-site-status --cache=offlineSessions site push-site-status --cache=sessions site push-site-status --cache=work",
"{ \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" } { \"site-b\" : \"OK\" }",
"site push-site-state --cache=<cache-name> --site=site-b",
"site clear-push-site-status --cache=actionTokens site clear-push-site-status --cache=authenticationSessions site clear-push-site-status --cache=clientSessions site clear-push-site-status --cache=loginFailures site clear-push-site-status --cache=offlineClientSessions site clear-push-site-status --cache=offlineSessions site clear-push-site-status --cache=sessions site clear-push-site-status --cache=work",
"\"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\"",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site take-offline --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"offline\" }",
"clearcache actionTokens clearcache authenticationSessions clearcache clientSessions clearcache loginFailures clearcache offlineClientSessions clearcache offlineSessions clearcache sessions clearcache work",
"site bring-online --all-caches --site=site-b",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-b",
"{ \"status\" : \"online\" }",
"-n keycloak exec -it pods/infinispan-0 -- ./bin/cli.sh --trustall --connect https://127.0.0.1:11222",
"Username: developer Password: [infinispan-0-29897@ISPN//containers/default]>",
"site push-site-state --all-caches --site=site-a",
"{ \"offlineClientSessions\" : \"ok\", \"authenticationSessions\" : \"ok\", \"sessions\" : \"ok\", \"clientSessions\" : \"ok\", \"work\" : \"ok\", \"offlineSessions\" : \"ok\", \"loginFailures\" : \"ok\", \"actionTokens\" : \"ok\" }",
"site status --all-caches --site=site-a",
"{ \"status\" : \"online\" }",
"site push-site-status --cache=actionTokens site push-site-status --cache=authenticationSessions site push-site-status --cache=clientSessions site push-site-status --cache=loginFailures site push-site-status --cache=offlineClientSessions site push-site-status --cache=offlineSessions site push-site-status --cache=sessions site push-site-status --cache=work",
"{ \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" } { \"site-a\" : \"OK\" }",
"site push-site-state --cache=<cache-name> --site=site-a",
"site clear-push-site-status --cache=actionTokens site clear-push-site-status --cache=authenticationSessions site clear-push-site-status --cache=clientSessions site clear-push-site-status --cache=loginFailures site clear-push-site-status --cache=offlineClientSessions site clear-push-site-status --cache=offlineSessions site clear-push-site-status --cache=sessions site clear-push-site-status --cache=work",
"\"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\" \"ok\"",
"aws rds failover-db-cluster --db-cluster-identifier",
"apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: take-offline namespace: keycloak 1 spec: cluster: infinispan 2 config: | 3 site take-offline --all-caches --site=site-a site status --all-caches --site=site-a",
"-n keycloak wait --for=jsonpath='{.status.phase}'=Succeeded Batch/take-offline"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/high_availability_guide/index |
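The Route 53 commands above create one health check per load balancer and wire them into primary and backup failover records. Before switching traffic between sites, it is worth confirming that both health checks actually report healthy observations. A minimal verification sketch, assuming the AWS CLI is already configured and using the illustrative health check IDs shown in the example output above:

# Print the observed status for each Route 53 health check.
for HEALTH_ID in 233e180f-f023-45a3-954e-415303f21eab 799e2cbb-43ae-4848-9b72-0d9173f04912; do
  aws route53 get-health-check-status --health-check-id "${HEALTH_ID}" \
    --query "HealthCheckObservations[].StatusReport.Status" --output text
done

Each observation is expected to report a status beginning with Success once the /lb-check endpoint responds over HTTPS; substitute your own health check IDs as returned by createHealthCheck.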
Chapter 13. Image Storage (glance) Parameters | Chapter 13. Image Storage (glance) Parameters You can modify the glance service with image service parameters. Parameter Description CephClusterName The Ceph cluster name. The default value is ceph . CephConfigPath The path where the Ceph Cluster configuration files are stored on the host. The default value is /var/lib/tripleo-config/ceph . CinderEnableNVMeOFBackend Whether to enable or not the NVMeOF backend for OpenStack Block Storage (cinder). The default value is false . CinderNVMeOFAvailabilityZone The availability zone of the NVMeOF OpenStack Block Storage (cinder) backend. When set, it overrides the default CinderStorageAvailabilityZone. CinderNVMeOFTargetProtocol The target protocol, supported values are nvmet_rdma and nvmet_tcp. The default value is nvmet_rdma . EnableSQLAlchemyCollectd Set to true to enable the SQLAlchemy-collectd server plugin. The default value is false . GlanceApiOptVolumes List of optional volumes to be mounted. GlanceBackend The short name of the OpenStack Image Storage (glance) backend to use. Should be one of swift, rbd, cinder, or file. The default value is swift . GlanceBackendID The default backend's identifier. The default value is default_backend . GlanceCacheEnabled Enable OpenStack Image Storage (glance) Image Cache. The default value is False . GlanceCinderLockPath The oslo.coordination lock_path to use when glance is using cinder as a store. Using cinder's lock_path ensures os-brick operations are synchronized across both services. The default value is /var/lib/cinder/tmp . GlanceCinderMountPointBase The mount point base when glance is using cinder as store and cinder backend is NFS. This mount point is where the NFS volume is mounted on the glance node. The default value is /var/lib/glance/mnt . GlanceCinderVolumeType A unique volume type required for each cinder store while configuring multiple cinder stores as glance backends. The same volume types must be configured in OpenStack Block Storage (cinder) as well. The volume type must exist in cinder prior to any attempt to add an image in the associated cinder store. If no volume type is specified then cinder's default volume type will be used. GlanceCronDbPurgeAge Cron to purge database entries marked as deleted and older than USDage - Age. The default value is 30 . GlanceCronDbPurgeDestination Cron to purge database entries marked as deleted and older than USDage - Log destination. The default value is /var/log/glance/glance-rowsflush.log . GlanceCronDbPurgeHour Cron to purge database entries marked as deleted and older than USDage - Hour. The default value is 0 . GlanceCronDbPurgeMaxDelay Cron to purge database entries marked as deleted and older than USDage - Max Delay. The default value is 3600 . GlanceCronDbPurgeMaxRows Cron to purge database entries marked as deleted and older than USDage - Max Rows. The default value is 100 . GlanceCronDbPurgeMinute Cron to purge database entries marked as deleted and older than USDage - Minute. The default value is 1 . GlanceCronDbPurgeMonth Cron to purge database entries marked as deleted and older than USDage - Month. The default value is * . GlanceCronDbPurgeMonthday Cron to purge database entries marked as deleted and older than USDage - Month Day. The default value is * . GlanceCronDbPurgeUser Cron to purge database entries marked as deleted and older than USDage - User. The default value is glance . 
GlanceCronDbPurgeWeekday Cron to purge database entries marked as deleted and older than USDage - Week Day. The default value is * . GlanceDiskFormats List of allowed disk formats in Glance; all formats are allowed when left unset. GlanceEnabledImportMethods List of enabled Image Import Methods. Valid values in the list are glance-direct , web-download , or copy-image . The default value is web-download . GlanceIgnoreUserRoles List of user roles to be ignored for injecting image metadata properties. The default value is admin . GlanceImageCacheDir Base directory that the Image Cache uses. The default value is /var/lib/glance/image-cache . GlanceImageCacheMaxSize The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. The default value is 10737418240 . GlanceImageCacheStallTime The amount of time, in seconds, to let an image remain in the cache without being accessed. The default value is 86400 . GlanceImageConversionOutputFormat Desired output format for image conversion plugin. The default value is raw . GlanceImageImportPlugins List of enabled Image Import Plugins. Valid values in the list are image_conversion , inject_metadata , no_op . The default value is ['no_op'] . GlanceImageMemberQuota Maximum number of image members per image. Negative values evaluate to unlimited. The default value is 128 . GlanceImagePrefetcherInterval The interval in seconds to run periodic job cache_images. The default value is 300 . GlanceInjectMetadataProperties Metadata properties to be injected in image. GlanceLogFile The filepath of the file to use for logging messages from OpenStack Image Storage (glance). GlanceMultistoreConfig Dictionary of settings when configuring additional glance backends. The hash key is the backend ID, and the value is a dictionary of parameter values unique to that backend. Multiple rbd and cinder backends are allowed, but file and swift backends are limited to one each. Example: # Default glance store is rbd. GlanceBackend: rbd GlanceStoreDescription: Default rbd store # GlanceMultistoreConfig specifies a second rbd backend, plus a cinder # backend. GlanceMultistoreConfig: rbd2_store: GlanceBackend: rbd GlanceStoreDescription: Second rbd store CephClusterName: ceph2 # Override CephClientUserName if this cluster uses a different # client name. CephClientUserName: client2 cinder1_store: GlanceBackend: cinder GlanceCinderVolumeType: volume-type-1 GlanceStoreDescription: First cinder store cinder2_store: GlanceBackend: cinder GlanceCinderVolumeType: volume-type-2 GlanceStoreDescription: Seconde cinder store . GlanceNetappNfsEnabled When using GlanceBackend: file , Netapp mounts NFS share for image storage. The default value is false . GlanceNfsEnabled When using GlanceBackend: file , mount NFS share for image storage. The default value is false . GlanceNfsOptions NFS mount options for image storage when GlanceNfsEnabled is true. The default value is _netdev,bg,intr,context=system_u:object_r:container_file_t:s0 . GlanceNfsShare NFS share to mount for image storage when GlanceNfsEnabled is true. GlanceNodeStagingUri URI that specifies the staging location to use when importing images. The default value is file:///var/lib/glance/staging . GlanceNotifierStrategy Strategy to use for OpenStack Image Storage (glance) notification queue. The default value is noop . GlancePassword The password for the image storage service and database account. 
GlanceShowMultipleLocations Whether to show multiple image locations, e.g. for copy-on-write support on RBD or Netapp backends. Potential security risk, see glance.conf for more information. The default value is false . GlanceSparseUploadEnabled Whether to enable sparse upload when using the file or rbd GlanceBackend. The default value is false . GlanceStagingNfsOptions NFS mount options for NFS image import staging. The default value is _netdev,bg,intr,context=system_u:object_r:container_file_t:s0 . GlanceStagingNfsShare NFS share to mount for image import staging. GlanceStoreDescription User facing description for the OpenStack Image Storage (glance) backend. The default value is Default glance store backend. . GlanceWorkers Set the number of workers for the image storage service. Note that more workers create a larger number of processes on systems, which results in excess memory consumption. It is recommended to choose a suitable non-default value on systems with high CPU core counts. 0 sets it to the OpenStack internal default, which is equal to the number of CPU cores on the node. MemcacheUseAdvancedPool Use the advanced (eventlet safe) memcached client pool. The default value is true . MultipathdEnable Whether to enable the multipath daemon. The default value is false . NetappShareLocation Netapp share to mount for image storage (when GlanceNetappNfsEnabled is true). NotificationDriver Driver or drivers to handle sending notifications. The default value is noop . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/overcloud_parameters/ref_image-storage-glance-parameters_overcloud_parameters
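The GlanceMultistoreConfig description above is easier to follow when laid out as structured YAML. A minimal sketch, assuming the parameters are supplied through a heat environment file under parameter_defaults, and using the illustrative backend IDs, volume type, and second Ceph cluster name from the example in the description:

parameter_defaults:
  # Default store served by the Image service.
  GlanceBackend: rbd
  GlanceStoreDescription: Default rbd store
  # Additional stores, keyed by backend ID.
  GlanceMultistoreConfig:
    rbd2_store:
      GlanceBackend: rbd
      GlanceStoreDescription: Second rbd store
      CephClusterName: ceph2
    cinder1_store:
      GlanceBackend: cinder
      GlanceCinderVolumeType: volume-type-1
      GlanceStoreDescription: First cinder store

Remember that each cinder store needs a matching volume type that already exists in the Block Storage service, as noted in the GlanceCinderVolumeType description.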
Chapter 8. ImageStream [image.openshift.io/v1] | Chapter 8. ImageStream [image.openshift.io/v1] Description An ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a container image repository on a registry. Users typically update the spec.tags field to point to external images which are imported from container registries using credentials in your namespace with the pull secret type, or to existing image stream tags and images which are immediately accessible for tagging or pulling. The history of images applied to a tag is visible in the status.tags field and any user who can view an image stream is allowed to tag that image into their own image streams. Access to pull images from the integrated registry is granted by having the "get imagestreams/layers" permission on a given image stream. Users may remove a tag by deleting the imagestreamtag resource, which causes both spec and status for that tag to be removed. Image stream history is retained until an administrator runs the prune operation, which removes references that are no longer in use. To preserve a historical image, ensure there is a tag in spec pointing to that image by its digest. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ImageStreamSpec represents options for ImageStreams. status object ImageStreamStatus contains information about the state of this image stream. 8.1.1. .spec Description ImageStreamSpec represents options for ImageStreams. Type object Property Type Description dockerImageRepository string dockerImageRepository is optional, if specified this stream is backed by a container repository on this server Deprecated: This field is deprecated as of v3.7 and will be removed in a future release. Specify the source for the tags to be imported in each tag via the spec.tags.from reference instead. lookupPolicy object ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. tags array tags map arbitrary string values to specific image locators tags[] object TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. 8.1.2. .spec.lookupPolicy Description ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. 
Type object Required local Property Type Description local boolean local will change the docker short image references (like "mysql" or "php:latest") on objects in this namespace to the image ID whenever they match this image stream, instead of reaching out to a remote registry. The name will be fully qualified to an image ID if found. The tag's referencePolicy is taken into account on the replaced value. Only works within the current namespace. 8.1.3. .spec.tags Description tags map arbitrary string values to specific image locators Type array 8.1.4. .spec.tags[] Description TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. Type object Required name Property Type Description annotations object (string) Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. from ObjectReference Optional; if specified, a reference to another image that this tag should point to. Valid values are ImageStreamTag, ImageStreamImage, and DockerImage. ImageStreamTag references can only reference a tag within this same ImageStream. generation integer Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference is changed the generation is set to match the current stream generation (which is incremented every time spec is changed). Other processes in the system like the image importer observe that the generation of spec tag is newer than the generation recorded in the status and use that as a trigger to import the newest remote tag. To trigger a new import, clients may set this value to zero which will reset the generation to the latest stream generation. Legacy clients will send this value as nil which will be merged with the current tag generation. importPolicy object TagImportPolicy controls how images related to this tag will be imported. name string Name of the tag reference boolean Reference states if the tag will be imported. Default value is false, which means the tag will be imported. referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. 8.1.5. .spec.tags[].importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 8.1.6. .spec.tags[].referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). 
The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 8.1.7. .status Description ImageStreamStatus contains information about the state of this image stream. Type object Required dockerImageRepository Property Type Description dockerImageRepository string DockerImageRepository represents the effective location this stream may be accessed at. May be empty until the server determines where the repository is located publicDockerImageRepository string PublicDockerImageRepository represents the public location from where the image can be pulled outside the cluster. This field may be empty if the administrator has not exposed the integrated registry externally. tags array Tags are a historical record of images associated with each tag. The first entry in the TagEvent array is the currently tagged image. tags[] object NamedTagEventList relates a tag to its image history. 8.1.8. .status.tags Description Tags are a historical record of images associated with each tag. The first entry in the TagEvent array is the currently tagged image. Type array 8.1.9. .status.tags[] Description NamedTagEventList relates a tag to its image history. Type object Required tag items Property Type Description conditions array Conditions is an array of conditions that apply to the tag event list. conditions[] object TagEventCondition contains condition information for a tag event. items array Standard object's metadata. items[] object TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. tag string Tag is the tag for which the history is recorded 8.1.10. .status.tags[].conditions Description Conditions is an array of conditions that apply to the tag event list. Type array 8.1.11. .status.tags[].conditions[] Description TagEventCondition contains condition information for a tag event. Type object Required type status generation Property Type Description generation integer Generation is the spec tag generation that this status corresponds to lastTransitionTime Time LastTransitionTIme is the time the condition transitioned from one status to another. message string Message is a human readable description of the details about last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of tag event condition, currently only ImportSuccess 8.1.12. .status.tags[].items Description Standard object's metadata. Type array 8.1.13. .status.tags[].items[] Description TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. Type object Required created dockerImageReference image generation Property Type Description created Time Created holds the time the TagEvent was created dockerImageReference string DockerImageReference is the string that can be used to pull this image generation integer Generation is the spec tag generation that resulted in this tag being updated image string Image is the image 8.2. 
API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/imagestreams GET : list or watch objects of kind ImageStream /apis/image.openshift.io/v1/watch/imagestreams GET : watch individual changes to a list of ImageStream. deprecated: use the 'watch' parameter with a list operation instead. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams DELETE : delete collection of ImageStream GET : list or watch objects of kind ImageStream POST : create an ImageStream /apis/image.openshift.io/v1/watch/namespaces/{namespace}/imagestreams GET : watch individual changes to a list of ImageStream. deprecated: use the 'watch' parameter with a list operation instead. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name} DELETE : delete an ImageStream GET : read the specified ImageStream PATCH : partially update the specified ImageStream PUT : replace the specified ImageStream /apis/image.openshift.io/v1/watch/namespaces/{namespace}/imagestreams/{name} GET : watch changes to an object of kind ImageStream. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/status GET : read status of the specified ImageStream PATCH : partially update status of the specified ImageStream PUT : replace status of the specified ImageStream 8.2.1. /apis/image.openshift.io/v1/imagestreams HTTP method GET Description list or watch objects of kind ImageStream Table 8.1. HTTP responses HTTP code Reponse body 200 - OK ImageStreamList schema 401 - Unauthorized Empty 8.2.2. /apis/image.openshift.io/v1/watch/imagestreams HTTP method GET Description watch individual changes to a list of ImageStream. deprecated: use the 'watch' parameter with a list operation instead. Table 8.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams HTTP method DELETE Description delete collection of ImageStream Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ImageStream Table 8.5. HTTP responses HTTP code Reponse body 200 - OK ImageStreamList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageStream Table 8.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.7. Body parameters Parameter Type Description body ImageStream schema Table 8.8. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 201 - Created ImageStream schema 202 - Accepted ImageStream schema 401 - Unauthorized Empty 8.2.4. /apis/image.openshift.io/v1/watch/namespaces/{namespace}/imagestreams HTTP method GET Description watch individual changes to a list of ImageStream. deprecated: use the 'watch' parameter with a list operation instead. Table 8.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name} Table 8.10. Global path parameters Parameter Type Description name string name of the ImageStream HTTP method DELETE Description delete an ImageStream Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageStream Table 8.13. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageStream Table 8.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.15. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 201 - Created ImageStream schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageStream Table 8.16. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.17. Body parameters Parameter Type Description body ImageStream schema Table 8.18. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 201 - Created ImageStream schema 401 - Unauthorized Empty 8.2.6. /apis/image.openshift.io/v1/watch/namespaces/{namespace}/imagestreams/{name} Table 8.19. Global path parameters Parameter Type Description name string name of the ImageStream HTTP method GET Description watch changes to an object of kind ImageStream. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.7. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/status Table 8.21. Global path parameters Parameter Type Description name string name of the ImageStream HTTP method GET Description read status of the specified ImageStream Table 8.22. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageStream Table 8.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.24. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 201 - Created ImageStream schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageStream Table 8.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.26. Body parameters Parameter Type Description body ImageStream schema Table 8.27. HTTP responses HTTP code Reponse body 200 - OK ImageStream schema 201 - Created ImageStream schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/image_apis/imagestream-image-openshift-io-v1 |
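As a concrete illustration of the spec fields documented above, the following is a minimal ImageStream manifest sketch. The name, namespace, and image pull spec are placeholders rather than values taken from this reference:

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: example
  namespace: my-project
spec:
  lookupPolicy:
    local: true
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: registry.example.com/team/app:latest
    importPolicy:
      scheduled: true
    referencePolicy:
      type: Local

After the first import, the status.tags field records the image history for the latest tag, and the Local reference policy causes generated pull specs to point at the integrated registry as described in section 8.1.6.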
Chapter 12. KafkaListenerAuthenticationCustom schema reference | Chapter 12. KafkaListenerAuthenticationCustom schema reference Used in: GenericKafkaListener Full list of KafkaListenerAuthenticationCustom schema properties To configure custom authentication, set the type property to custom . Custom authentication allows for any type of Kafka-supported authentication to be used. Example custom OAuth authentication configuration spec: kafka: config: principal.builder.class: SimplePrincipal.class listeners: - name: oauth-bespoke port: 9093 type: internal tls: true authentication: type: custom sasl: true listenerConfig: oauthbearer.sasl.client.callback.handler.class: client.class oauthbearer.sasl.server.callback.handler.class: server.class oauthbearer.sasl.login.callback.handler.class: login.class oauthbearer.connections.max.reauth.ms: 999999999 sasl.enabled.mechanisms: oauthbearer oauthbearer.sasl.jaas.config: | org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; secrets: - name: example A protocol map is generated that uses the sasl and tls values to determine which protocol to map to the listener. SASL = True, TLS = True SASL_SSL SASL = False, TLS = True SSL SASL = True, TLS = False SASL_PLAINTEXT SASL = False, TLS = False PLAINTEXT 12.1. listenerConfig Listener configuration specified using listenerConfig is prefixed with listener.name. <listener_name>-<port> . For example, sasl.enabled.mechanisms becomes listener.name. <listener_name>-<port> .sasl.enabled.mechanisms . 12.2. secrets Secrets are mounted to /opt/kafka/custom-authn-secrets/custom-listener- <listener_name>-<port> / <secret_name> in the Kafka broker nodes' containers. For example, the mounted secret ( example ) in the example configuration would be located at /opt/kafka/custom-authn-secrets/custom-listener-oauth-bespoke-9093/example . 12.3. Principal builder You can set a custom principal builder in the Kafka cluster configuration. However, the principal builder is subject to the following requirements: The specified principal builder class must exist on the image. Before building your own, check if one already exists. You'll need to rebuild the Streams for Apache Kafka images with the required classes. No other listener is using oauth type authentication. This is because an OAuth listener appends its own principle builder to the Kafka configuration. The specified principal builder is compatible with Streams for Apache Kafka. Custom principal builders must support peer certificates for authentication, as Streams for Apache Kafka uses these to manage the Kafka cluster. Note Kafka's default principal builder class supports the building of principals based on the names of peer certificates. The custom principal builder should provide a principal of type user using the name of the SSL peer certificate. The following example shows a custom principal builder that satisfies the OAuth requirements of Streams for Apache Kafka. 
Example principal builder for custom OAuth configuration public final class CustomKafkaPrincipalBuilder implements KafkaPrincipalBuilder { public KafkaPrincipalBuilder() {} @Override public KafkaPrincipal build(AuthenticationContext context) { if (context instanceof SslAuthenticationContext) { SSLSession sslSession = ((SslAuthenticationContext) context).session(); try { return new KafkaPrincipal( KafkaPrincipal.USER_TYPE, sslSession.getPeerPrincipal().getName()); } catch (SSLPeerUnverifiedException e) { throw new IllegalArgumentException("Cannot use an unverified peer for authentication", e); } } // Create your own KafkaPrincipal here ... } } 12.4. KafkaListenerAuthenticationCustom schema properties The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationCustom type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth . It must have the value custom for the type KafkaListenerAuthenticationCustom . Property Property type Description listenerConfig map Configuration to be used for a specific listener. All values are prefixed with listener.name. <listener_name> . sasl boolean Enable or disable SASL on this listener. secrets GenericSecretSource array Secrets to be mounted to /opt/kafka/custom-authn-secrets/custom-listener- <listener_name>-<port> / <secret_name> . type string Must be custom . | [
"spec: kafka: config: principal.builder.class: SimplePrincipal.class listeners: - name: oauth-bespoke port: 9093 type: internal tls: true authentication: type: custom sasl: true listenerConfig: oauthbearer.sasl.client.callback.handler.class: client.class oauthbearer.sasl.server.callback.handler.class: server.class oauthbearer.sasl.login.callback.handler.class: login.class oauthbearer.connections.max.reauth.ms: 999999999 sasl.enabled.mechanisms: oauthbearer oauthbearer.sasl.jaas.config: | org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; secrets: - name: example",
"public final class CustomKafkaPrincipalBuilder implements KafkaPrincipalBuilder { public KafkaPrincipalBuilder() {} @Override public KafkaPrincipal build(AuthenticationContext context) { if (context instanceof SslAuthenticationContext) { SSLSession sslSession = ((SslAuthenticationContext) context).session(); try { return new KafkaPrincipal( KafkaPrincipal.USER_TYPE, sslSession.getPeerPrincipal().getName()); } catch (SSLPeerUnverifiedException e) { throw new IllegalArgumentException(\"Cannot use an unverified peer for authentication\", e); } } // Create your own KafkaPrincipal here } }"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkalistenerauthenticationcustom-reference |
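The secrets listed under the custom listener are ordinary Kubernetes Secrets that must exist before the listener can mount them. A minimal sketch for the example secret above, assuming the cluster runs in a namespace named kafka and that the file name is illustrative:

oc create secret generic example -n kafka --from-file=./custom-auth.properties

Inside the broker containers the secret is then available under /opt/kafka/custom-authn-secrets/custom-listener-oauth-bespoke-9093/example, matching the mount path described in section 12.2.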
Chapter 5. Preparing PXE assets for OpenShift Container Platform | Chapter 5. Preparing PXE assets for OpenShift Container Platform Use the following procedures to create the assets needed to PXE boot an OpenShift Container Platform cluster using the Agent-based Installer. The assets you create in these procedures will deploy a single-node OpenShift Container Platform installation. You can use these procedures as a basis and modify configurations according to your requirements. See Installing an OpenShift Container Platform cluster with the Agent-based Installer to learn about more configurations available with the Agent-based Installer. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. 5.2. Downloading the Agent-based Installer Use this procedure to download the Agent-based Installer and the CLI needed for your installation. Note Currently, downloading the Agent-based Installer is not supported on the IBM Z(R) ( s390x ) architecture. The recommended method is by creating PXE assets. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Navigate to Datacenter . Click Run Agent-based Installer locally . Select the operating system and architecture for the OpenShift Installer and Command line interface . Click Download Installer to download and extract the install program. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret . Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH . 5.3. Creating the preferred configuration inputs Use this procedure to create the preferred configuration inputs used to create the PXE files. Procedure Install nmstate dependency by running the following command: USD sudo dnf install /usr/bin/nmstatectl -y Place the openshift-install binary in a directory that is on your PATH. Create a directory to store the install configuration by running the following command: USD mkdir ~/<directory_name> Note This is the preferred method for the Agent-based installation. Using GitOps ZTP manifests is optional. Create the install-config.yaml file by running the following command: USD cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF 1 Specify the system architecture, valid values are amd64 , arm64 , ppc64le , and s390x . 2 Required. Specify your cluster name. 3 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 4 Specify your platform. Note For bare metal platforms, host settings made in the platform section of the install-config.yaml file are used by default, unless they are overridden by configurations made in the agent-config.yaml file. 5 Specify your pull secret. 6 Specify your SSH public key. Note If you set the platform to vSphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) IPv6 is supported only on bare metal platforms. 
Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 Note When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file. Create the agent-config.yaml file by running the following command: USD cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF 1 This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig . 2 Optional: Host configuration. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. 3 Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods. 4 Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. 5 Optional: Configures the network interface of a host in NMState format. Optional: To create an iPXE script, add the bootArtifactsBaseURL to the agent-config.yaml file: apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 bootArtifactsBaseURL: <asset_server_URL> Where <asset_server_URL> is the URL of the server you will upload the PXE assets to. Additional resources Deploying with dual-stack networking . Configuring the install-config yaml file . See Configuring a three-node cluster to deploy three-node clusters in bare metal environments. About root device hints . NMState state examples . Optional: Creating additional manifest files 5.4. Creating the PXE assets Use the following procedure to create the assets and optional script to implement in your PXE infrastructure. Procedure Create the PXE assets by running the following command: USD openshift-install agent create pxe-files The generated PXE assets and optional iPXE script can be found in the boot-artifacts directory.
Example filesystem with PXE assets and optional iPXE script boot-artifacts ├── agent.x86_64-initrd.img ├── agent.x86_64.ipxe ├── agent.x86_64-rootfs.img └── agent.x86_64-vmlinuz Important The contents of the boot-artifacts directory vary depending on the specified architecture. Note Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration. Upload the PXE assets and optional script to your infrastructure where they will be accessible during the boot process. Note If you generated an iPXE script, the location of the assets must match the bootArtifactsBaseURL you added to the agent-config.yaml file. 5.5. Manually adding IBM Z agents After creating the PXE assets, you can add IBM Z(R) agents. Only use this procedure for IBM Z(R) clusters. Note Currently ISO boot is not supported on IBM Z(R) ( s390x ) architecture. Therefore, manually adding IBM Z(R) agents is required for Agent-based installations on IBM Z(R). Depending on your IBM Z(R) environment, you can choose from the following options: Adding IBM Z(R) agents with z/VM Adding IBM Z(R) agents with RHEL KVM 5.5.1. Adding IBM Z agents with z/VM Use the following procedure to manually add IBM Z(R) agents with z/VM. Only use this procedure for IBM Z(R) clusters with z/VM. Procedure Create a parameter file for the z/VM guest: Example parameter file rd.neednet=1 \ console=ttysclp0 \ coreos.live.rootfs_url=<rootfs_url> \ 1 ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ 2 zfcp.allow_lun_scan=0 \ 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.dasd=0.0.4411 \ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 5 random.trust_cpu=on rd.luks.options=discard \ ignition.firstboot ignition.platform.id=metal \ console=tty1 console=ttyS1,115200n8 \ coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" 1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported. 2 For the ip parameter, assign the IP address automatically using DHCP, or manually assign the IP address, as described in "Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE". 3 The default is 1 . Omit this entry when using an OSA network adapter. 4 For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. Omit this entry for FCP-type disks. 5 For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks. Leave all other parameters unchanged. Punch the kernel.img , generic.parm , and initrd.img files to the virtual reader of the z/VM guest virtual machine. For more information, see PUNCH (IBM Documentation). Tip You can use the CP PUNCH command or, if you use Linux, the vmur command, to transfer files between two z/VM guest virtual machines. Log in to the conversational monitor system (CMS) on the bootstrap machine. IPL the bootstrap machine from the reader by running the following command: USD ipl c For more information, see IPL (IBM Documentation). Additional resources Installing a cluster with z/VM on IBM Z and IBM LinuxONE 5.5.2.
Adding IBM Z(R) agents with RHEL KVM Use the following procedure to manually add IBM Z(R) agents with RHEL KVM. Only use this procedure for IBM Z(R) clusters with RHEL KVM. Procedure Boot your RHEL KVM machine. To deploy the virtual server, run the virt-install command with the following parameters: USD virt-install \ --name <vm_name> \ --autostart \ --ram=16384 \ --cpu host \ --vcpus=8 \ --location <path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img \ 1 --disk <qcow_image_path> \ --network network:macvtap ,mac=<mac_address> \ --graphics none \ --noautoconsole \ --wait=-1 \ --extra-args "rd.neednet=1 nameserver=<nameserver>" \ --extra-args "ip=<IP>::<nameserver>::<hostname>:enc1:none" \ --extra-args "coreos.live.rootfs_url=http://<http_server>:8080/agent.s390x-rootfs.img" \ --extra-args "random.trust_cpu=on rd.luks.options=discard" \ --extra-args "ignition.firstboot ignition.platform.id=metal" \ --extra-args "console=tty1 console=ttyS1,115200n8" \ --extra-args "coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" \ --osinfo detect=on,require=off 1 For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. | [
"sudo dnf install /usr/bin/nmstatectl -y",
"mkdir ~/<directory_name>",
"cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF",
"networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5",
"cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF",
"apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 bootArtifactsBaseURL: <asset_server_URL>",
"openshift-install agent create pxe-files",
"boot-artifacts ββ agent.x86_64-initrd.img ββ agent.x86_64.ipxe ββ agent.x86_64-rootfs.img ββ agent.x86_64-vmlinuz",
"rd.neednet=1 console=ttysclp0 coreos.live.rootfs_url=<rootfs_url> \\ 1 ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \\ 2 zfcp.allow_lun_scan=0 \\ 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.dasd=0.0.4411 \\ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 5 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\"",
"ipl c",
"virt-install --name <vm_name> --autostart --ram=16384 --cpu host --vcpus=8 --location <path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img \\ 1 --disk <qcow_image_path> --network network:macvtap ,mac=<mac_address> --graphics none --noautoconsole --wait=-1 --extra-args \"rd.neednet=1 nameserver=<nameserver>\" --extra-args \"ip=<IP>::<nameserver>::<hostname>:enc1:none\" --extra-args \"coreos.live.rootfs_url=http://<http_server>:8080/agent.s390x-rootfs.img\" --extra-args \"random.trust_cpu=on rd.luks.options=discard\" --extra-args \"ignition.firstboot ignition.platform.id=metal\" --extra-args \"console=tty1 console=ttyS1,115200n8\" --extra-args \"coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8\" --osinfo detect=on,require=off"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/prepare-pxe-assets-agent |
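Supplementary note on serving the PXE assets described above: the boot artifacts must be reachable over HTTP or HTTPS at the location referenced by bootArtifactsBaseURL and coreos.live.rootfs_url. The following is a minimal, hypothetical sketch only; the port 8080, the host name assets.example.com, and the use of Python's built-in HTTP server are assumptions, not part of the documented procedure.
# Sketch: serve the generated PXE assets over plain HTTP on an assumed port 8080.
cd boot-artifacts
python3 -m http.server 8080 &
# Sketch: confirm that the rootfs artifact is reachable at the URL the hosts will use
# (hypothetical host name; must match bootArtifactsBaseURL in agent-config.yaml).
curl -I http://assets.example.com:8080/agent.x86_64-rootfs.img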
Chapter 12. Managing bare metal hosts | Chapter 12. Managing bare metal hosts When you install OpenShift Container Platform on a bare metal cluster, you can provision and manage bare metal nodes using machine and machineset custom resources (CRs) for bare metal hosts that exist in the cluster. 12.1. About bare metal hosts and nodes To provision a Red Hat Enterprise Linux CoreOS (RHCOS) bare metal host as a node in your cluster, first create a MachineSet custom resource (CR) object that corresponds to the bare metal host hardware. Bare metal host machine sets describe infrastructure components specific to your configuration. You apply specific Kubernetes labels to these machine sets and then update the infrastructure components to run on only those machines. Machine CR's are created automatically when you scale up the relevant MachineSet containing a metal3.io/autoscale-to-hosts annotation. OpenShift Container Platform uses Machine CR's to provision the bare metal node that corresponds to the host as specified in the MachineSet CR. 12.2. Maintaining bare metal hosts You can maintain the details of the bare metal hosts in your cluster from the OpenShift Container Platform web console. Navigate to Compute Bare Metal Hosts , and select a task from the Actions drop down menu. Here you can manage items such as BMC details, boot MAC address for the host, enable power management, and so on. You can also review the details of the network interfaces and drives for the host. You can move a bare metal host into maintenance mode. When you move a host into maintenance mode, the scheduler moves all managed workloads off the corresponding bare metal node. No new workloads are scheduled while in maintenance mode. You can deprovision a bare metal host in the web console. Deprovisioning a host does the following actions: Annotates the bare metal host CR with cluster.k8s.io/delete-machine: true Scales down the related machine set Note Powering off the host without first moving the daemon set and unmanaged static pods to another node can cause service disruption and loss of data. Additional resources Adding compute machines to bare metal 12.2.1. Adding a bare metal host to the cluster using the web console You can add bare metal hosts to the cluster in the web console. Prerequisites Install an RHCOS cluster on bare metal. Log in as a user with cluster-admin privileges. Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New with Dialog . Specify a unique name for the new bare metal host. Set the Boot MAC address . Set the Baseboard Management Console (BMC) Address . Enter the user credentials for the host's baseboard management controller (BMC). Select to power on the host after creation, and select Create . Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machine replicas in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal machine set. 12.2.2. Adding a bare metal host to the cluster using YAML in the web console You can add bare metal hosts to the cluster in the web console using a YAML file that describes the bare metal host. Prerequisites Install a RHCOS compute machine on bare metal infrastructure for use in the cluster. Log in as a user with cluster-admin privileges. Create a Secret CR for the bare metal host. 
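A minimal sketch of the Secret CR mentioned in the prerequisites above is shown below; the name, namespace, and base64-encoded values are placeholders, and the Secret must match the credentialsName referenced by your BareMetalHost resource.
# Hypothetical example: BMC credentials Secret referenced through credentialsName.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret_credentials_name>
  namespace: openshift-machine-api
type: Opaque
data:
  username: <base64_of_uid>
  password: <base64_of_pwd>
EOF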
Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New from YAML . Copy and paste the below YAML, modifying the relevant fields with the details of your host: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address> 1 credentialsName must reference a valid Secret CR. The baremetal-operator cannot manage the bare metal host without a valid Secret referenced in the credentialsName . For more information about secrets and how to create them, see Understanding secrets . 2 Setting disableCertificateVerification to true disables TLS host validation between the cluster and the baseboard management controller (BMC). Select Create to save the YAML and create the new bare metal host. Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machines in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal machine set. 12.2.3. Automatically scaling machines to the number of available bare metal hosts To automatically create the number of Machine objects that matches the number of available BareMetalHost objects, add a metal3.io/autoscale-to-hosts annotation to the MachineSet object. Prerequisites Install RHCOS bare metal compute machines for use in the cluster, and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Annotate the machine set that you want to configure for automatic scaling by adding the metal3.io/autoscale-to-hosts annotation. Replace <machineset> with the name of the machine set. USD oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>' Wait for the new scaled machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues be counted against the MachineSet that the Machine object was created from. 12.2.4. Removing bare metal hosts from the provisioner node In certain circumstances, you might want to temporarily remove bare metal hosts from the provisioner node. For example, during provisioning when a bare metal host reboot is triggered by using the OpenShift Container Platform administration console or as a result of a Machine Config Pool update, OpenShift Container Platform logs into the integrated Dell Remote Access Controller (iDrac) and issues a delete of the job queue. To prevent the management of the number of Machine objects that matches the number of available BareMetalHost objects, add a baremetalhost.metal3.io/detached annotation to the MachineSet object. Note This annotation has an effect for only BareMetalHost objects that are in either Provisioned , ExternallyProvisioned or Ready/Available state. Prerequisites Install RHCOS bare metal compute machines for use in the cluster and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Annotate the compute machine set that you want to remove from the provisioner node by adding the baremetalhost.metal3.io/detached annotation. USD oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached' Wait for the new machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues to be counted against the MachineSet that the Machine object was created from. In the provisioning use case, remove the annotation after the reboot is complete by using the following command: USD oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-' Additional resources Expanding the cluster MachineHealthChecks on bare metal | [
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address>",
"oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/managing-bare-metal-hosts |
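As a supplement to the chapter above, which notes that the number of bare metal nodes can also be managed with the oc scale command, the following sketch shows the general shape of that command; the machine set name and replica count are placeholders.
# Sketch: list the compute machine sets, then scale the chosen one so that the
# replica count matches the number of available BareMetalHost objects.
oc get machinesets -n openshift-machine-api
oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api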
Chapter 7. Expanding the cluster | Chapter 7. Expanding the cluster After deploying an installer-provisioned OpenShift Container Platform cluster, you can use the following procedures to expand the number of worker nodes. Ensure that each prospective worker node meets the prerequisites. Note Expanding the cluster using RedFish Virtual Media involves meeting minimum firmware requirements. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details when expanding the cluster using RedFish Virtual Media. 7.1. Preparing the bare metal node To expand your cluster, you must provide the node with the relevant IP address. This can be done with a static configuration, or with a DHCP (Dynamic Host Configuration protocol) server. When expanding the cluster using a DHCP server, each node must have a DHCP reservation. Reserving IP addresses so they become static IP addresses Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "Optional: Configuring host network interfaces in the install-config.yaml file" in the "Setting up the environment for an OpenShift installation" section for additional details. Preparing the bare metal node requires executing the following procedure from the provisioner node. Procedure Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux-USDVERSION.tar.gz | tar zxvf - oc USD sudo cp oc /usr/local/bin Power off the bare metal node by using the baseboard management controller (BMC), and ensure it is off. Retrieve the user name and password of the bare metal node's baseboard management controller. Then, create base64 strings from the user name and password: USD echo -ne "root" | base64 USD echo -ne "password" | base64 Create a configuration file for the bare metal node. Depending on whether you are using a static configuration or a DHCP server, use one of the following example bmh.yaml files, replacing values in the YAML to match your environment: USD vim bmh.yaml Static configuration bmh.yaml : --- apiVersion: v1 1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 2 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 3 interfaces: 4 - name: <nic1_name> 5 type: ethernet state: up ipv4: address: - ip: <ip_address> 6 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 7 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 8 next-hop-interface: <next_hop_nic1_name> 9 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 10 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 11 password: <base64_of_pwd> 12 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 13 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 14 bmc: address: <protocol>://<bmc_url> 15 credentialsName: openshift-worker-<num>-bmc-secret 16 disableCertificateVerification: True 17 username: <bmc_username> 18 password: <bmc_password> 19 rootDeviceHints: deviceName: <root_device_hint> 20 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 21 1 To configure the network interface for a newly created node, specify the name of the secret that contains the network configuration.
Follow the nmstate syntax to define the network configuration for your node. See "Optional: Configuring host network interfaces in the install-config.yaml file" for details on configuring NMState syntax. 2 10 13 16 Replace <num> for the worker number of the bare metal node in the name fields, the credentialsName field, and the preprovisioningNetworkDataName field. 3 Add the NMState YAML syntax to configure the host interfaces. 4 Optional: If you have configured the network interface with nmstate , and you want to disable an interface, set state: up with the IP addresses set to enabled: false as shown: --- interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false 5 6 7 8 9 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. 11 12 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 14 Replace <nic1_mac_address> with the MAC address of the bare metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 15 Replace <protocol> with the BMC protocol, such as IPMI, RedFish, or others. Replace <bmc_url> with the URL of the bare metal node's baseboard management controller. 17 To skip certificate validation, set disableCertificateVerification to true. 18 19 Replace <bmc_username> and <bmc_password> with the string of the BMC user name and password. 20 Optional: Replace <root_device_hint> with a device path if you specify a root device hint. 21 Optional: If you have configured the network interface for the newly created node, provide the network configuration secret name in the preprovisioningNetworkDataName of the BareMetalHost CR. DHCP configuration bmh.yaml : --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 4 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 5 bmc: address: <protocol>://<bmc_url> 6 credentialsName: openshift-worker-<num>-bmc-secret 7 disableCertificateVerification: True 8 username: <bmc_username> 9 password: <bmc_password> 10 rootDeviceHints: deviceName: <root_device_hint> 11 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 12 1 4 7 Replace <num> for the worker number of the bare metal node in the name fields, the credentialsName field, and the preprovisioningNetworkDataName field. 2 3 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 5 Replace <nic1_mac_address> with the MAC address of the bare metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 6 Replace <protocol> with the BMC protocol, such as IPMI, RedFish, or others. Replace <bmc_url> with the URL of the bare metal node's baseboard management controller. 8 To skip certificate validation, set disableCertificateVerification to true. 9 10 Replace <bmc_username> and <bmc_password> with the string of the BMC user name and password. 11 Optional: Replace <root_device_hint> with a device path if you specify a root device hint. 12 Optional: If you have configured the network interface for the newly created node, provide the network configuration secret name in the preprovisioningNetworkDataName of the BareMetalHost CR. 
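Whichever bmh.yaml variant you use, you can optionally validate the manifests on the client side before creating them. This is a hedged sketch rather than part of the documented procedure; --dry-run=client only checks that the YAML can be parsed and submitted, it does not contact the BMC.
# Optional sketch: client-side validation of bmh.yaml before the create step.
oc -n openshift-machine-api create -f bmh.yaml --dry-run=client -o yaml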
Note If the MAC address of an existing bare metal node matches the MAC address of a bare metal host that you are attempting to provision, then the Ironic installation will fail. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. See "Diagnosing a host duplicate MAC address" for more information. Create the bare metal node: USD oc -n openshift-machine-api create -f bmh.yaml Example output secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created Where <num> will be the worker number. Power up and inspect the bare metal node: USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. Example output NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> available true Note To allow the worker node to join the cluster, scale the machineset object to the number of the BareMetalHost objects. You can scale nodes either manually or automatically. To scale nodes automatically, use the metal3.io/autoscale-to-hosts annotation for machineset . Additional resources See Optional: Configuring host network interfaces in the install-config.yaml file for details on configuring the NMState syntax. See Automatically scaling machines to the number of available bare metal hosts for details on automatically scaling machines. 7.2. Replacing a bare-metal control plane node Use the following procedure to replace an installer-provisioned OpenShift Container Platform control plane node. Important If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true . Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important Take an etcd backup before performing this procedure so that you can restore your cluster if you encounter any issues. For more information about taking an etcd backup, see the Additional resources section. Procedure Ensure that the Bare Metal Operator is available: USD oc get clusteroperator baremetal Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.14.0 True False False 3d15h Remove the old BareMetalHost and Machine objects: USD oc delete bmh -n openshift-machine-api <host_name> USD oc delete machine -n openshift-machine-api <machine_name> Replace <host_name> with the name of the host and <machine_name> with the name of the machine. The machine name appears under the CONSUMER field. After you remove the BareMetalHost and Machine objects, then the machine controller automatically deletes the Node object. 
Create the new BareMetalHost object and the secret to store the BMC credentials: USD cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: control-plane-<num>-bmc-secret 1 namespace: openshift-machine-api data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: control-plane-<num> 4 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: <protocol>://<bmc_ip> 5 credentialsName: control-plane-<num>-bmc-secret 6 bootMACAddress: <NIC1_mac_address> 7 bootMode: UEFI externallyProvisioned: false online: true EOF 1 4 6 Replace <num> for the control plane number of the bare metal node in the name fields and the credentialsName field. 2 Replace <base64_of_uid> with the base64 string of the user name. 3 Replace <base64_of_pwd> with the base64 string of the password. 5 Replace <protocol> with the BMC protocol, such as redfish , redfish-virtualmedia , idrac-virtualmedia , or others. Replace <bmc_ip> with the IP address of the bare metal node's baseboard management controller. For additional BMC configuration options, see "BMC addressing" in the Additional resources section. 7 Replace <NIC1_mac_address> with the MAC address of the bare metal node's first NIC. After the inspection is complete, the BareMetalHost object is created and available to be provisioned. View available BareMetalHost objects: USD oc get bmh -n openshift-machine-api Example output NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com available control-plane-1 true 1h10m control-plane-2.example.com externally provisioned control-plane-2 true 4h53m control-plane-3.example.com externally provisioned control-plane-3 true 4h53m compute-1.example.com provisioned compute-1-ktmmx true 4h53m compute-1.example.com provisioned compute-2-l2zmb true 4h53m There are no MachineSet objects for control plane nodes, so you must create a Machine object instead. You can copy the providerSpec from another control plane Machine object. Create a Machine object: USD cat <<EOF | oc apply -f - apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1 labels: machine.openshift.io/cluster-api-cluster: control-plane-<num> 2 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: control-plane-<num> 3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed EOF 1 2 3 Replace <num> for the control plane number of the bare metal node in the name , labels and annotations fields. 
To view the BareMetalHost objects, run the following command: USD oc get bmh -A Example output NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com provisioned control-plane-1 true 2h53m control-plane-2.example.com externally provisioned control-plane-2 true 5h53m control-plane-3.example.com externally provisioned control-plane-3 true 5h53m compute-1.example.com provisioned compute-1-ktmmx true 5h53m compute-2.example.com provisioned compute-2-l2zmb true 5h53m After the RHCOS installation, verify that the BareMetalHost is added to the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com available master 4m2s v1.27.6 control-plane-2.example.com available master 141m v1.27.6 control-plane-3.example.com available master 141m v1.27.6 compute-1.example.com available worker 87m v1.27.6 compute-2.example.com available worker 87m v1.27.6 Note After replacement of the new control plane node, the etcd pod running in the new node is in crashloopback status. See "Replacing an unhealthy etcd member" in the Additional resources section for more information. Additional resources Replacing an unhealthy etcd member Backing up etcd Bare metal configuration BMC addressing 7.3. Preparing to deploy with Virtual Media on the baremetal network If the provisioning network is enabled and you want to expand the cluster using Virtual Media on the baremetal network, use the following procedure. Prerequisites There is an existing cluster with a baremetal network and a provisioning network. Procedure Edit the provisioning custom resource (CR) to enable deploying with Virtual Media on the baremetal network: oc edit provisioning apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: creationTimestamp: "2021-08-05T18:51:50Z" finalizers: - provisioning.metal3.io generation: 8 name: provisioning-configuration resourceVersion: "551591" uid: f76e956f-24c6-4361-aa5b-feaf72c5b526 spec: provisioningDHCPRange: 172.22.0.10,172.22.0.254 provisioningIP: 172.22.0.3 provisioningInterface: enp1s0 provisioningNetwork: Managed provisioningNetworkCIDR: 172.22.0.0/24 virtualMediaViaExternalNetwork: true 1 status: generations: - group: apps hash: "" lastGeneration: 7 name: metal3 namespace: openshift-machine-api resource: deployments - group: apps hash: "" lastGeneration: 1 name: metal3-image-cache namespace: openshift-machine-api resource: daemonsets observedGeneration: 8 readyReplicas: 0 1 Add virtualMediaViaExternalNetwork: true to the provisioning CR. If the image URL exists, edit the machineset to use the API VIP address. This step only applies to clusters installed in versions 4.9 or earlier. 
oc edit machineset apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: "2021-08-05T18:51:52Z" generation: 11 labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: ostest-hwmdt-worker-0 namespace: openshift-machine-api resourceVersion: "551513" uid: fad1c6e0-b9da-4d4a-8d73-286f78788931 spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 1 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 2 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data status: availableReplicas: 2 fullyLabeledReplicas: 2 observedGeneration: 11 readyReplicas: 2 replicas: 2 1 Edit the checksum URL to use the API VIP address. 2 Edit the url URL to use the API VIP address. 7.4. Diagnosing a duplicate MAC address when provisioning a new host in the cluster If the MAC address of an existing bare-metal node in the cluster matches the MAC address of a bare-metal host you are attempting to add to the cluster, the Bare Metal Operator associates the host with the existing node. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. A registration error is displayed for the failed bare-metal host. You can diagnose a duplicate MAC address by examining the bare-metal hosts that are running in the openshift-machine-api namespace. Prerequisites Install an OpenShift Container Platform cluster on bare metal. Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Procedure To determine whether a bare-metal host that fails provisioning has the same MAC address as an existing node, do the following: Get the bare-metal hosts running in the openshift-machine-api namespace: USD oc get bmh -n openshift-machine-api Example output NAME STATUS PROVISIONING STATUS CONSUMER openshift-master-0 OK externally provisioned openshift-zpwpq-master-0 openshift-master-1 OK externally provisioned openshift-zpwpq-master-1 openshift-master-2 OK externally provisioned openshift-zpwpq-master-2 openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm openshift-worker-2 error registering To see more detailed information about the status of the failing host, run the following command replacing <bare_metal_host_name> with the name of the host: USD oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml Example output ... status: errorCount: 12 errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1 errorType: registration error ... 7.5. Provisioning the bare metal node Provisioning the bare metal node requires executing the following procedure from the provisioner node. 
Procedure Ensure the STATE is available before provisioning the bare metal node. USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. NAME STATE ONLINE ERROR AGE openshift-worker available true 34h Get a count of the number of worker nodes. USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.27.3 openshift-master-2.openshift.example.com Ready master 30h v1.27.3 openshift-master-3.openshift.example.com Ready master 30h v1.27.3 openshift-worker-0.openshift.example.com Ready worker 30h v1.27.3 openshift-worker-1.openshift.example.com Ready worker 30h v1.27.3 Get the compute machine set. USD oc get machinesets -n openshift-machine-api NAME DESIRED CURRENT READY AVAILABLE AGE ... openshift-worker-0.example.com 1 1 1 1 55m openshift-worker-1.example.com 1 1 1 1 55m Increase the number of worker nodes by one. USD oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api Replace <num> with the new number of worker nodes. Replace <machineset> with the name of the compute machine set from the step. Check the status of the bare metal node. USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. The STATE changes from ready to provisioning . NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true The provisioning status remains until the OpenShift Container Platform cluster provisions the node. This can take 30 minutes or more. After the node is provisioned, the state will change to provisioned . NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned openshift-worker-<num>-65tjz true After provisioning completes, ensure the bare metal node is ready. USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.27.3 openshift-master-2.openshift.example.com Ready master 30h v1.27.3 openshift-master-3.openshift.example.com Ready master 30h v1.27.3 openshift-worker-0.openshift.example.com Ready worker 30h v1.27.3 openshift-worker-1.openshift.example.com Ready worker 30h v1.27.3 openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.27.3 You can also check the kubelet. USD ssh openshift-worker-<num> [kni@openshift-worker-<num>]USD journalctl -fu kubelet | [
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux-USDVERSION.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"echo -ne \"root\" | base64",
"echo -ne \"password\" | base64",
"vim bmh.yaml",
"--- apiVersion: v1 1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 2 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 3 interfaces: 4 - name: <nic1_name> 5 type: ethernet state: up ipv4: address: - ip: <ip_address> 6 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 7 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 8 next-hop-interface: <next_hop_nic1_name> 9 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 10 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 11 password: <base64_of_pwd> 12 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 13 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 14 bmc: address: <protocol>://<bmc_url> 15 credentialsName: openshift-worker-<num>-bmc-secret 16 disableCertificateVerification: True 17 username: <bmc_username> 18 password: <bmc_password> 19 rootDeviceHints: deviceName: <root_device_hint> 20 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 21",
"--- interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 4 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 5 bmc: address: <protocol>://<bmc_url> 6 credentialsName: openshift-worker-<num>-bmc-secret 7 disableCertificateVerification: True 8 username: <bmc_username> 9 password: <bmc_password> 10 rootDeviceHints: deviceName: <root_device_hint> 11 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 12",
"oc -n openshift-machine-api create -f bmh.yaml",
"secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> available true",
"oc get clusteroperator baremetal",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.14.0 True False False 3d15h",
"oc delete bmh -n openshift-machine-api <host_name> oc delete machine -n openshift-machine-api <machine_name>",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: control-plane-<num>-bmc-secret 1 namespace: openshift-machine-api data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: control-plane-<num> 4 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: <protocol>://<bmc_ip> 5 credentialsName: control-plane-<num>-bmc-secret 6 bootMACAddress: <NIC1_mac_address> 7 bootMode: UEFI externallyProvisioned: false online: true EOF",
"oc get bmh -n openshift-machine-api",
"NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com available control-plane-1 true 1h10m control-plane-2.example.com externally provisioned control-plane-2 true 4h53m control-plane-3.example.com externally provisioned control-plane-3 true 4h53m compute-1.example.com provisioned compute-1-ktmmx true 4h53m compute-1.example.com provisioned compute-2-l2zmb true 4h53m",
"cat <<EOF | oc apply -f - apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1 labels: machine.openshift.io/cluster-api-cluster: control-plane-<num> 2 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: control-plane-<num> 3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed EOF",
"oc get bmh -A",
"NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com provisioned control-plane-1 true 2h53m control-plane-2.example.com externally provisioned control-plane-2 true 5h53m control-plane-3.example.com externally provisioned control-plane-3 true 5h53m compute-1.example.com provisioned compute-1-ktmmx true 5h53m compute-2.example.com provisioned compute-2-l2zmb true 5h53m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com available master 4m2s v1.27.6 control-plane-2.example.com available master 141m v1.27.6 control-plane-3.example.com available master 141m v1.27.6 compute-1.example.com available worker 87m v1.27.6 compute-2.example.com available worker 87m v1.27.6",
"edit provisioning",
"apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: creationTimestamp: \"2021-08-05T18:51:50Z\" finalizers: - provisioning.metal3.io generation: 8 name: provisioning-configuration resourceVersion: \"551591\" uid: f76e956f-24c6-4361-aa5b-feaf72c5b526 spec: provisioningDHCPRange: 172.22.0.10,172.22.0.254 provisioningIP: 172.22.0.3 provisioningInterface: enp1s0 provisioningNetwork: Managed provisioningNetworkCIDR: 172.22.0.0/24 virtualMediaViaExternalNetwork: true 1 status: generations: - group: apps hash: \"\" lastGeneration: 7 name: metal3 namespace: openshift-machine-api resource: deployments - group: apps hash: \"\" lastGeneration: 1 name: metal3-image-cache namespace: openshift-machine-api resource: daemonsets observedGeneration: 8 readyReplicas: 0",
"edit machineset",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: \"2021-08-05T18:51:52Z\" generation: 11 labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: ostest-hwmdt-worker-0 namespace: openshift-machine-api resourceVersion: \"551513\" uid: fad1c6e0-b9da-4d4a-8d73-286f78788931 spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 1 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 2 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data status: availableReplicas: 2 fullyLabeledReplicas: 2 observedGeneration: 11 readyReplicas: 2 replicas: 2",
"oc get bmh -n openshift-machine-api",
"NAME STATUS PROVISIONING STATUS CONSUMER openshift-master-0 OK externally provisioned openshift-zpwpq-master-0 openshift-master-1 OK externally provisioned openshift-zpwpq-master-1 openshift-master-2 OK externally provisioned openshift-zpwpq-master-2 openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm openshift-worker-2 error registering",
"oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml",
"status: errorCount: 12 errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1 errorType: registration error",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE ONLINE ERROR AGE openshift-worker available true 34h",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.27.3 openshift-master-2.openshift.example.com Ready master 30h v1.27.3 openshift-master-3.openshift.example.com Ready master 30h v1.27.3 openshift-worker-0.openshift.example.com Ready worker 30h v1.27.3 openshift-worker-1.openshift.example.com Ready worker 30h v1.27.3",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE openshift-worker-0.example.com 1 1 1 1 55m openshift-worker-1.example.com 1 1 1 1 55m",
"oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned openshift-worker-<num>-65tjz true",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.27.3 openshift-master-2.openshift.example.com Ready master 30h v1.27.3 openshift-master-3.openshift.example.com Ready master 30h v1.27.3 openshift-worker-0.openshift.example.com Ready worker 30h v1.27.3 openshift-worker-1.openshift.example.com Ready worker 30h v1.27.3 openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.27.3",
"ssh openshift-worker-<num>",
"[kni@openshift-worker-<num>]USD journalctl -fu kubelet"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-expanding-the-cluster |
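Because provisioning a new worker can take 30 minutes or more, it is often convenient to watch the BareMetalHost and node objects instead of polling them manually. The following is a minimal sketch; the --watch behavior is standard oc get functionality and not specific to this procedure.
# Sketch: watch the BareMetalHost objects until the new worker reports provisioned,
# then watch for the corresponding node to become Ready.
oc -n openshift-machine-api get bmh --watch
oc get nodes --watch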
7.236. spice-xpi | 7.236. spice-xpi 7.236.1. RHBA-2013:0459 - spice-xpi bug fix update Updated spice-xpi packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The spice-xpi packages provide the Simple Protocol for Independent Computing Environments (SPICE) extension for Mozilla that allows the SPICE client to be used from a web browser. Bug Fixes BZ# 805602 Previously, spice-xpi did not check port validity. Consequently, if an invalid port number was provided, spice-xpi sent it to the client. With this update, spice-xpi checks validity of provided port numbers, warns about invalid ports, and does not run the client if both ports are invalid. BZ# 810583 Previously, the disconnect() function failed to terminate a SPICE client when invoked. The underlying source code has been modified and disconnect() now works as expected in the described scenario. All users of spice-xpi are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/spice-xpi |
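To apply the erratum described above, upgrading the package through yum is typically sufficient; this is a generic sketch, and the exact package version delivered by RHBA-2013:0459 is not repeated here.
# Sketch: pull in the fixed spice-xpi packages on Red Hat Enterprise Linux 6.
yum update spice-xpi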