title | content | commands | url |
---|---|---|---|
2.2. RHEA-2011:0664 - new package: PyPAM | 2.2. RHEA-2011:0664 - new package: PyPAM A new PyPAM package is now available for Red Hat Enterprise Linux 6. PyPAM is a Python module that provides an interface to the pluggable authentication modules (PAM). These bindings allow Python applications to authorize, authenticate, and manage user sessions through the system's PAM configuration. This enhancement update adds the PyPAM package to Red Hat Enterprise Linux 6. (BZ# 667127 ) All users requiring PyPAM should install this newly-released package, which adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/pypam_new |
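As a brief illustration of the PyPAM note above, the following sketch shows a typical install and a quick smoke test. The Python module name (`PAM`) is an assumption; confirm the packaged files with `rpm -ql PyPAM`.

```bash
# Install the newly released package on RHEL 6 (assumes subscribed yum repositories)
yum install PyPAM

# Optional smoke test: PyPAM conventionally ships a Python module imported as "PAM"
# (module name is an assumption; verify with: rpm -ql PyPAM)
python -c 'import PAM; print(PAM.__file__)'
```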
6.7. Security | 6.7. Security openssl component, BZ# 1022002 The external Advanced Encryption Standard (AES) New Instructions (AES-NI) engine is no longer available in openssl; the engine is now built-in and therefore no longer needs to be manually enabled. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/security_issues |
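Two hedged, read-only checks related to the openssl note above; output varies by CPU and build, and no engine configuration is required because AES-NI support is now built in.

```bash
# The external "aesni" engine should no longer appear because the support is built in
openssl engine

# Exercise the AES code path through the EVP interface; on AES-NI capable CPUs the
# built-in acceleration is used automatically
openssl speed -evp aes-128-cbc
```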
Chapter 1. Quarkus Security overview | Chapter 1. Quarkus Security overview Quarkus Security is a framework that provides the architecture, multiple authentication and authorization mechanisms, and other tools to build secure and production-quality Java applications. Before building security into your Quarkus applications, learn about the Quarkus Security architecture and the different authentication mechanisms and features you can use. 1.1. Key features of Quarkus Security The Quarkus Security framework provides built-in security authentication mechanisms for Basic, Form-based, and mutual TLS (mTLS) authentication. You can also use other well-known authentication mechanisms , such as OpenID Connect (OIDC). Authentication mechanisms depend on Identity providers to verify the authentication credentials and map them to a SecurityIdentity instance with the username, roles, original authentication credentials, and other attributes. Red Hat build of Quarkus also includes built-in security to allow for role-based access control (RBAC) based on the common security annotations @RolesAllowed , @DenyAll , @PermitAll on REST endpoints, and Contexts and Dependency Injection (CDI) beans. For more information, see the Quarkus Authorization of web endpoints guide. Quarkus Security also supports the following features: Proactive authentication Secure connections with SSL/TLS Cross-origin resource sharing Cross-Site Request Forgery (CSRF) prevention SameSite cookies Secrets engines Secure auto-generated resources by REST Data with Panache Secure serialization Security vulnerability detection and National Vulnerability Database (NVD) registration Quarkus Security is also highly customizable. For more information, see the Quarkus Security tips and tricks guide. 1.2. Getting started with Quarkus Security To get started with security in Quarkus, consider securing your Quarkus application endpoints with the built-in Quarkus Basic authentication and the Jakarta Persistence identity provider and enabling role-based access control. Complete the steps in the Getting started with Security by using Basic authentication and Jakarta Persistence tutorial. After successfully securing your Quarkus application with Basic authentication, you can increase the security further by adding more advanced authentication mechanisms, for example, the Quarkus OpenID Connect (OIDC) authorization code flow mechanism guide. 1.3. Quarkus Security testing For guidance on testing Quarkus Security features and ensuring that your Quarkus applications are securely protected, see the Security testing guide. 1.4. More about security features in Quarkus 1.4.1. Cross-origin resource sharing To make your Quarkus application accessible to another application running on a different domain, you need to configure cross-origin resource sharing (CORS). For more information about the CORS filter Quarkus provides, see the CORS filter section of the Quarkus "Cross-origin resource sharing" guide. 1.4.2. Cross-Site Request Forgery (CSRF) prevention Quarkus Security provides a RESTEasy Reactive filter that can protect your applications against a Cross-Site Request Forgery attack. For more information, see the Quarkus Cross-Site Request Forgery Prevention guide. 1.4.3. SameSite cookies You can add a SameSite cookie property to any of the cookies set by a Quarkus endpoint. For more information, see the SameSite cookies section of the Quarkus "HTTP reference" guide. 1.4.4. 
Secrets engines You can use secrets engines with Quarkus to store, generate, or encrypt data. Quarkus provides additional extensions in Quarkiverse for securely storing credentials, for example, Quarkus and HashiCorp Vault . 1.5. Secrets in environment properties Quarkus provides support to store secrets in environment properties. For more information, see the Quarkus store secrets in an environment properties file guide. 1.5.1. Secure serialization If your Quarkus Security architecture includes RESTEasy Reactive and Jackson, Quarkus can limit the fields included in JSON serialization based on the configured security. For more information, see the JSON serialization section of the Quarkus "Writing REST services with RESTEasy Reactive" guide. 1.5.2. Secure auto-generated resources by REST Data with Panache If you use the REST Data with Panache extension to auto-generate your resources, you can still use security annotations within the package jakarta.annotation.security . For more information, see the Securing endpoints section of the Quarkus "Generating Jakarta REST resources with Panache" guide. 1.6. Security vulnerability detection Most Quarkus tags get reported in the US National Vulnerability Database (NVD) . For information about security vulnerabilities, see the Security vulnerability detection and reporting in Quarkus guide. 1.7. References Basic authentication Getting started with Security by using Basic authentication and Jakarta Persistence Protect a web application by using OIDC authorization code flow Protect a service application by using OIDC Bearer token authentication | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/security_overview/security-overview |
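As a small, hedged illustration of the getting-started path described above, the command below adds the security extension used by the Basic authentication and Jakarta Persistence tutorial to an existing project. The extension name is an assumption; check the extension catalog for the exact artifact ID shipped with Red Hat build of Quarkus 3.8.

```bash
# Add the security extension for the Basic auth + Jakarta Persistence tutorial
# (extension name assumed; list what is available with: ./mvnw quarkus:list-extensions)
./mvnw quarkus:add-extension -Dextensions="quarkus-security-jpa"
```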
15.2. Removing Swap Space | 15.2. Removing Swap Space Sometimes it can be prudent to reduce swap space after installation. For example, you have downgraded the amount of RAM in your system from 1 GB to 512 MB, but there is 2 GB of swap space still assigned. It might be advantageous to reduce the amount of swap space to 1 GB, since the larger 2 GB could be wasting disk space. You have three options: remove an entire LVM2 logical volume used for swap, remove a swap file, or reduce swap space on an existing LVM2 logical volume. 15.2.1. Reducing Swap on an LVM2 Logical Volume To reduce an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you want to reduce): Procedure 15.3. Reducing an LVM2 Swap Logical Volume Disable swapping for the associated logical volume: Reduce the LVM2 logical volume by 512 MB: Format the new swap space: Activate swap on the logical volume: To test if the swap logical volume was successfully reduced, inspect active swap space: 15.2.2. Removing an LVM2 Logical Volume for Swap To remove a swap volume group (assuming /dev/VolGroup00/LogVol02 is the swap volume you want to remove): Procedure 15.4. Remove a Swap Volume Group Disable swapping for the associated logical volume: Remove the LVM2 logical volume: Remove the following associated entry from the /etc/fstab file: Regenerate mount units so that your system registers the new configuration: Remove all references to the removed swap storage from the /etc/default/grub file: Rebuild the grub configuration: on BIOS-based machines, run: on UEFI-based machines, run: To test if the logical volume was successfully removed, inspect active swap space: 15.2.3. Removing a Swap File To remove a swap file: Procedure 15.5. Remove a Swap File At a shell prompt, execute the following command to disable the swap file (where /swapfile is the swap file): Remove its entry from the /etc/fstab file. Regenerate mount units so that your system registers the new configuration: Remove the actual file: | [
"swapoff -v /dev/VolGroup00/LogVol01",
"lvreduce /dev/VolGroup00/LogVol01 -L -512M",
"mkswap /dev/VolGroup00/LogVol01",
"swapon -v /dev/VolGroup00/LogVol01",
"cat /proc/swaps free -h",
"swapoff -v /dev/VolGroup00/LogVol02",
"lvremove /dev/VolGroup00/LogVol02",
"/dev/VolGroup00/LogVol02 swap swap defaults 0 0",
"systemctl daemon-reload",
"vi /etc/default/grub",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg",
"cat /proc/swaps free -h",
"swapoff -v /swapfile",
"systemctl daemon-reload",
"rm /swapfile"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/s1-swap-removing |
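The commands above are listed step by step; the following is a consolidated, hedged sketch of the swap-file removal procedure (Procedure 15.5). It assumes the swap file is /swapfile and keeps a backup of /etc/fstab before editing it.

```bash
# 1. Disable the swap file
swapoff -v /swapfile

# 2. Remove its /etc/fstab entry (keeps a .bak backup; pattern assumes the entry starts with /swapfile)
sed -i.bak '/^\/swapfile[[:space:]]/d' /etc/fstab

# 3. Regenerate mount units so systemd registers the new configuration
systemctl daemon-reload

# 4. Remove the file itself and confirm that no swap references remain
rm /swapfile
cat /proc/swaps
```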
25.15. Scanning iSCSI Interconnects | 25.15. Scanning iSCSI Interconnects For iSCSI, if the targets send an iSCSI async event indicating new storage is added, then the scan is done automatically. However, if the targets do not send an iSCSI async event, you need to manually scan them using the iscsiadm utility. Before doing so, however, you need to first retrieve the proper --targetname and the --portal values. If your device model supports only a single logical unit and portal per target, use iscsiadm to issue a sendtargets command to the host, as in: The output will appear in the following format: Example 25.11. Using iscsiadm to issue a sendtargets Command For example, on a target with a proper_target_name of iqn.1992-08.com.netapp:sn.33615311 and a target_IP:port of 10.15.85.19:3260 , the output may appear as: In this example, the target has two portals, each using target_ip:port s of 10.15.84.19:3260 and 10.15.85.19:3260 . To see which iface configuration will be used for each session, add the -P 1 option. This option will print also session information in tree format, as in: Example 25.12. View iface Configuration For example, with iscsiadm -m discovery -t sendtargets -p 10.15.85.19:3260 -P 1 , the output may appear as: This means that the target iqn.1992-08.com.netapp:sn.33615311 will use iface2 as its iface configuration. With some device models a single target may have multiple logical units and portals. In this case, issue a sendtargets command to the host first to find new portals on the target. Then, rescan the existing sessions using: You can also rescan a specific session by specifying the session's SID value, as in: If your device supports multiple targets, you will need to issue a sendtargets command to the hosts to find new portals for each target. Rescan existing sessions to discover new logical units on existing sessions using the --rescan option. Important The sendtargets command used to retrieve --targetname and --portal values overwrites the contents of the /var/lib/iscsi/nodes database. This database will then be repopulated using the settings in /etc/iscsi/iscsid.conf . However, this will not occur if a session is currently logged in and in use. To safely add new targets/portals or delete old ones, use the -o new or -o delete options, respectively. For example, to add new targets/portals without overwriting /var/lib/iscsi/nodes , use the following command: To delete /var/lib/iscsi/nodes entries that the target did not display during discovery, use: You can also perform both tasks simultaneously, as in: The sendtargets command will yield the following output: Example 25.13. Output of the sendtargets Command For example, given a device with a single target, logical unit, and portal, with equallogic-iscsi1 as your target_name , the output should appear similar to the following: Note that proper_target_name and ip:port,target_portal_group_tag are identical to the values of the same name in Section 25.7.1, "iSCSI API" . At this point, you now have the proper --targetname and --portal values needed to manually scan for iSCSI devices. To do so, run the following command: Example 25.14. Full iscsiadm Command Using our example (where proper_target_name is equallogic-iscsi1 ), the full command would be: [7] For information on how to retrieve a session's SID value, refer to Section 25.7.1, "iSCSI API" . [8] This is a single command split into multiple lines, to accommodate printed and PDF versions of this document. 
All concatenated lines - preceded by the backslash (\) - should be treated as one command, sans backslashes. | [
"iscsiadm -m discovery -t sendtargets -p target_IP:port [5]",
"target_IP:port , target_portal_group_tag proper_target_name",
"10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311",
"Target: proper_target_name Portal: target_IP:port , target_portal_group_tag Iface Name: iface_name",
"Target: iqn.1992-08.com.netapp:sn.33615311 Portal: 10.15.84.19:3260,2 Iface Name: iface2 Portal: 10.15.85.19:3260,3 Iface Name: iface2",
"iscsiadm -m session --rescan",
"iscsiadm -m session -r SID --rescan [7]",
"iscsiadm -m discovery -t st -p target_IP -o new",
"iscsiadm -m discovery -t st -p target_IP -o delete",
"iscsiadm -m discovery -t st -p target_IP -o delete -o new",
"ip:port,target_portal_group_tag proper_target_name",
"10.16.41.155:3260,0 iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1",
"iscsiadm --mode node --targetname proper_target_name --portal ip:port,target_portal_group_tag \\ --login [8]",
"iscsiadm --mode node --targetname \\ iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \\ --portal 10.16.41.155:3260,0 --login [8]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/iscsi-scanning-interconnects |
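To tie the pieces above together, here is a hedged end-to-end sequence built from the section's own examples. The portal address and IQN are placeholders taken from Example 25.13 and must be replaced with the values your discovery returns.

```bash
# Discover targets behind a portal (overwrites /var/lib/iscsi/nodes unless -o new / -o delete is used)
iscsiadm -m discovery -t sendtargets -p 10.16.41.155:3260

# Log in to the discovered target (a single command shown on multiple lines)
iscsiadm --mode node \
         --targetname iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \
         --portal 10.16.41.155:3260,0 \
         --login

# Later, rescan the session to pick up newly added logical units
iscsiadm -m session --rescan
```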
Chapter 1. OpenShift image registry overview | Chapter 1. OpenShift image registry overview OpenShift Container Platform can build images from your source code, deploy them, and manage their lifecycle. It provides an internal, integrated container image registry that can be deployed in your OpenShift Container Platform environment to locally manage images. This overview contains reference information and links for registries commonly used with OpenShift Container Platform, with a focus on the OpenShift image registry. 1.1. Glossary of common terms for OpenShift image registry This glossary defines the common terms that are used in the registry content. container Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers in a data center, a public or private cloud, or your local host. image repository An image repository is a collection of related container images and tags identifying images. mirror registry The mirror registry is a registry that holds the mirror of OpenShift Container Platform images. namespace A namespace isolates groups of resources within a single cluster. pod The pod is the smallest logical unit in Kubernetes. A pod is composed of one or more containers that run in a worker node. private registry A registry is a server that implements the container image registry API. A private registry is a registry that requires authentication to allow users to access its contents. public registry A registry is a server that implements the container image registry API. A public registry is a registry that serves its content publicly. Quay.io A public Red Hat Quay Container Registry instance provided and maintained by Red Hat, which serves most of the container images and Operators to OpenShift Container Platform clusters. OpenShift image registry OpenShift image registry is the registry provided by OpenShift Container Platform to manage images. registry authentication To push and pull images to and from private image repositories, the registry needs to authenticate its users with credentials. route Exposes a service to allow for network access to pods from users and applications outside the OpenShift Container Platform instance. scale down To decrease the number of replicas. scale up To increase the number of replicas. service A service exposes a running application on a set of pods. 1.2. Integrated OpenShift image registry OpenShift Container Platform provides a built-in container image registry that runs as a standard workload on the cluster. The registry is configured and managed by an infrastructure Operator. It provides an out-of-the-box solution for users to manage the images that run their workloads, and runs on top of the existing cluster infrastructure. This registry can be scaled up or down like any other cluster workload and does not require specific infrastructure provisioning. In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources. The registry is typically used as a publication target for images built on the cluster, as well as being a source of images for workloads running on the cluster. When a new image is pushed to the registry, the cluster is notified of the new image and other components can react to and consume the updated image. Image data is stored in two locations.
The actual image data is stored in a configurable storage location, such as cloud storage or a filesystem volume. The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and image streams. Additional resources Image Registry Operator in OpenShift Container Platform 1.3. Third-party registries OpenShift Container Platform can create containers using images from third-party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift image registry. In this situation, OpenShift Container Platform will fetch tags from the remote registry upon image stream creation. To refresh the fetched tags, run oc import-image <stream> . When new images are detected, the previously described build and deployment reactions occur. 1.3.1. Authentication OpenShift Container Platform can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift Container Platform to push and pull images to and from private repositories. 1.3.1.1. Registry authentication with Podman Some container image registries require access authorization. Podman is an open source tool for managing containers and container images and interacting with image registries. You can use Podman to authenticate your credentials, pull the registry image, and store local images in a local file system. The following is a generic example of authenticating the registry with Podman. Procedure Use the Red Hat Ecosystem Catalog to search for specific container images from the Red Hat Repository and select the required image. Click Get this image to find the command for your container image. Log in by running the following command and entering your username and password to authenticate: USD podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password> Download the image and save it locally by running the following command: USD podman pull registry.redhat.io/<repository_name> 1.4. Red Hat Quay registries If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images. Visit the Quay.io site to set up your own hosted Quay registry account. After that, follow the Quay Tutorial to log in to the Quay registry and start managing your images. You can access your Red Hat Quay registry from OpenShift Container Platform like any remote container image registry. Additional resources Red Hat Quay product documentation 1.5. Authentication enabled Red Hat registry All container images available through the Container images section of the Red Hat Ecosystem Catalog are hosted on an image registry, registry.redhat.io . The registry, registry.redhat.io , requires authentication for access to images and hosted content on OpenShift Container Platform. Following the move to the new registry, the existing registry will be available for a period of time. Note OpenShift Container Platform pulls images from registry.redhat.io , so you must configure your cluster to use it. The new registry uses standard OAuth mechanisms for authentication, with the following methods: Authentication token. 
Tokens, which are generated by administrators, are service accounts that give systems the ability to authenticate against the container image registry. Service accounts are not affected by changes in user accounts, so the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters. Web username and password. This is the standard set of credentials you use to log in to resources such as access.redhat.com . While it is possible to use this authentication method with OpenShift Container Platform, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside OpenShift Container Platform. You can use podman login with your credentials, either username and password or authentication token, to access content on the new registry. All image streams point to the new registry, which uses the installation pull secret to authenticate. You must place your credentials in either of the following places: openshift namespace . Your credentials must exist in the openshift namespace so that the image streams in the openshift namespace can import. Your host . Your credentials must exist on your host because Kubernetes uses the credentials from your host when it goes to pull images. Additional resources Registry service accounts | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/registry/registry-overview-1 |
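Beyond pulling from registry.redhat.io, Podman can also authenticate against the integrated OpenShift image registry itself. The sketch below rests on assumptions: it presumes the Image Registry Operator has exposed the default route (assumed to be named default-route in the openshift-image-registry namespace) and that your user can obtain a token with `oc whoami -t`.

```bash
# Resolve the registry route host (route name and namespace are assumptions; adjust to your cluster)
HOST=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')

# Log in with your OpenShift token, then pull an image stream tag from the "openshift" namespace
podman login -u "$(oc whoami)" -p "$(oc whoami -t)" "$HOST"
podman pull "$HOST/openshift/<imagestream_name>:<tag>"
```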
Chapter 16. Servers and Services | Chapter 16. Servers and Services chrony rebased to version 3.1 The chrony package has been upgraded to upstream version 3.1, which provides a number of bug fixes and enhancements over the version. Notable enhancements include: Added support for software and hardware timestamping for improved accuracy (sub-microsecond accuracy may be possible). Improved accuracy with asymmetric network jitter. Added support for interleaved mode. Added support for configuration and monitoring over Unix domain socket to replace authentication with command key (remote configuration is no longer possible). Improved automatic replacement of servers. Added orphan mode compatible with the ntpd daemon. Added response rate limiting for NTP servers. Added detailed manual pages, which replace the documentation in the info format. (BZ#1387223) linuxptp rebased to version 1.8 The linuxptp packages have been upgraded to upstream version 1.8, which provides a number of bug fixes and enhancements over the version. Notable enhancements include: Added support for hybrid end-to-end (E2E) delay measurements using unicast messages to reduce network traffic in large networks. Added support for running a boundary clock (BC) using independent Precision Time Protocol (PTP) hardware clocks. Added options to configure Time to Live (TTL) and Differentiated Services Code Point (DSCP) of PTP messages. (BZ# 1359311 ) tuned rebased to version 2.8.0 The tuned packages have been upgraded to upstream version 2.8.0, which provides a number of bug fixes and enhancements over the version. Notable changes include the following: CPU partitioning profile has been added. Support for cores isolation has been added. Support for initrd overlays has been added. Inheritance has been improved. RegExp device matching based on the udev device manager has been implemented. (BZ# 1388454 , BZ# 1395855 , BZ# 1395899 , BZ# 1408308 , BZ# 1394965 ) logrotate now uses /var/lib/logrotate/logrotate.status as the default state file Previously, the logrotate cron job used a modified path to the logrotate state file. Consequently, the path used by the cron job did not match the default state file path used by logrotate itself. To prevent confusion, the default state file path used by logrotate has been changed to match the state file path used by logrotate cron job . As a result, logrotate now uses /var/lib/logrotate/logrotate.status as the default state file path in both scenarios. (BZ# 1381719 ) rsyslog rebased to version 8.24.0 The rsyslog utility has been rebased to upstream version 8.24.0, which includes numerous enhancements, new features and bug fixes. Notable improvements include: A new core engine has been implemented, offering faster message processing. Speed and stability when handling data in the JSON format have been improved. The RainerScript configuration format has been selected as default and improved with more options. A new mmexternal module for manipulation of messages inside rsyslog using external applications has been added. The omprog module has received improvements for better communication with external binaries. Modules imrelp and omrelp now support encrypted transmission using the TLS protocol. The imuxsock module now supports rule sets for individual sockets, which override the global rule set. When the imuxsock module is used, rate limiting messages now include PID of the process that causes the rate limiting. The TCP server error messages now include the IP address of the remote host. 
The imjournal module no longer stops receiving logs after switching to the persistent journald configuration. Logging to the runtime journal no longer completely stops after a reboot when the machine's clock was set to an earlier time. Previously, when the logrotate utility with the copytruncate option was rotating a log file, the imfile module might not have read all of the log messages from the file being rotated. As a consequence, these log messages were lost. The imfile module has been extended to handle this situation. As a result, messages are no longer lost when logrotate copytruncate is used on log files. Customers using custom modules are advised to update their modules for the current rsyslog version. See also the Deprecated Functionality chapter for information about deprecated rsyslog options. (BZ# 1313490 , BZ#1174345, BZ# 1053641 , BZ#1196230, BZ#1326216, BZ# 1088021 , BZ# 1419228 , BZ# 1133687 ) New cache configuration options for mod_nss This update adds new options to control caching of OCSP responses to the mod_nss module. The new options allow the user to control: Time to wait for OCSP responses Size of the OCSP cache Minimum and maximum duration for an item's presence in cache, including not caching at all (BZ# 1392582 ) Database and prefix options have been removed from nss_pcache The nss_pcache pin-caching service no longer shares the Network Security Services (NSS) database of the mod_nss Apache module because nss_pcache does not need access to the tokens. The options for the NSS database and the prefix have been removed and are now handled automatically by mod_nss . (BZ#1382102) New package: libfastjson This update introduces the libfastjson library as a replacement for the json-c library for rsyslog . The limited feature set of libfastjson allows for greatly improved performance compared to json-c . (BZ#1395145) tuned now supports initrd overlays tuned now supports initrd overlays, which can extend default (Dracut) initrd images. It is supported by the bootloader plugin. The following example shows typical usage in a Tuned profile: This adds the content of the overlay.img directory to the current initrd when the profile is activated. (BZ# 1414098 ) openwsman now supports disabling of particular SSL protocols Previously, there was no way to disable particular SSL protocols with the openwsman utility. A new configuration file option for a list of disabled protocols has been added. As a result, it is now possible to disable particular SSL protocols through the openwsman configuration file. (BZ#1190689) rear rebased to version 2.0 Updated rear packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 7. Notable changes include: The Cyclic Redundancy Check (CRC) feature is now enabled by default on the XFS file systems. Previously, rear ignored this change in behavior, and formatted the /boot partition with an incompatible UUID flag. This caused the recovery process to fail. With this rebase, rear checks for the CRC feature, and properly preserves UUID during recovery. Support for the GRUB and GRUB2 boot loaders for IBM Power Systems architecture has been added. Linux capabilities are now preserved if the directive NETFS_RESTORE_CAPABILITIES is set to the y option in the /usr/share/rear/conf/default.conf configuration file. CIFS credentials are now preserved in the rescue image.
GRUB_SUPERUSER and GRUB_RESCUE_PASSWORD directives have been dropped to avoid possible unexpected behaviour change of the GRUB2 bootloader in the currently running system. Documentation has been improved. Creation of multiple backups have been enabled. (BZ# 1355667 ) python-tornado rebased to version 4.2.1 The python-tornado package has been upgraded to upstream version 4.2.1, which provides a number of bug fixes and new features over the version. Notable changes include: A new tornado.netutil.Resolver class, which provides an asynchronous interface to DNS resolution A new tornado.tcpclient module, which creates TCP connections with non-blocking DNS, SSL handshaking, and support for IPv6 The IOLoop.instance() function is now thread-safe Logging has been improved; low-level logs are less frequent; Tornado uses its own logger instead of the root logger, which enables more detailed configuration Multiple reference cycles have been separated within python-tornado , enabling more efficient garbage collection on CPython Coroutines are now faster and are used extensively within Tornado . (BZ# 1158617 ) | [
"[bootloader] initrd_add_dir=USD{i:PROFILE_DIR}/overlay.img"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/new_features_servers_and_services |
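A few hedged verification commands for the rebases described above; all of them are read-only and safe to run on a RHEL 7.4 system.

```bash
# chrony 3.1: check synchronisation status and configured sources
chronyc tracking
chronyc sources -v

# tuned 2.8.0: show the currently active profile
tuned-adm active

# logrotate: the new default state file path
ls -l /var/lib/logrotate/logrotate.status
```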
2. Related Documentation | 2. Related Documentation For more information about using Red Hat Enterprise Linux, refer to the following resources: Red Hat Enterprise Linux Installation Guide - Provides information regarding installation of Red Hat Enterprise Linux. Red Hat Enterprise Linux Introduction to System Administration - Provides introductory information for new Red Hat Enterprise Linux system administrators. Red Hat Enterprise Linux System Administration Guide - Provides more detailed information about configuring Red Hat Enterprise Linux to suit your particular needs as a user. Red Hat Enterprise Linux Reference Guide - Provides detailed information suited for more experienced users to reference when needed, as opposed to step-by-step instructions. Red Hat Enterprise Linux Security Guide - Details the planning and the tools involved in creating a secured computing environment for the data center, workplace, and home. For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux, refer to the following resources: Red Hat Cluster Suite Overview - Provides a high level overview of the Red Hat Cluster Suite. Configuring and Managing a Red Hat Cluster - Provides information about installing, configuring and managing Red Hat Cluster components. Global File System: Configuration and Administration - Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System). LVM Administrator's Guide: Configuration and Administration - Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. Using GNBD with Global File System - Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS. Linux Virtual Server Administration - Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS). Red Hat Cluster Suite Release Notes - Provides information about the current release of Red Hat Cluster Suite. Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML and PDF versions online at the following location: http://www.redhat.com/docs | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/related_documentation-mpio |
Deploying different types of servers | Deploying different types of servers Red Hat Enterprise Linux 8 Setting up and configuring web servers and reverse proxies, network file services, database servers, mail transport agents, and printers Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/index |
Chapter 2. Differences between java and alt-java | Chapter 2. Differences between java and alt-java Similarities exist between the alt-java and java binaries, with the exception of the Speculative Store Bypass (SSB) mitigation. Although the SSB mitigation patch exists only for the x86-64 architecture (Intel and AMD), the alt-java binary exists on all architectures. For non-x86 architectures, the alt-java binary is identical to the java binary, except that alt-java carries no mitigation patches. Additional resources For more information about similarities between alt-java and java , see RH1750419 in the Red Hat Bugzilla documentation. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_alt-java/diff-java-and-altjava |
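A quick, hedged way to confirm that both launchers are present and report the same runtime; alt-java accepts the same arguments as java.

```bash
# Compare the two launchers; on non-x86_64 architectures the output is expected to be identical
java -version
alt-java -version

# Show where each binary resolves to on the current system
readlink -f "$(command -v java)" "$(command -v alt-java)"
```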
19.4. mount Command References | 19.4. mount Command References The following resources provide an in-depth documentation on the subject. Manual Page Documentation man 8 mount : The manual page for the mount command that provides a full documentation on its usage. man 8 umount : The manual page for the umount command that provides a full documentation on its usage. man 8 findmnt : The manual page for the findmnt command that provides a full documentation on its usage. man 5 fstab : The manual page providing a thorough description of the /etc/fstab file format. Useful Websites Shared subtrees - An LWN article covering the concept of shared subtrees. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/sect-Using_the_mount_Command-Additional_Resources |
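For convenience, a short sketch that opens the referenced manual pages and shows two findmnt queries of the kind they document.

```bash
# Open the referenced manual pages
man 8 mount
man 5 fstab

# findmnt examples: show how the root filesystem is mounted,
# and list only the filesystems defined in /etc/fstab
findmnt /
findmnt --fstab
```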
Chapter 4. BuildLog [build.openshift.io/v1] | Chapter 4. BuildLog [build.openshift.io/v1] Description BuildLog is the (unused) resource associated with the build log redirector. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 4.2. API endpoints The following API endpoints are available: /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/log GET : read log of the specified Build 4.2.1. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/log Table 4.1. Global path parameters Parameter Type Description name string name of the BuildLog HTTP method GET Description read log of the specified Build Table 4.2. HTTP responses HTTP code Response body 200 - OK BuildLog schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/workloads_apis/buildlog-build-openshift-io-v1 |
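A hedged example of reading a build log through the documented endpoint, both with curl and through the oc client; the namespace and build name are placeholders.

```bash
# Read the log of a build through the documented REST endpoint
API=$(oc whoami --show-server)
curl -k -H "Authorization: Bearer $(oc whoami -t)" \
  "$API/apis/build.openshift.io/v1/namespaces/<namespace>/builds/<build_name>/log"

# The oc client wraps the same endpoint
oc logs build/<build_name> -n <namespace>
```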
Chapter 2. Tapset Development Guidelines | Chapter 2. Tapset Development Guidelines This chapter describes the upstream guidelines on proper tapset documentation. It also contains information on how to properly document your tapsets, to ensure that they are properly defined in this guide. 2.1. Writing Good Tapsets The first step to writing good tapsets is to create a simple model of your subject area. For example, a model of the process subsystem might include the following: Key Data process ID parent process ID process group ID State Transitions forked exec'd running stopped terminated Note Both lists are examples, and are not meant to represent a complete list. Use your subsystem expertise to find probe points (function entries and exits) that expose the elements of the model, then define probe aliases for those points. Be aware that some state transitions can occur in more than one place. In those cases, an alias can place a probe in multiple locations. For example, process execs can occur in either the do_execve() or the compat_do_execve() functions. The following alias inserts probes at the beginning of those functions: Try to place probes on stable interfaces (i.e., functions that are unlikely to change at the interface level) whenever possible. This will make the tapset less likely to break due to kernel changes. Where kernel version or architecture dependencies are unavoidable, use preprocessor conditionals (see the stap(1) man page for details). Fill in the probe bodies with the key data available at the probe points. Function entry probes can access the entry parameters specified to the function, while exit probes can access the entry parameters and the return value. Convert the data into meaningful forms where appropriate (e.g., bytes to kilobytes, state values to strings, etc). You may need to use auxiliary functions to access or convert some of the data. Auxiliary functions often use embedded C to do things that cannot be done in the SystemTap language, like access structure fields in some contexts, follow linked lists, etc. You can use auxiliary functions defined in other tapsets or write your own. In the following example, copy_process() returns a pointer to the task_struct for the new process. Note that the process ID of the new process is retrieved by calling task_pid() and passing it the task_struct pointer. In this case, the auxiliary function is an embedded C function defined in task.stp . It is not advisable to write probes for every function. Most SystemTap users will not need or understand them. Keep your tapsets simple and high-level. | [
"probe kprocess.exec = kernel.function(\"do_execve\"), kernel.function(\"compat_do_execve\") { probe body }",
"probe kprocess.create = kernel.function(\"copy_process\").return { task = USDreturn new_pid = task_pid(task) }"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/tapset_dev_guide |
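Two hedged helper commands that are useful while developing aliases like the ones above: listing the probe points an alias would attach to, and running a one-line script against the kprocess.exec alias shipped in the standard tapsets.

```bash
# List matching probe points and the context variables available to a probe body
stap -L 'kernel.function("do_execve")'

# One-liner using the kprocess.exec alias (assumes the standard tapsets are installed)
stap -e 'probe kprocess.exec { printf("exec by %s (pid %d)\n", execname(), pid()) }'
```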
Chapter 144. KafkaNodePool schema reference | Chapter 144. KafkaNodePool schema reference Property Description spec The specification of the KafkaNodePool. KafkaNodePoolSpec status The status of the KafkaNodePool. KafkaNodePoolStatus | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkanodepool-reference |
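For orientation, a minimal KafkaNodePool manifest applied with oc. The apiVersion, label, and field names below are assumptions based on the Streams for Apache Kafka CRDs; treat the schema reference above as authoritative.

```bash
# Sketch of a three-replica broker pool attached to a Kafka cluster named "my-cluster"
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
EOF
```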
Chapter 1. The company single sign-on feature | Chapter 1. The company single sign-on feature The company SSO feature integrates your company SSO with Red Hat SSO. This integration allows existing Red Hat users to authenticate to Red Hat with their company SSO credentials. Note Company single sign-on is not a self-service feature. Contact your Red Hat account team to learn more about how to enable this service for your company. 1.1. What is company single sign-on? Company single sign-on is an integration between the Red Hat single sign-on system and your organization's identity provider (IdP). This type of integration is commonly known as "3rd party IdP" or "federated IdP." It enables users in your organization with existing Red Hat logins to sign into Red Hat services and applications that use sso.redhat.com for authentication, such as Customer Portal , Hybrid Cloud Console , and training-lms.redhat.com using their company SSO login credentials - the same credentials they use to access their company's internal apps and resources. Any Red Hat website, app, or service using sso.redhat.com for authentication is accessible through company single sign-on integration. 1.2. Benefits of the Red Hat company single sign-on integration Organization Administrators can use this feature for compliance and security reasons because authentication security protocols for Red Hat services can be managed directly by the organization by means of the authentication requirements of its own single sign-on system. Using the company single sign-on feature provides a better authentication user experience for end users. End users themselves can maintain one less set of login credentials. Currently, company single sign-on integration has the following scope: Link one company IdP with one Red Hat organization account. Link one company user identity with one Red Hat user identity. Use corporate SSO/IdP to authenticate to the Red Hat Customer Portal or any Red Hat application with a web-based authentication flow which uses sso.redhat.com . OpenID Connect (OIDC) is supported. Security Assertion Markup Language (SAML) is supported. 1.3. Limitations of the Red Hat company single sign-on integration Some Red Hat services are not compatible with single sign-on integration. This means that you can revoke a user's corporate IdP credentials, but the username and password can still be used to authenticate to some Red Hat services. To completely remove a user's access to all Red Hat services, you must use the user management tool to deactivate the user account. A deactivated account can no longer be used to access Red Hat services. User management is available by clicking your account avatar to open the account information page. You must be an Organization Administrator to use the user management tools. Users must be created through currently supported methods to take advantage of company single sign-on integration. Company single sign-on integration does not support auto-registration of users. Users without accounts in the customer IdP will not be able to authenticate. For example, this can affect vendor relationships where today the vendor user has a Red Hat login within the customer's Red Hat company account. Once company single sign-on is enabled, if the customer is not willing or able to allow the vendor user to have an account in the customer IdP, the vendor user will no longer be able to log in. 
| null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/using_company_single_sign-on_integration/con-ciam-user-about_company-single-sign-on |
Chapter 1. Creating a GitHub application for RHTAP | Chapter 1. Creating a GitHub application for RHTAP Creating a GitHub application for RHTAP allows developers to authenticate to Red Hat Developer Hub, which is the user interface (UI) where they can use RHTAP. This GitHub application also allows RHTAP to access developers' source code that is hosted on GitHub. Keep in mind that you must create and install the new application in a GitHub organization that you own and want to use for your instance of Red Hat Trusted Application Pipeline. RHTAP can subsequently create new repositories within that organization, to serve as the source code for the applications it builds. Prerequisites Ownership of a GitHub organization Procedure Log in to GitHub and go to your organizations ( Settings > Organizations ). Click on an organization that you own and want to use for this instance of RHTAP. Or you can select New organization to create a new organization. In the organization context, navigate to the GitHub Apps page ( Settings > Developer settings > GitHub Apps ). Near the top banner, on the right side of the page, select New GitHub App . If prompted, authenticate as needed. In the GitHub App name field, enter a unique name. In the Homepage URL field, enter a placeholder value, for example, https://www.placeholder.com . In the Callback URL field, enter a placeholder value. You can use the same placeholder value, for example, https://www.placeholder.com . In the Webhook URL field, enter a placeholder value. You can use the same placeholder value, for example, https://www.placeholder.com . Also, ensure that the Active checkbox is checked (GitHub should do this by default). Create a new file on your local system, in which you save several values that you need for later steps in the installation process. When you enter values in this file, make sure to label them, so you can remember what each value is later on. In your CLI, generate a secret, then label and save it in ~/install_values.txt . If you do not have OpenSSL, you can follow the download instructions . Important Be sure to save the output of this command! In GitHub, in the Webhook secret field, enter the output of the last command. Under Repository permissions , set the following permissions: Administration: Read and write Checks: Read and write Contents: Read and write Issues: Read and write Metadata: Read-only (this should already be set correctly, but verify its value) Pull requests: Read and write Under Organization permissions , set the following permissions: Members: Read-only Plan: Read-only Under Subscribe to events , select the following subscriptions: Check run Check suite Commit comment Issue comment Pull request Push Under Where can this GitHub App be installed? select Any account . Click Create GitHub App . You should then see the Developer Settings page. Retrieve the Client ID and Application ID. Label and save them in your ~/install_values.txt . Important The next two steps explain how to gather a client secret and a private key. You must save the client secret and private key, and keep them accessible, to complete the installation process for RHTAP! On your new application's page, next to Client secrets , select Generate a new client secret . Label and save the client secret in ~/install_values.txt . On the same page in GitHub, under Private keys , select the Generate a private key button. Your system downloads a private-key file, which contains the private key. Label and save the content of the private key file in ~/install_values.txt .
The private key should start with -----BEGIN RSA PRIVATE KEY----- , and end with -----END RSA PRIVATE KEY----- . Still on the same page in GitHub, from the tabs on the left-hand side, select Install App . Use the green Install button next to the name of your organization. When prompted, select All repositories , so RHTAP can create new repositories in your organization. Click the green Install button. Additional resources The procedure in this document is based on the Pipelines as Code documentation for creating a GitHub application. | [
"touch ~/install_values.txt",
"openssl rand -hex 20 >> ~/install_values.txt"
] | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/installing_red_hat_trusted_application_pipeline/creating-a-github-application |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so: For simple comments on specific passages, make sure you are viewing the documentation in the HTML format. Highlight the part of text that you want to comment on. Then, click the Add Feedback pop-up that appears below the highlighted text, and follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. As the Component, use Documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/managing_systems_using_the_rhel_7_web_console/proc_providing-feedback-on-red-hat-documentation_system-management-using-the-rhel-7-web-console |
Chapter 9. Authentication and authorization for hosted control planes | Chapter 9. Authentication and authorization for hosted control planes The OpenShift Container Platform control plane includes a built-in OAuth server. You can obtain OAuth access tokens to authenticate to the OpenShift Container Platform API. After you create your hosted cluster, you can configure OAuth by specifying an identity provider. 9.1. Configuring the OAuth server for a hosted cluster by using the CLI You can configure the internal OAuth server for your hosted cluster by using an OpenID Connect identity provider ( oidc ). You can configure OAuth for the following supported identity providers: oidc htpasswd keystone ldap basic-authentication request-header github gitlab google Adding any identity provider in the OAuth configuration removes the default kubeadmin user provider. Note When you configure identity providers, you must configure at least one NodePool replica in your hosted cluster in advance. Traffic for DNS resolution is sent through the worker nodes. You do not need to configure the NodePool replicas in advance for the htpasswd and request-header identity providers. Prerequisites You created your hosted cluster. Procedure Edit the HostedCluster custom resource (CR) on the hosting cluster by running the following command: USD oc edit <hosted_cluster_name> -n <hosted_cluster_namespace> Add the OAuth configuration in the HostedCluster CR by using the following example: apiVersion: hypershift.openshift.io/v1alpha1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: oauth: identityProviders: - openID: 3 claims: email: 4 - <email_address> name: 5 - <display_name> preferredUsername: 6 - <preferred_username> clientID: <client_id> 7 clientSecret: name: <client_id_secret_name> 8 issuer: https://example.com/identity 9 mappingMethod: lookup 10 name: IAM type: OpenID 1 Specifies your hosted cluster name. 2 Specifies your hosted cluster namespace. 3 This provider name is prefixed to the value of the identity claim to form an identity name. The provider name is also used to build the redirect URL. 4 Defines a list of attributes to use as the email address. 5 Defines a list of attributes to use as a display name. 6 Defines a list of attributes to use as a preferred user name. 7 Defines the ID of a client registered with the OpenID provider. You must allow the client to redirect to the https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name> URL. 8 Defines a secret of a client registered with the OpenID provider. 9 The Issuer Identifier described in the OpenID spec. You must use https without query or fragment component. 10 Defines a mapping method that controls how mappings are established between identities of this provider and User objects. Save the file to apply the changes. 9.2. Configuring the OAuth server for a hosted cluster by using the web console You can configure the internal OAuth server for your hosted cluster by using the OpenShift Container Platform web console. You can configure OAuth for the following supported identity providers: oidc htpasswd keystone ldap basic-authentication request-header github gitlab google Adding any identity provider in the OAuth configuration removes the default kubeadmin user provider. Note When you configure identity providers, you must configure at least one NodePool replica in your hosted cluster in advance. 
Traffic for DNS resolution is sent through the worker nodes. You do not need to configure the NodePool replicas in advance for the htpasswd and request-header identity providers. Prerequisites You logged in as a user with cluster-admin privileges. You created your hosted cluster. Procedure Navigate to Home API Explorer . Use the Filter by kind box to search for your HostedCluster resource. Click the HostedCluster resource that you want to edit. Click the Instances tab. Click the Options menu to your hosted cluster name entry and click Edit HostedCluster . Add the OAuth configuration in the YAML file: spec: configuration: oauth: identityProviders: - openID: 1 claims: email: 2 - <email_address> name: 3 - <display_name> preferredUsername: 4 - <preferred_username> clientID: <client_id> 5 clientSecret: name: <client_id_secret_name> 6 issuer: https://example.com/identity 7 mappingMethod: lookup 8 name: IAM type: OpenID 1 This provider name is prefixed to the value of the identity claim to form an identity name. The provider name is also used to build the redirect URL. 2 Defines a list of attributes to use as the email address. 3 Defines a list of attributes to use as a display name. 4 Defines a list of attributes to use as a preferred user name. 5 Defines the ID of a client registered with the OpenID provider. You must allow the client to redirect to the https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name> URL. 6 Defines a secret of a client registered with the OpenID provider. 7 The Issuer Identifier described in the OpenID spec. You must use https without query or fragment component. 8 Defines a mapping method that controls how mappings are established between identities of this provider and User objects. Click Save . Additional resources To know more about supported identity providers, see "Understanding identity provider configuration" in Authentication and authorization . 9.3. Assigning components IAM roles by using the CCO in a hosted cluster on AWS You can assign components IAM roles that provide short-term, limited-privilege security credentials by using the Cloud Credential Operator (CCO) in hosted clusters on Amazon Web Services (AWS). By default, the CCO runs in a hosted control plane. Note The CCO supports a manual mode only for hosted clusters on AWS. By default, hosted clusters are configured in a manual mode. The management cluster might use modes other than manual. 9.4. Verifying the CCO installation in a hosted cluster on AWS You can verify that the Cloud Credential Operator (CCO) is running correctly in your hosted control plane. Prerequisites You configured the hosted cluster on Amazon Web Services (AWS). Procedure Verify that the CCO is configured in a manual mode in your hosted cluster by running the following command: USD oc get cloudcredentials <hosted_cluster_name> \ -n <hosted_cluster_namespace> \ -o=jsonpath={.spec.credentialsMode} Expected output Manual Verify that the value for the serviceAccountIssuer resource is not empty by running the following command: USD oc get authentication cluster --kubeconfig <hosted_cluster_name>.kubeconfig \ -o jsonpath --template '{.spec.serviceAccountIssuer }' Example output https://aos-hypershift-ci-oidc-29999.s3.us-east-2.amazonaws.com/hypershift-ci-29999 9.5. 
Enabling Operators to support CCO-based workflows with AWS STS As an Operator author designing your project to run on Operator Lifecycle Manager (OLM), you can enable your Operator to authenticate against AWS on STS-enabled OpenShift Container Platform clusters by customizing your project to support the Cloud Credential Operator (CCO). With this method, the Operator is responsible for and requires RBAC permissions for creating the CredentialsRequest object and reading the resulting Secret object. Note By default, pods related to the Operator deployment mount a serviceAccountToken volume so that the service account token can be referenced in the resulting Secret object. Prerequisities OpenShift Container Platform 4.14 or later Cluster in STS mode OLM-based Operator project Procedure Update your Operator project's ClusterServiceVersion (CSV) object: Ensure your Operator has RBAC permission to create CredentialsRequests objects: Example 9.1. Example clusterPermissions list # ... install: spec: clusterPermissions: - rules: - apiGroups: - "cloudcredential.openshift.io" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch Add the following annotation to claim support for this method of CCO-based workflow with AWS STS: # ... metadata: annotations: features.operators.openshift.io/token-auth-aws: "true" Update your Operator project code: Get the role ARN from the environment variable set on the pod by the Subscription object. For example: // Get ENV var roleARN := os.Getenv("ROLEARN") setupLog.Info("getting role ARN", "role ARN = ", roleARN) webIdentityTokenPath := "/var/run/secrets/openshift/serviceaccount/token" Ensure you have a CredentialsRequest object ready to be patched and applied. For example: Example 9.2. Example CredentialsRequest object creation import ( minterv1 "github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1" corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ "s3:*", }, Effect: "Allow", Resource: "arn:aws:s3:*:*:*", }, }, STSIAMRoleARN: "<role_arn>", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = "<credential_request_name>" namespace = "<namespace_name>" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: "openshift-cloud-credential-operator", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: "<secret_name>", Namespace: namespace, }, ServiceAccountNames: []string{ "<service_account_name>", }, CloudTokenPath: "", }, } Alternatively, if you are starting from a CredentialsRequest object in YAML form (for example, as part of your Operator project code), you can handle it differently: Example 9.3. 
Example CredentialsRequest object creation in YAML form // CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:"apiVersion"` Kind string `yaml:"kind"` Metadata struct { Name string `yaml:"name"` Namespace string `yaml:"namespace"` } `yaml:"metadata"` Spec struct { SecretRef struct { Name string `yaml:"name"` Namespace string `yaml:"namespace"` } `yaml:"secretRef"` ProviderSpec struct { APIVersion string `yaml:"apiVersion"` Kind string `yaml:"kind"` StatementEntries []struct { Effect string `yaml:"effect"` Action []string `yaml:"action"` Resource string `yaml:"resource"` } `yaml:"statementEntries"` STSIAMRoleARN string `yaml:"stsIAMRoleARN"` } `yaml:"providerSpec"` // added new field CloudTokenPath string `yaml:"cloudTokenPath"` } `yaml:"spec"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil } Note Adding a CredentialsRequest object to the Operator bundle is not currently supported. Add the role ARN and web identity token path to the credentials request and apply it during Operator initialization: Example 9.4. Example applying CredentialsRequest object during Operator initialization // apply CredentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, "unable to create CredRequest") os.Exit(1) } } Ensure your Operator can wait for a Secret object to show up from the CCO, as shown in the following example, which is called along with the other items you are reconciling in your Operator: Example 9.5. 
Example wait for Secret object // WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 "k8s.io/api/core/v1" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf("timed out waiting for secret %s in namespace %s", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } } 1 The timeout value is based on an estimate of how fast the CCO might detect an added CredentialsRequest object and generate a Secret object. You might consider lowering the time or creating custom feedback for cluster administrators that could be wondering why the Operator is not yet accessing the cloud resources. Set up the AWS configuration by reading the secret created by the CCO from the credentials request and creating the AWS config file containing the data from that secret: Example 9.6. Example AWS configuration creation func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data["credentials"]) > 0: data = secret.Data["credentials"] default: return "", errors.New("invalid secret for aws credentials") } f, err := ioutil.TempFile("", "aws-shared-credentials") if err != nil { return "", errors.Wrap(err, "failed to create file for shared credentials") } defer f.Close() if _, err := f.Write(data); err != nil { return "", errors.Wrapf(err, "failed to write credentials to %s", f.Name()) } return f.Name(), nil } Important The secret is assumed to exist, but your Operator code should wait and retry when using this secret to give time to the CCO to create the secret. Additionally, the wait period should eventually time out and warn users that the OpenShift Container Platform cluster version, and therefore the CCO, might be an earlier version that does not support the CredentialsRequest object workflow with STS detection. In such cases, instruct users that they must add a secret by using another method. Configure the AWS SDK session, for example: Example 9.7. Example AWS SDK session configuration sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, } Additional resources Cluster Operators reference page for the Cloud Credential Operator | [
"oc edit <hosted_cluster_name> -n <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: oauth: identityProviders: - openID: 3 claims: email: 4 - <email_address> name: 5 - <display_name> preferredUsername: 6 - <preferred_username> clientID: <client_id> 7 clientSecret: name: <client_id_secret_name> 8 issuer: https://example.com/identity 9 mappingMethod: lookup 10 name: IAM type: OpenID",
"spec: configuration: oauth: identityProviders: - openID: 1 claims: email: 2 - <email_address> name: 3 - <display_name> preferredUsername: 4 - <preferred_username> clientID: <client_id> 5 clientSecret: name: <client_id_secret_name> 6 issuer: https://example.com/identity 7 mappingMethod: lookup 8 name: IAM type: OpenID",
"oc get cloudcredentials <hosted_cluster_name> -n <hosted_cluster_namespace> -o=jsonpath={.spec.credentialsMode}",
"Manual",
"oc get authentication cluster --kubeconfig <hosted_cluster_name>.kubeconfig -o jsonpath --template '{.spec.serviceAccountIssuer }'",
"https://aos-hypershift-ci-oidc-29999.s3.us-east-2.amazonaws.com/hypershift-ci-29999",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"",
"// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"",
"import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }",
"// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }",
"// apply CredentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }",
"sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/authentication-and-authorization-for-hosted-control-planes |
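A note on the ROLEARN environment variable used in the preceding procedure: the Go snippet reads it with os.Getenv("ROLEARN"), and it is typically injected into the Operator deployment through the config.env field of the Subscription object. The following Subscription sketch illustrates that wiring; the package name, namespace, channel, and role ARN are placeholders for illustration, not values taken from the procedure above:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <operator_name>
  namespace: <operator_namespace>
spec:
  channel: stable
  name: <operator_package_name>
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
    - name: ROLEARN          # read by the Operator with os.Getenv("ROLEARN")
      value: "arn:aws:iam::<aws_account_id>:role/<role_name>"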
Chapter 9. Connecting to an instance | Chapter 9. Connecting to an instance You can access an instance from a location external to the cloud by using a remote shell such as SSH or WinRM, when you have allowed the protocol in the instance security group rules. You can also connect directly to the console of an instance, so that you can debug even if the network connection fails. Note If you did not provide a key pair to the instance, or allocate a security group to the instance, you can access the instance only from inside the cloud by using VNC. You cannot ping the instance. 9.1. Accessing an instance console You can connect directly to the VNC console for an instance by entering the VNC console URL in a browser. Procedure To display the VNC console URL for an instance, enter the following command: To connect directly to the VNC console, enter the displayed URL in a browser. 9.2. Logging in to an instance You can log in to public instances remotely. Prerequisites You have the key pair certificate for the instance. The certificate is downloaded when the key pair is created. If you did not create the key pair yourself, ask your administrator. The instance is configured as a public instance. For more information on the requirements of a public instance, see Providing public access to an instance . You have a cloud user account. Procedure Retrieve the floating IP address of the instance you want to log in to: Replace <instance> with the name or ID of the instance that you want to connect to. Use the automatically created cloud-user account to log in to your instance: Replace <keypair> with the name of the key pair. Replace <floating_ip> with the floating IP address of the instance. Tip You can use the following command to log in to an instance without the floating IP address: Replace <keypair> with the name of the key pair. Replace <instance> with the name or ID of the instance that you want to connect to. | [
"openstack console url show <vm_name> +-------+------------------------------------------------------+ | Field | Value | +-------+------------------------------------------------------+ | type | novnc | | url | http://172.25.250.50:6080/vnc_auto.html?token= | | | 962dfd71-f047-43d3-89a5-13cb88261eb9 | +-------+-------------------------------------------------------+",
"openstack server show <instance>",
"ssh -i ~/.ssh/<keypair>.pem cloud-user@<floatingIP> [cloud-user@demo-server1 ~]USD",
"openstack server ssh --login cloud-user --identity ~/.ssh/<keypair>.pem --private <instance>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_instances/assembly_connecting-to-an-instance_instances |
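If the prerequisites above are not yet in place, you can create the key pair and open SSH access before launching the instance. The following commands are a minimal sketch that assumes an existing public key at ~/.ssh/id_rsa.pub and use of the default security group; substitute the key pair and security group names used in your environment:

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub <keypair>
$ openstack security group rule create --protocol tcp --dst-port 22 default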
Chapter 8. Supported components | Chapter 8. Supported components For a list of component versions that are supported in this release of Red Hat JBoss Web Server, see the JBoss Web Server Component Details page. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_release_notes/supported_components |
Chapter 9. Configuring certificate mapping rules in Identity Management | Chapter 9. Configuring certificate mapping rules in Identity Management Certificate mapping rules are a convenient way of allowing users to authenticate using certificates in scenarios when the Identity Management (IdM) administrator does not have access to certain users' certificates. This is typically because the certificates have been issued by an external certificate authority. 9.1. Certificate mapping rules for configuring authentication You might need to configure certificate mapping rules in the following scenarios: Certificates have been issued by the Certificate System of the Active Directory (AD) with which the IdM domain is in a trust relationship. Certificates have been issued by an external certificate authority. The IdM environment is large with many users using smart cards. In this case, adding full certificates can be complicated. The subject and issuer are predictable in most scenarios and therefore easier to add ahead of time than the full certificate. As a system administrator, you can create a certificate mapping rule and add certificate mapping data to a user entry even before a certificate is issued to a particular user. Once the certificate is issued, the user can log in using the certificate even though the full certificate has not yet been uploaded to the user entry. In addition, as certificates are renewed at regular intervals, certificate mapping rules reduce administrative overhead. When a user's certificate is renewed, the administrator does not have to update the user entry. For example, if the mapping is based on the Subject and Issuer values, and if the new certificate has the same subject and issuer as the old one, the mapping still applies. If, in contrast, the full certificate was used, then the administrator would have to upload the new certificate to the user entry to replace the old one. To set up certificate mapping: An administrator has to load the certificate mapping data or the full certificate into a user account. An administrator has to create a certificate mapping rule to allow successful logging into IdM for a user whose account contains a certificate mapping data entry that matches the information on the certificate. Once the certificate mapping rules have been created, when the end-user presents the certificate, stored either on a filesystem or a smart card , authentication is successful. Note The Key Distribution Center (KDC) has a cache for certificate mapping rules. The cache is populated on the first certauth request and it has a hard-coded timeout of 300 seconds. KDC will not see any changes to certificate mapping rules unless it is restarted or the cache expires. For details on the individual components that make up a mapping rule and how to obtain and use them, see Components of an identity mapping rule in IdM and Obtaining the issuer from a certificate for use in a matching rule . Note Your certificate mapping rules can depend on the use case for which you are using the certificate. For example, if you are using SSH with certificates, you must have the full certificate to extract the public key from the certificate. 9.2. Components of an identity mapping rule in IdM You configure different components when creating an identity mapping rule in IdM. Each component has a default value that you can override. You can define the components in either the web UI or the CLI. In the CLI, the identity mapping rule is created using the ipa certmaprule-add command. 
Mapping rule The mapping rule component associates (or maps ) a certificate with one or more user accounts. The rule defines an LDAP search filter that associates a certificate with the intended user account. Certificates issued by different certificate authorities (CAs) might have different properties and might be used in different domains. Therefore, IdM does not apply mapping rules unconditionally, but only to the appropriate certificates. The appropriate certificates are defined using matching rules . Note that if you leave the mapping rule option empty, the certificates are searched in the userCertificate attribute as a DER encoded binary file. Define the mapping rule in the CLI using the --maprule option. Matching rule The matching rule component selects a certificate to which you want to apply the mapping rule. The default matching rule matches certificates with the digitalSignature key usage and clientAuth extended key usage. Define the matching rule in the CLI using the --matchrule option. Domain list The domain list specifies the identity domains in which you want IdM to search the users when processing identity mapping rules. If you leave the option unspecified, IdM searches the users only in the local domain to which the IdM client belongs. Define the domain in the CLI using the --domain option. Priority When multiple rules are applicable to a certificate, the rule with the highest priority takes precedence. All other rules are ignored. The lower the numerical value, the higher the priority of the identity mapping rule. For example, a rule with a priority 1 has higher priority than a rule with a priority 2. If a rule has no priority value defined, it has the lowest priority. Define the mapping rule priority in the CLI using the --priority option. Certificate mapping rule example To define, using the CLI, a certificate mapping rule called simple_rule that allows authentication for a certificate issued by the Smart Card CA of the EXAMPLE.ORG organization if the Subject on that certificate matches a certmapdata entry in a user account in IdM: 9.3. Obtaining data from a certificate for use in a matching rule This procedure describes how to obtain data from a certificate so that you can copy and paste it into the matching rule of a certificate mapping rule. To get data required by a matching rule, use the sssctl cert-show or sssctl cert-eval-rule commands. Prerequisites You have the user certificate in PEM format. Procedure Create a variable pointing to your certificate that also ensures it is correctly encoded so you can retrieve the required data. Use the sssctl cert-eval-rule to determine the matching data. In the following example the certificate serial number is used. In this case, add everything after altSecurityIdentities= to the altSecurityIdentities attribute in AD for the user. If using SKI mapping, use --map='LDAPU1:(altSecurityIdentities=X509:<SKI>{subject_key_id!hex_u})' . Optional: To create a new mapping rule in the CLI based on a matching rule which specifies that the certificate issuer must match adcs19-WIN1-CA of the ad.example.com domain and the serial number of the certificate must match the altSecurityIdentities entry in a user account: 9.4. 
Configuring certificate mapping for users stored in IdM To enable certificate mapping in IdM if the user for whom certificate authentication is being configured is stored in IdM, a system administrator must complete the following tasks: Set up a certificate mapping rule so that IdM users with certificates that match the conditions specified in the mapping rule and in their certificate mapping data entries can authenticate to IdM. Enter certificate mapping data to an IdM user entry so that the user can authenticate using multiple certificates provided that they all contain the values specified in the certificate mapping data entry. Prerequisites The user has an account in IdM. The administrator has either the whole certificate or the certificate mapping data to add to the user entry. 9.4.1. Adding a certificate mapping rule in the IdM web UI Log in to the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 9.1. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. For example, to make IdM search for the Issuer and Subject entries in any certificate presented to them, and base its decision to authenticate or not on the information found in these two entries of the presented certificate: Enter the matching rule. For example, to only allow certificates issued by the Smart Card CA of the EXAMPLE.ORG organization to authenticate users to IdM: Figure 9.2. Entering the details for a certificate mapping rule in the IdM web UI Click Add at the bottom of the dialog box to add the rule and close the box. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: Now you have a certificate mapping rule set up that compares the type of data specified in the mapping rule that it finds on a smart card certificate with the certificate mapping data in your IdM user entries. Once it finds a match, it authenticates the matching user. 9.4.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. For example, to make IdM search for the Issuer and Subject entries in any certificate presented, and base its decision to authenticate or not on the information found in these two entries of the presented certificate, recognizing only certificates issued by the Smart Card CA of the EXAMPLE.ORG organization: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: Now you have a certificate mapping rule set up that compares the type of data specified in the mapping rule that it finds on a smart card certificate with the certificate mapping data in your IdM user entries. Once it finds a match, it authenticates the matching user. 9.4.3. Adding certificate mapping data to a user entry in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Users Active users idm_user . Find the Certificate mapping data option and click Add . Choose one of the following options: If you have the certificate of idm_user : On the command line, display the certificate using the cat utility or a text editor: Copy the certificate. In the IdM web UI, click Add to Certificate and paste the certificate into the window that opens up. Figure 9.3. 
Adding a user's certificate mapping data: certificate If you do not have the certificate of idm_user at your disposal but know the Issuer and the Subject of the certificate, check the radio button of Issuer and subject and enter the values in the two respective boxes. Figure 9.4. Adding a user's certificate mapping data: issuer and subject Click Add . Verification If you have access to the whole certificate in the .pem format, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of idm_user in the SSSD cache and force a reload of the idm_user information: Run the ipa certmap-match command with the name of the file containing the certificate of the IdM user: The output confirms that now you have certificate mapping data added to idm_user and that a corresponding mapping rule exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as idm_user . 9.4.4. Adding certificate mapping data to a user entry in the IdM CLI Obtain the administrator's credentials: Choose one of the following options: If you have the certificate of idm_user , add the certificate to the user account using the ipa user-add-cert command: If you do not have the certificate of idm_user but know the Issuer and the Subject of the user's certificate: Verification If you have access to the whole certificate in the .pem format, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of idm_user in the SSSD cache and force a reload of the idm_user information: Run the ipa certmap-match command with the name of the file containing the certificate of the IdM user: The output confirms that now you have certificate mapping data added to idm_user and that a corresponding mapping rule exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as idm_user . 9.5. Certificate mapping rules for trusts with Active Directory domains Different certificate mapping use cases are possible if an IdM deployment is in a trust relationship with an Active Directory (AD) domain. Depending on the AD configuration, the following scenarios are possible: If the certificate is issued by AD Certificate System but the user and the certificate are stored in IdM, the mapping and the whole processing of the authentication request takes place on the IdM side. For details of configuring this scenario, see Configuring certificate mapping for users stored in IdM If the user is stored in AD, the processing of the authentication request takes place in AD. There are three different subcases: The AD user entry contains the whole certificate. For details how to configure IdM in this scenario, see Configuring certificate mapping for users whose AD user entry contains the whole certificate . AD is configured to map user certificates to user accounts. In this case, the AD user entry does not contain the whole certificate but instead contains an attribute called altSecurityIdentities . For details how to configure IdM in this scenario, see Configuring certificate mapping if AD is configured to map user certificates to user accounts . The AD user entry contains neither the whole certificate nor the mapping data. 
In this case, there are two options: If the user certificate is issued by AD Certificate System, the certificate either contains the user principal name as the Subject Alternative Name (SAN) or, if the latest updates are applied to AD, the SID of the user in the SID extension of the certificate. Both of these can be used to map the certificate to the user. If the user certificate is on a smart card, to enable SSH with smart cards, SSSD must derive the public SSH key from the certificate and therefore the full certificate is required. The only solution is to use the ipa idoverrideuser-add command to add the whole certificate to the AD user's ID override in IdM. For details, see Configuring certificate mapping if AD user entry contains no certificate or mapping data . AD domain administrators can manually map certificates to a user in AD using the altSecurityIdentities attribute. There are six supported values for this attribute, though three mappings are considered insecure. As part of the May 10, 2022 security update, once it is installed, all devices are in compatibility mode and if a certificate is weakly mapped to a user, authentication occurs as expected. However, warning messages are logged identifying any certificates that are not compatible with full enforcement mode. As of November 14, 2023 or later, all devices will be updated to full enforcement mode and if a certificate fails the strong mapping criteria, authentication will be denied. For example, when an AD user requests an IdM Kerberos ticket with a certificate (PKINIT), AD needs to map the certificate to a user internally and uses the new mapping rules for this. However, in IdM, the rules continue to work if IdM is used to map a certificate to a user on an IdM client. IdM supports the new mapping templates, making it easier for an AD administrator to use the new rules and not maintain both. IdM now supports the new mapping templates added to Active Directory to include: Serial Number: LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur}) Subject Key Id: LDAPU1:(altSecurityIdentities=X509:<SKI>{subject_key_id!hex_u}) User SID: LDAPU1:(objectsid={sid}) If you do not want to reissue certificates with the new SID extension, you can create a manual mapping by adding the appropriate mapping string to a user's altSecurityIdentities attribute in AD. 9.6. Configuring certificate mapping for users whose AD user entry contains the whole certificate This user story describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD and the user entry in AD contains the whole certificate. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains a certificate. The IdM administrator has access to data on which the IdM certificate mapping rule can be based. Note To ensure PKINIT works for a user, one of the following conditions must apply: The certificate in the user entry includes the user principal name or the SID extension for the user. The user entry in AD has a suitable entry in the altSecurityIdentities attribute. 9.6.1. Adding a certificate mapping rule in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Authentication → Certificate Identity Mapping Rules → Certificate Identity Mapping Rules . Click Add . Figure 9.5. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule.
To have the whole certificate that is presented to IdM for authentication compared to what is available in AD: Note If you map using the full certificate and you renew the certificate, you must ensure that you add the new certificate to the AD user object. Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Figure 9.6. Certificate mapping rule for a user with a certificate stored in AD Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD in the CLI: 9.6.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. To have the whole certificate that is presented for authentication compared to what is available in AD, only allowing certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Note If you map using the full certificate and you renew the certificate, you must ensure that you add the new certificate to the AD user object. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 9.7. Configuring certificate mapping if AD is configured to map user certificates to user accounts This user story describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD, and the user entry in AD contains certificate mapping data. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains the altSecurityIdentities attribute, the AD equivalent of the IdM certmapdata attribute. The IdM administrator has access to data on which the IdM certificate mapping rule can be based. 9.7.1. Adding a certificate mapping rule in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Authentication → Certificate Identity Mapping Rules → Certificate Identity Mapping Rules . Click Add . Figure 9.7. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. For example, to make AD DC search for the Issuer and Subject entries in any certificate presented, and base its decision to authenticate or not on the information found in these two entries of the presented certificate: Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate users to IdM: Enter the domain: Figure 9.8. Certificate mapping rule if AD is configured for mapping Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD in the CLI: 9.7.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. For example, to make AD search for the Issuer and Subject entries in any certificate presented, and only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 9.7.3.
Checking certificate mapping data on the AD side The altSecurityIdentities attribute is the Active Directory (AD) equivalent of certmapdata user attribute in IdM. When configuring certificate mapping in IdM in the scenario when a trusted AD domain is configured to map user certificates to user accounts, the IdM system administrator needs to check that the altSecurityIdentities attribute is set correctly in the user entries in AD. Prerequisites The user account must have user administration access. Procedure To check that AD contains the right information for the user stored in AD, use the ldapsearch command. For example, enter the command below to check with the adserver.ad.example.com server that the following conditions apply: The altSecurityIdentities attribute is set in the user entry of ad_user . The matchrule stipulates that the following conditions apply: The certificate that ad_user uses to authenticate to AD was issued by AD-ROOT-CA of the ad.example.com domain. The subject is <S>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user : 9.8. Configuring certificate mapping if AD user entry contains no certificate or mapping data This user story describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD and the user entry in AD contains neither the whole certificate nor certificate mapping data. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains neither the whole certificate nor the altSecurityIdentities attribute, the AD equivalent of the IdM certmapdata attribute. The IdM administrator has done one of the following: Added the whole AD user certificate to the AD user's user ID override in IdM. Created a certificate mapping rule that maps to an alternative field in the certificate, such as Subject Alternative Name or the SID of the user. 9.8.1. Adding a certificate mapping rule in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 9.9. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. To have the whole certificate that is presented to IdM for authentication compared to the certificate stored in the user ID override entry of the AD user entry in IdM: Note As the certificate also contains the user principal name as the SAN, or with the latest updates, the SID of the user in the SID extension of the certificate, you can also use these fields to map the certificate to the user. For example, if using the SID of the user, replace this mapping rule with LDAPU1:(objectsid={sid}) . For more information on certificate mapping, see the sss-certmap man page on your system. Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Enter the domain name. For example, to search for users in the ad.example.com domain: Figure 9.10. Certificate mapping rule for a user with no certificate or mapping data stored in AD Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD in the CLI: 9.8.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. 
To have the whole certificate that is presented for authentication compared to the certificate stored in the user ID override entry of the AD user entry in IdM, only allowing certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Note As the certificate also contains the user principal name as the SAN, or with the latest updates, the SID of the user in the SID extension of the certificate, you can also use these fields to map the certificate to the user. For example, if using the SID of the user, replace this mapping rule with LDAPU1:(objectsid={sid}) . For more information on certificate mapping, see the sss-certmap man page on your system. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 9.8.3. Adding a certificate to an AD user's ID override in the IdM web UI Navigate to Identity ID Views Default Trust View . Click Add . Figure 9.11. Adding a new user ID override in the IdM web UI In the User to override field, enter [email protected] . Copy and paste the certificate of ad_user into the Certificate field. Figure 9.12. Configuring the User ID override for an AD user Click Add . Verification Verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of [email protected] in the SSSD cache and force a reload of the [email protected] information: Run the ipa certmap-match command with the name of the file containing the certificate of the AD user: The output confirms that you have certificate mapping data added to [email protected] and that a corresponding mapping rule defined in Adding a certificate mapping rule if the AD user entry contains no certificate or mapping data exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as [email protected] . Additional resources Using ID views for Active Directory users 9.8.4. Adding a certificate to an AD user's ID override in the IdM CLI Obtain the administrator's credentials: Store the certificate blob in a new variable called CERT : Add the certificate of [email protected] to the user account using the ipa idoverrideuser-add-cert command: Verification Verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of [email protected] in the SSSD cache and force a reload of the [email protected] information: Run the ipa certmap-match command with the name of the file containing the certificate of the AD user: The output confirms that you have certificate mapping data added to [email protected] and that a corresponding mapping rule defined in Adding a certificate mapping rule if the AD user entry contains no certificate or mapping data exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as [email protected] . Additional resources Using ID views for Active Directory users 9.9. 
Combining several identity mapping rules into one To combine several identity mapping rules into one combined rule, use the | (or) character to precede the individual mapping rules, and separate them using () brackets, for example: Certificate mapping filter example 1 In the above example, the filter definition in the --maprule option includes these criteria: ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the ipacertmapdata attribute in an IdM user account, as described in Adding a certificate mapping rule in IdM altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the altSecurityIdentities attribute in an AD user account, as described in Adding a certificate mapping rule if the trusted AD domain is configured to map user certificates The addition of the --domain=ad.example.com option means that users mapped to a given certificate are not only searched in the local idm.example.com domain but also in the ad.example.com domain The filter definition in the --maprule option accepts the logical operator | (or), so that you can specify multiple criteria. In this case, the rule maps all user accounts that meet at least one of the criteria. Certificate mapping filter example 2 In the above example, the filter definition in the --maprule option includes these criteria: userCertificate;binary={cert!bin} is a filter that returns user entries that include the whole certificate. For AD users, creating this type of filter is described in detail in Adding a certificate mapping rule if the AD user entry contains no certificate or mapping data . ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the ipacertmapdata attribute in an IdM user account, as described in Adding a certificate mapping rule in IdM . altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the altSecurityIdentities attribute in an AD user account, as described in Adding a certificate mapping rule if the trusted AD domain is configured to map user certificates . The filter definition in the --maprule option accepts the logical operator | (or), so that you can specify multiple criteria. In this case, the rule maps all user accounts that meet at least one of the criteria. 9.10. Additional resources sss-certmap(5) man page on your system | [
"ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'",
"CERT=USD(openssl x509 -in /path/to/certificate -outform der|base64 -w0)",
"sssctl cert-eval-rule USDCERT --match='<ISSUER>CN=adcs19-WIN1-CA,DC=AD,DC=EXAMPLE,DC=COM' --map='LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur})' Certificate matches rule. Mapping filter: (altSecurityIdentities=X509:<I>DC=com,DC=example,DC=ad,CN=adcs19-WIN1-CA<SR>0F0000000000DB8852DD7B246C9C0F0000003B)",
"ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN=adcs19-WIN1-CA,DC=AD,DC=EXAMPLE,DC=COM' --maprule 'LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur})'",
"(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})",
"<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG",
"systemctl restart sssd",
"kinit admin",
"ipa certmaprule-add rule_name --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})' ------------------------------------------------------- Added Certificate Identity Mapping Rule \"rule_name\" ------------------------------------------------------- Rule name: rule_name Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500}) Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG Enabled: TRUE",
"systemctl restart sssd",
"cat idm_user_certificate.pem -----BEGIN CERTIFICATE----- MIIFFTCCA/2gAwIBAgIBEjANBgkqhkiG9w0BAQsFADA6MRgwFgYDVQQKDA9JRE0u RVhBTVBMRS5DT00xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0x ODA5MDIxODE1MzlaFw0yMDA5MDIxODE1MzlaMCwxGDAWBgNVBAoMD0lETS5FWEFN [...output truncated...]",
"sss_cache -u idm_user",
"ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------",
"kinit admin",
"CERT=USD(openssl x509 -in idm_user_cert.pem -outform der|base64 -w0) ipa user-add-certmapdata idm_user --certificate USDCERT",
"ipa user-add-certmapdata idm_user --subject \"O=EXAMPLE.ORG,CN=test\" --issuer \"CN=Smart Card CA,O=EXAMPLE.ORG\" -------------------------------------------- Added certificate mappings to user \"idm_user\" -------------------------------------------- User login: idm_user Certificate mapping data: X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG",
"sss_cache -u idm_user",
"ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------",
"(userCertificate;binary={cert!bin})",
"<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com",
"systemctl restart sssd",
"kinit admin",
"ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE",
"systemctl restart sssd",
"(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})",
"<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com",
"ad.example.com",
"systemctl restart sssd",
"kinit admin",
"ipa certmaprule-add ad_configured_for_mapping_rule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})' --domain=ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"ad_configured_for_mapping_rule\" ------------------------------------------------------- Rule name: ad_configured_for_mapping_rule Mapping rule: (altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE",
"systemctl restart sssd",
"ldapsearch -o ldif-wrap=no -LLL -h adserver.ad.example.com -p 389 -D cn=Administrator,cn=users,dc=ad,dc=example,dc=com -W -b cn=users,dc=ad,dc=example,dc=com \"(cn=ad_user)\" altSecurityIdentities Enter LDAP Password: dn: CN=ad_user,CN=Users,DC=ad,DC=example,DC=com altSecurityIdentities: X509:<I>DC=com,DC=example,DC=ad,CN=AD-ROOT-CA<S>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user",
"(userCertificate;binary={cert!bin})",
"<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com",
"systemctl restart sssd",
"kinit admin",
"ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE",
"systemctl restart sssd",
"sss_cache -u [email protected]",
"ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------",
"kinit admin",
"CERT=USD(openssl x509 -in /path/to/certificate -outform der|base64 -w0)",
"ipa idoverrideuser-add-cert [email protected] --certificate USDCERT",
"sss_cache -u [email protected]",
"ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------",
"ipa certmaprule-add ad_cert_for_ipa_and_ad_users --maprule='(|(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' --matchrule='<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --domain=ad.example.com",
"ipa certmaprule-add ipa_cert_for_ad_users --maprule='(|(userCertificate;binary={cert!bin})(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' --matchrule='<ISSUER>CN=Certificate Authority,O=REALM.EXAMPLE.COM' --domain=idm.example.com --domain=ad.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/conf-certmap-idm_working-with-idm-certificates |
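Before relying on a combined rule such as ipa_cert_for_ad_users for authentication, you can evaluate a certificate against its matching and mapping expressions with the sssctl utility shown earlier in this chapter. The following sketch assumes the certificate is stored in cert.pem and reuses the filter from the ipa_cert_for_ad_users example; the file name is a placeholder, and depending on your SSSD version you might need to prefix the mapping filter with LDAP: or LDAPU1::

$ CERT=$(openssl x509 -in cert.pem -outform der | base64 -w0)
$ sssctl cert-eval-rule $CERT --match='<ISSUER>CN=Certificate Authority,O=REALM.EXAMPLE.COM' --map='(|(userCertificate;binary={cert!bin})(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))'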
Chapter 5. Changes in LLVM Toolset | Chapter 5. Changes in LLVM Toolset LLVM Toolset has been updated from version 16.0.1 to 17.0.6 on RHEL 8 and RHEL 9. Notable changes include: The opaque pointers migration is now completed. Removed support for the legacy pass manager in middle-end optimization. Clang changes: C++20 coroutines are no longer considered experimental. Improved code generation for the std::move function and similar in unoptimized builds. For detailed information regarding the updates, see LLVM and Clang upstream release notes. | null | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_17.0.6_toolset/assembly_changes-in-llvm-toolset_using-llvm-toolset |
Chapter 1. Overview | Chapter 1. Overview AMQ Spring Boot Starter is an adapter for creating Spring-based applications that use AMQ messaging. It provides a Spring Boot starter module that enables you to build standalone Spring applications. The starter uses the AMQ JMS client to communicate using the AMQP 1.0 protocol. AMQ Spring Boot Starter is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.9 Release Notes . AMQ Spring Boot Starter is based on the AMQP 1.0 JMS Spring Boot project. 1.1. Key features Quickly build standalone Spring applications with built-in messaging Automatic configuration of JMS resources Configurable pooling of JMS connections and sessions 1.2. Supported standards and protocols Version 2.2 of the Spring Boot API Version 2.0 of the Java Message Service API Version 1.0 of the Advanced Message Queueing Protocol (AMQP) 1.3. Supported configurations AMQ Spring Boot Starter supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 7 and 8 with the following JDKs: OpenJDK 8 and 11 Oracle JDK 8 IBM JDK 8 IBM AIX 7.1 with IBM JDK 8 Microsoft Windows 10 Pro with Oracle JDK 8 Microsoft Windows Server 2012 R2 and 2016 with Oracle JDK 8 Oracle Solaris 10 and 11 with Oracle JDK 8 AMQ Spring Boot Starter is supported in combination with the following AMQ components and versions: All versions of AMQ Broker All versions of AMQ Interconnect A-MQ 6 versions 6.2.1 and newer 1.4. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir> | [
"cd <project-dir>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_spring_boot_starter/overview |
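In practice, the starter picks up its broker connection from Spring Boot configuration properties rather than code. The following application.properties sketch assumes a broker listening on localhost:5672 and uses the amqphub.amqp10jms property prefix of the underlying AMQP 1.0 JMS Spring Boot project; the URL and credentials are placeholders to replace with your own values:

amqphub.amqp10jms.remote-url=amqp://localhost:5672
amqphub.amqp10jms.username=<username>
amqphub.amqp10jms.password=<password>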
Cluster Observability Operator | Cluster Observability Operator OpenShift Container Platform 4.17 Configuring and using the Cluster Observability Operator in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo",
"oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields",
"managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{\"uid\":\"81da0d9a-61aa-4df3-affc-71015bcbde5a\"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{\"type\":\"Available\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{\"type\":\"Reconciled\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{\"shardID\":\"0\"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status",
"apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"oc get prometheus -n coo-demo",
"managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply",
"changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"error: Apply failed with 1 conflict: conflict with \"observability-operator\": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts",
"oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts",
"prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied",
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info",
"oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'",
"info",
"apiVersion: v1 kind: Namespace metadata: name: ns1-coo --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: replicas: 1 selector: matchLabels: app: prometheus-coo-example-app template: metadata: labels: app: prometheus-coo-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-coo-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-coo-example-app type: ClusterIP",
"oc apply -f prometheus-coo-example-app.yaml",
"oc -n ns1-coo get pod",
"NAME READY STATUS RESTARTS AGE prometheus-coo-example-app-0927545cb7-anskj 1/1 Running 0 81m",
"apiVersion: monitoring.rhobs/v1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-coo-example-monitor name: prometheus-coo-example-monitor namespace: ns1-coo spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-coo-example-app",
"oc apply -f example-coo-app-service-monitor.yaml",
"oc -n ns1-coo get servicemonitors.monitoring.rhobs",
"NAME AGE prometheus-coo-example-monitor 81m",
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: example-coo-monitoring-stack namespace: ns1-coo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: k8s-app: prometheus-coo-example-monitor",
"oc apply -f example-coo-monitoring-stack.yaml",
"oc -n ns1-coo get monitoringstack",
"NAME AGE example-coo-monitoring-stack 81m",
"oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[].discoveredLabels | select(.__meta_kubernetes_endpoints_label_app==\"prometheus-coo-example-app\")'",
"{ \"__address__\": \"10.129.2.25:8080\", \"__meta_kubernetes_endpoint_address_target_kind\": \"Pod\", \"__meta_kubernetes_endpoint_address_target_name\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"__meta_kubernetes_endpoint_node_name\": \"ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz\", \"__meta_kubernetes_endpoint_port_name\": \"web\", \"__meta_kubernetes_endpoint_port_protocol\": \"TCP\", \"__meta_kubernetes_endpoint_ready\": \"true\", \"__meta_kubernetes_endpoints_annotation_endpoints_kubernetes_io_last_change_trigger_time\": \"2024-11-05T11:24:09Z\", \"__meta_kubernetes_endpoints_annotationpresent_endpoints_kubernetes_io_last_change_trigger_time\": \"true\", \"__meta_kubernetes_endpoints_label_app\": \"prometheus-coo-example-app\", \"__meta_kubernetes_endpoints_labelpresent_app\": \"true\", \"__meta_kubernetes_endpoints_name\": \"prometheus-coo-example-app\", \"__meta_kubernetes_namespace\": \"ns1-coo\", \"__meta_kubernetes_pod_annotation_k8s_ovn_org_pod_networks\": \"{\\\"default\\\":{\\\"ip_addresses\\\":[\\\"10.129.2.25/23\\\"],\\\"mac_address\\\":\\\"0a:58:0a:81:02:19\\\",\\\"gateway_ips\\\":[\\\"10.129.2.1\\\"],\\\"routes\\\":[{\\\"dest\\\":\\\"10.128.0.0/14\\\",\\\"nextHop\\\":\\\"10.129.2.1\\\"},{\\\"dest\\\":\\\"172.30.0.0/16\\\",\\\"nextHop\\\":\\\"10.129.2.1\\\"},{\\\"dest\\\":\\\"100.64.0.0/16\\\",\\\"nextHop\\\":\\\"10.129.2.1\\\"}],\\\"ip_address\\\":\\\"10.129.2.25/23\\\",\\\"gateway_ip\\\":\\\"10.129.2.1\\\",\\\"role\\\":\\\"primary\\\"}}\", \"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\": \"[{\\n \\\"name\\\": \\\"ovn-kubernetes\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.129.2.25\\\"\\n ],\\n \\\"mac\\\": \\\"0a:58:0a:81:02:19\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\", \"__meta_kubernetes_pod_annotation_openshift_io_scc\": \"restricted-v2\", \"__meta_kubernetes_pod_annotation_seccomp_security_alpha_kubernetes_io_pod\": \"runtime/default\", \"__meta_kubernetes_pod_annotationpresent_k8s_ovn_org_pod_networks\": \"true\", \"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\": \"true\", \"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\": \"true\", \"__meta_kubernetes_pod_annotationpresent_seccomp_security_alpha_kubernetes_io_pod\": \"true\", \"__meta_kubernetes_pod_controller_kind\": \"ReplicaSet\", \"__meta_kubernetes_pod_controller_name\": \"prometheus-coo-example-app-5d8cd498c7\", \"__meta_kubernetes_pod_host_ip\": \"10.0.128.2\", \"__meta_kubernetes_pod_ip\": \"10.129.2.25\", \"__meta_kubernetes_pod_label_app\": \"prometheus-coo-example-app\", \"__meta_kubernetes_pod_label_pod_template_hash\": \"5d8cd498c7\", \"__meta_kubernetes_pod_labelpresent_app\": \"true\", \"__meta_kubernetes_pod_labelpresent_pod_template_hash\": \"true\", \"__meta_kubernetes_pod_name\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"__meta_kubernetes_pod_node_name\": \"ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz\", \"__meta_kubernetes_pod_phase\": \"Running\", \"__meta_kubernetes_pod_ready\": \"true\", \"__meta_kubernetes_pod_uid\": \"054c11b6-9a76-4827-a860-47f3a4596871\", \"__meta_kubernetes_service_label_app\": \"prometheus-coo-example-app\", \"__meta_kubernetes_service_labelpresent_app\": \"true\", \"__meta_kubernetes_service_name\": \"prometheus-coo-example-app\", \"__metrics_path__\": \"/metrics\", \"__scheme__\": \"http\", \"__scrape_interval__\": \"30s\", \"__scrape_timeout__\": \"10s\", \"job\": \"serviceMonitor/ns1-coo/prometheus-coo-example-monitor/0\" }",
"oc expose svc prometheus-coo-example-app -n ns1-coo",
"oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total'",
"{ \"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"http_requests_total\", \"code\": \"200\", \"endpoint\": \"web\", \"instance\": \"10.129.2.25:8080\", \"job\": \"prometheus-coo-example-app\", \"method\": \"get\", \"namespace\": \"ns1-coo\", \"pod\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"service\": \"prometheus-coo-example-app\" }, \"value\": [ 1730807483.632, \"3\" ] }, { \"metric\": { \"__name__\": \"http_requests_total\", \"code\": \"404\", \"endpoint\": \"web\", \"instance\": \"10.129.2.25:8080\", \"job\": \"prometheus-coo-example-app\", \"method\": \"get\", \"namespace\": \"ns1-coo\", \"pod\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"service\": \"prometheus-coo-example-app\" }, \"value\": [ 1730807483.632, \"0\" ] } ] } }",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki logsLimit: 50 timeout: 30s",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: distributed-tracing spec: type: DistributedTracing",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: troubleshooting-panel spec: type: TroubleshootingPanel",
"apiVersion: apps/v1 kind: Deployment metadata: name: bad-deployment namespace: default 1 spec: selector: matchLabels: app: bad-deployment template: metadata: labels: app: bad-deployment spec: containers: 2 - name: bad-deployment image: quay.io/openshift-logging/vector:5.8"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/cluster_observability_operator/index |
A.4. Explanation of settings in the New Virtual Disk and Edit Virtual Disk windows | A.4. Explanation of settings in the New Virtual Disk and Edit Virtual Disk windows Note The following tables do not include information on whether a power cycle is required because that information is not applicable to these scenarios. Table A.21. New Virtual Disk and Edit Virtual Disk settings: Image Field Name Description Size(GB) The size of the new virtual disk in GB. Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. Interface The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but you can install them from the virtio-win ISO image. IDE and SATA devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center The data center in which the virtual disk will be available. Storage Domain The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given data center, and also shows the total space and currently available space in the storage domain. Allocation Policy The provisioning policy for the new virtual disk. Preallocated allocates the entire size of the disk on the storage domain at the time the virtual disk is created. The virtual size and the actual size of a preallocated disk are the same. Preallocated virtual disks take more time to create than thin provisioned virtual disks, but have better read and write performance. Preallocated virtual disks are recommended for servers and other I/O intensive virtual machines. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible. Thin Provision allocates 1 GB at the time the virtual disk is created and sets a maximum limit on the size to which the disk can grow. The virtual size of the disk is the maximum limit; the actual size of the disk is the space that has been allocated so far. Thin provisioned disks are faster to create than preallocated disks and allow for storage over-commitment. Thin provisioned virtual disks are recommended for desktops. Disk Profile The disk profile assigned to the virtual disk. Disk profiles define the maximum amount of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are defined on the storage domain level based on storage quality of service entries created for data centers. Activate Disk(s) Activate the virtual disk immediately after creation. This option is not available when creating a floating disk. Wipe After Delete Allows you to enable enhanced security for deletion of sensitive material when the virtual disk is deleted. Bootable Enables the bootable flag on the virtual disk. Shareable Attaches the virtual disk to more than one virtual machine at a time. Read Only Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. This option is not available when creating a floating disk. Enable Discard Allows you to shrink a thin provisioned disk while the virtual machine is up. 
For block storage, the underlying storage device must support discard calls, and the option cannot be used with Wipe After Delete unless the underlying storage supports the discard_zeroes_data property. For file storage, the underlying file system and the block device must support discard calls. If all requirements are met, SCSI UNMAP commands issued from guest virtual machines are passed on by QEMU to the underlying storage to free up the unused space. The Direct LUN settings can be displayed in either Targets > LUNs or LUNs > Targets . Targets > LUNs sorts available LUNs according to the host on which they are discovered, whereas LUNs > Targets displays a single list of LUNs. Table A.22. New Virtual Disk and Edit Virtual Disk settings: Direct LUN Field Name Description Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. By default the last 4 characters of the LUN ID are inserted into the field. The default behavior can be configured by setting the PopulateDirectLUNDiskDescriptionWithLUNId configuration key to the appropriate value using the engine-config command (see the example command after this table). The configuration key can be set to -1 for the full LUN ID to be used, or 0 for this feature to be ignored. A positive integer populates the description with the corresponding number of characters of the LUN ID. Interface The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but you can install them from the virtio-win ISO image. IDE and SATA devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center The data center in which the virtual disk will be available. Host The host on which the LUN will be mounted. You can select any host in the data center. Storage Type The type of external LUN to add. You can select from either iSCSI or Fibre Channel . Discover Targets This section can be expanded when you are using iSCSI external LUNs and Targets > LUNs is selected. Address - The host name or IP address of the target server. Port - The port by which to attempt a connection to the target server. The default port is 3260. User Authentication - The iSCSI server requires User Authentication. The User Authentication field is visible when you are using iSCSI external LUNs. CHAP user name - The user name of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. CHAP password - The password of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. Activate Disk(s) Activate the virtual disk immediately after creation. This option is not available when creating a floating disk. Bootable Allows you to enable the bootable flag on the virtual disk. Shareable Allows you to attach the virtual disk to more than one virtual machine at a time. Read Only Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. This option is not available when creating a floating disk. Enable Discard Allows you to shrink a thin provisioned disk while the virtual machine is up. With this option enabled, SCSI UNMAP commands issued from guest virtual machines are passed on by QEMU to the underlying storage to free up the unused space.
Enable SCSI Pass-Through Available when the Interface is set to VirtIO-SCSI . Selecting this check box enables passthrough of a physical SCSI device to the virtual disk. A VirtIO-SCSI interface with SCSI passthrough enabled automatically includes SCSI discard support. Read Only is not supported when this check box is selected. When this check box is not selected, the virtual disk uses an emulated SCSI device. Read Only is supported on emulated VirtIO-SCSI disks. Allow Privileged SCSI I/O Available when the Enable SCSI Pass-Through check box is selected. Selecting this check box enables unfiltered SCSI Generic I/O (SG_IO) access, allowing privileged SG_IO commands on the disk. This is required for persistent reservations. Using SCSI Reservation Available when the Enable SCSI Pass-Through and Allow Privileged SCSI I/O check boxes are selected. Selecting this check box disables migration for any virtual machine using this disk, to prevent virtual machines that are using SCSI reservation from losing access to the disk. Fill in the fields in the Discover Targets section and click Discover to discover the target server. You can then click the Login All button to list the available LUNs on the target server and, using the radio buttons next to each LUN, select the LUN to add. Using LUNs directly as virtual machine hard disk images removes a layer of abstraction between your virtual machines and their data. The following considerations must be made when using a direct LUN as a virtual machine hard disk image: Live storage migration of direct LUN hard disk images is not supported. Direct LUN disks are not included in virtual machine exports. Direct LUN disks are not included in virtual machine snapshots. Important Mounting a journaled file system requires read-write access. Using the Read Only option is not appropriate for virtual disks that contain such file systems (e.g. EXT3 , EXT4 , or XFS ). | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/Add_Virtual_Disk_dialogue_entries
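The PopulateDirectLUNDiskDescriptionWithLUNId key described above is changed with the engine-config tool on the Red Hat Virtualization Manager machine. As a minimal sketch, assuming shell access to the Manager and using -1 (full LUN ID) purely as an example value:

# Use the full LUN ID in the Description field (-1 = full ID, 0 = disabled, N = last N characters)
engine-config -s PopulateDirectLUNDiskDescriptionWithLUNId=-1
# Restart the engine so that the new value takes effect
systemctl restart ovirt-engine

You can confirm the stored value afterwards with engine-config -g PopulateDirectLUNDiskDescriptionWithLUNId .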
Chapter 18. Object Storage (swift) Parameters | Chapter 18. Object Storage (swift) Parameters You can modify the swift service with object storage parameters. Parameter Description MemcachedTLS Set to True to enable TLS on the Memcached service. Because not all services support Memcached TLS, during the migration period, Memcached will listen on 2 ports - on the port set with the MemcachedPort parameter (above) and on 11211, without TLS. The default value is false . SwiftAccountWorkers Number of workers for Swift account service. The default value is 0 . SwiftCeilometerIgnoreProjects Comma-separated list of project names to ignore. The default value is ['service'] . SwiftCeilometerPipelineEnabled Set to False to disable the object storage proxy ceilometer pipeline. The default value is false . SwiftContainerSharderEnabled Set to True to enable Swift container sharder service. The default value is false . SwiftContainerWorkers Number of workers for Swift container service. The default value is 0 . SwiftCorsAllowedOrigin Indicate whether this resource may be shared with the domain received in the request "origin" header. SwiftEncryptionEnabled Set to True to enable data-at-rest encryption in Swift. The default value is false . SwiftHashPrefix A random string to be used as an extra salt when hashing to determine mappings in the ring. SwiftHashSuffix A random string to be used as a salt when hashing to determine mappings in the ring. SwiftMinPartHours The minimum time (in hours) before a partition in a ring can be moved following a rebalance. The default value is 1 . SwiftMountCheck Check if the devices are mounted to prevent accidentally writing to the root device. The default value is false . SwiftObjectWorkers Number of workers for Swift object service. The default value is 0 . SwiftPartPower Partition power to use when building object storage rings. The default value is 10 . SwiftPassword The password for the object storage service account. SwiftProxyNodeTimeout Timeout for requests going from swift-proxy to account, container, and object services. The default value is 60 . SwiftProxyRecoverableNodeTimeout Timeout for GET/HEAD requests going from swift-proxy to swift a/c/o services. The default value is 30 . SwiftRawDisks Additional raw devices to use for the object storage backend. For example: {sdb: {}} SwiftReplicas Number of replicas to use in the object storage rings. The default value is 3 . SwiftRingBuild Whether to manage object storage rings or not. The default value is true . SwiftRingGetTempurl A temporary Swift URL to download rings from. SwiftRingPutTempurl A temporary Swift URL to upload rings to. SwiftUseLocalDir Use a local directory for object storage services when building rings. The default value is true . SwiftUseNodeDataLookup Use NodeDataLookup for disk devices in order to use persistent naming. The default value is false . SwiftWorkers Number of workers for object storage service. Note that more workers create a larger number of processes on systems, which results in excess memory consumption. It is recommended to choose a suitable non-default value on systems with high CPU core counts. 0 sets to the OpenStack internal default, which is equal to the number of CPU cores on the node. The default value is 0 . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/overcloud_parameters/ref_object-storage-swift-parameters_overcloud_parameters
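In a director-based deployment, parameters such as these are typically supplied in a custom environment file passed to the overcloud deploy command. As a minimal sketch, assuming a standard heat environment file where parameter_defaults is the section used for such overrides, and using an arbitrary file name swift-params.yaml :

# swift-params.yaml -- example overrides for the Object Storage service
parameter_defaults:
  SwiftReplicas: 3             # number of replicas to use in the object storage rings
  SwiftPartPower: 10           # partition power used when building the rings
  SwiftMinPartHours: 1         # minimum hours before a partition can be moved after a rebalance
  SwiftEncryptionEnabled: true # switch on data-at-rest encryption as an example

The file is then included in the deployment, for example with openstack overcloud deploy --templates -e swift-params.yaml .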
Chapter 13. Custom resource API reference | Chapter 13. Custom resource API reference 13.1. Common configuration properties Common configuration properties apply to more than one resource. 13.1.1. replicas Use the replicas property to configure replicas. The type of replication depends on the resource. KafkaTopic uses a replication factor to configure the number of replicas of each partition within a Kafka cluster. Kafka components use replicas to configure the number of pods in a deployment to provide better availability and scalability. Note When running a Kafka component on OpenShift it may not be necessary to run multiple replicas for high availability. When the node where the component is deployed crashes, OpenShift will automatically reschedule the Kafka component pod to a different node. However, running Kafka components with multiple replicas can provide faster failover times as the other nodes will be up and running. 13.1.2. bootstrapServers Use the bootstrapServers property to configure a list of bootstrap servers. The bootstrap server lists can refer to Kafka clusters that are not deployed in the same OpenShift cluster. They can also refer to a Kafka cluster not deployed by AMQ Streams. If on the same OpenShift cluster, each list must ideally contain the Kafka cluster bootstrap service which is named CLUSTER-NAME-kafka-bootstrap and a port number. If deployed by AMQ Streams but on different OpenShift clusters, the list content depends on the approach used for exposing the clusters (routes, ingress, nodeports or loadbalancers). When using Kafka with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the given cluster. 13.1.3. ssl Use the three allowed ssl configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Example SSL configuration # ... spec: config: ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 1 ssl.enabled.protocols: "TLSv1.2" 2 ssl.protocol: "TLSv1.2" 3 ssl.endpoint.identification.algorithm: HTTPS 4 # ... 1 The cipher suite for TLS using a combination of ECDHE key exchange mechanism, RSA authentication algorithm, AES bulk encryption algorithm and SHA384 MAC algorithm. 2 The SSL protocol TLSv1.2 is enabled. 3 Specifies the TLSv1.2 protocol to generate the SSL context. Allowed values are TLSv1.1 and TLSv1.2 . 4 Hostname verification is enabled by setting to HTTPS . An empty string disables the verification. 13.1.4. trustedCertificates Having set tls to configure TLS encryption, use the trustedCertificates property to provide a list of secrets with key names under which the certificates are stored in X.509 format. You can use the secrets created by the Cluster Operator for the Kafka cluster, or you can create your own TLS certificate file, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file=MY-TLS-CERTIFICATE-FILE.crt Example TLS encryption configuration tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt If certificates are stored in the same secret, it can be listed multiple times.
If you want to enable TLS, but use the default set of public certification authorities shipped with Java, you can specify trustedCertificates as an empty array: Example of enabling TLS with the default Java certificates tls: trustedCertificates: [] For information on configuring TLS client authentication, see KafkaClientAuthenticationTls schema reference . 13.1.5. resources You request CPU and memory resources for components. Limits specify the maximum resources that can be consumed by a given container. Resource requests and limits for the Topic Operator and User Operator are set in the Kafka resource. Use the resources.requests and resources.limits properties to configure resource requests and limits. For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources. AMQ Streams supports requests and limits for the following types of resources: cpu memory AMQ Streams uses the OpenShift syntax for specifying these resources. For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers . Resource requests Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available. Important If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled. A request may be configured for one or more supported resources. Example resource requests configuration # ... resources: requests: cpu: 12 memory: 64Gi # ... Resource limits Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests. A resource may be configured for one or more supported limits. Example resource limits configuration # ... resources: limits: cpu: 12 memory: 64Gi # ... Supported CPU formats CPU requests and limits are supported in the following formats: Number of CPU cores as integer ( 5 CPU core) or decimal ( 2.5 CPU core). Number of millicpus / millicores ( 100m ) where 1000 millicores is the same as 1 CPU core. Example CPU units # ... resources: requests: cpu: 500m limits: cpu: 2.5 # ... Note The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed. For more information on CPU specification, see the Meaning of CPU . Supported memory formats Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes. To specify memory in megabytes, use the M suffix. For example 1000M . To specify memory in gigabytes, use the G suffix. For example 1G . To specify memory in mebibytes, use the Mi suffix. For example 1000Mi . To specify memory in gibibytes, use the Gi suffix. For example 1Gi . Example resources using different memory units # ... resources: requests: memory: 512Mi limits: memory: 2Gi # ... For more details about memory specification and additional supported units, see Meaning of memory . 13.1.6. image Use the image property to configure the container image used by the component. Overriding container images is recommended only in special situations where you need to use a different container registry or a customized image. For example, if your network does not allow access to the container repository used by AMQ Streams, you can copy the AMQ Streams images or build them from the source.
However, if the configured image is not compatible with AMQ Streams images, it might not work properly. A copy of the container image might also be customized and used for debugging. You can specify which container image to use for a component using the image property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator.topicOperator Kafka.spec.entityOperator.userOperator Kafka.spec.entityOperator.tlsSidecar KafkaConnect.spec KafkaConnectS2I.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec Configuring the image property for Kafka, Kafka Connect, and Kafka MirrorMaker Kafka, Kafka Connect (including Kafka Connect with S2I support), and Kafka MirrorMaker support multiple versions of Kafka. Each component requires its own image. The default images for the different Kafka versions are configured in the following environment variables: STRIMZI_KAFKA_IMAGES STRIMZI_KAFKA_CONNECT_IMAGES STRIMZI_KAFKA_CONNECT_S2I_IMAGES STRIMZI_KAFKA_MIRROR_MAKER_IMAGES These environment variables contain mappings between the Kafka versions and their corresponding images. The mappings are used together with the image and version properties: If neither image nor version are given in the custom resource then the version will default to the Cluster Operator's default Kafka version, and the image will be the one corresponding to this version in the environment variable. If image is given but version is not, then the given image is used and the version is assumed to be the Cluster Operator's default Kafka version. If version is given but image is not, then the image that corresponds to the given version in the environment variable is used. If both version and image are given, then the given image is used. The image is assumed to contain a Kafka image with the given version. The image and version for the different components can be configured in the following properties: For Kafka in spec.kafka.image and spec.kafka.version . For Kafka Connect, Kafka Connect S2I, and Kafka MirrorMaker in spec.image and spec.version . Warning It is recommended to provide only the version and leave the image property unspecified. This reduces the chance of making a mistake when configuring the custom resource. If you need to change the images used for different versions of Kafka, it is preferable to configure the Cluster Operator's environment variables. Configuring the image property in other resources For the image property in the other custom resources, the given value will be used during deployment. If the image property is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used. For Topic Operator: Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 container image. For User Operator: Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 container image. For Entity Operator TLS sidecar: Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 container image. 
For Kafka Exporter: Container image specified in the STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 container image. For Kafka Bridge: Container image specified in the STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-bridge-rhel7:1.7.0 container image. For Kafka broker initializer: Container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 container image. Example of container image configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... image: my-org/my-image:latest # ... zookeeper: # ... 13.1.7. livenessProbe and readinessProbe healthchecks Use the livenessProbe and readinessProbe properties to configure healthcheck probes supported in AMQ Streams. Healthchecks are periodical tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it. For more details about the probes, see Configure Liveness and Readiness Probes . Both livenessProbe and readinessProbe support the following options: initialDelaySeconds timeoutSeconds periodSeconds successThreshold failureThreshold Example of liveness and readiness probe configuration # ... readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... For more information about the livenessProbe and readinessProbe options, see Probe schema reference . 13.1.8. metricsConfig Use the metricsConfig property to enable and configure Prometheus metrics. The metricsConfig property contains a reference to a ConfigMap containing additional configuration for the Prometheus JMX exporter . AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and ZooKeeper to Prometheus metrics. To enable Prometheus metrics export without further configuration, you can reference a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . When referencing an empty file, all metrics are exposed as long as they have not been renamed. Example ConfigMap with metrics configuration for Kafka kind: ConfigMap apiVersion: v1 metadata: name: my-configmap data: my-key: | lowercaseOutputName: true rules: # Special cases and very specific rules - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value name: kafka_server_$1_$2 type: GAUGE labels: clientId: "$3" topic: "$4" partition: "$5" # further configuration Example metrics configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # ... zookeeper: # ... When metrics are enabled, they are exposed on port 9404. When the metricsConfig (or deprecated metrics ) property is not defined in the resource, the Prometheus metrics are disabled. For more information about setting up and deploying Prometheus and Grafana, see Introducing Metrics to Kafka in the Deploying and Upgrading AMQ Streams on OpenShift guide. 13.1.9.
jvmOptions The following AMQ Streams components run inside a Java Virtual Machine (JVM): Apache Kafka Apache ZooKeeper Apache Kafka Connect Apache Kafka MirrorMaker AMQ Streams Kafka Bridge To optimize their performance on different platforms and architectures, you configure the jvmOptions property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper KafkaConnect.spec KafkaConnectS2I.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec You can specify the following options in your configuration: -Xms Minimum initial allocation heap size when the JVM starts. -Xmx Maximum heap size. -XX Advanced runtime options for the JVM. javaSystemProperties Additional system properties. gcLoggingEnabled Enables garbage collector logging . The full schema of jvmOptions is described in JvmOptions schema reference . Note The units accepted by JVM settings, such as -Xmx and -Xms , are the same units accepted by the JDK java binary in the corresponding image. Therefore, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is different from the units used for memory requests and limits , which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes -Xms and -Xmx options The default values used for -Xms and -Xmx depend on whether there is a memory request limit configured for the container. If there is a memory limit, the JVM's minimum and maximum memory is set to a value corresponding to the limit. If there is no memory limit, the JVM's minimum memory is set to 128M . The JVM's maximum memory is not defined to allow the memory to increase as needed. This is ideal for single node environments in test and development. Before setting -Xmx explicitly consider the following: The JVM's overall memory usage will be approximately 4 x the maximum heap, as configured by -Xmx . If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure from other Pods running on it. If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash immediately if -Xms is set to -Xmx , or at a later time if not. It is recommended to: Set the memory request and the memory limit to the same value Use a memory request that is at least 4.5 x the -Xmx Consider setting -Xms to the same value as -Xmx In this example, the JVM uses 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage is approximately 8GiB. Example -Xmx and -Xms configuration # ... jvmOptions: "-Xmx": "2g" "-Xms": "2g" # ... Setting the same value for initial ( -Xms ) and maximum ( -Xmx ) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. Important Containers performing lots of disk I/O, such as Kafka broker containers, require available memory for use as an operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM. -XX option -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka. 
Example -XX configuration jvmOptions: "-XX": "UseG1GC": true "MaxGCPauseMillis": 20 "InitiatingHeapOccupancyPercent": 35 "ExplicitGCInvokesConcurrent": true JVM options resulting from the -XX configuration Note When no -XX options are specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS is used. javaSystemProperties javaSystemProperties are used to configure additional Java system properties, such as debugging utilities. Example javaSystemProperties configuration jvmOptions: javaSystemProperties: - name: javax.net.debug value: ssl 13.1.10. Garbage collector logging The jvmOptions property also allows you to enable and disable garbage collector (GC) logging. GC logging is disabled by default. To enable it, set the gcLoggingEnabled property as follows: Example GC logging configuration # ... jvmOptions: gcLoggingEnabled: true # ... 13.2. Schema properties 13.2.1. Kafka schema reference Property Description spec The specification of the Kafka and ZooKeeper clusters, and Topic Operator. KafkaSpec status The status of the Kafka and ZooKeeper clusters, and Topic Operator. KafkaStatus 13.2.2. KafkaSpec schema reference Used in: Kafka Property Description kafka Configuration of the Kafka cluster. KafkaClusterSpec zookeeper Configuration of the ZooKeeper cluster. ZookeeperClusterSpec topicOperator The topicOperator property has been deprecated, and should now be configured using spec.entityOperator.topicOperator . The property topicOperator is removed in API version v1beta2 . Configuration of the Topic Operator. TopicOperatorSpec entityOperator Configuration of the Entity Operator. EntityOperatorSpec clusterCa Configuration of the cluster certificate authority. CertificateAuthority clientsCa Configuration of the clients certificate authority. CertificateAuthority cruiseControl Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified. CruiseControlSpec kafkaExporter Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example lag of consumer group at topic/partition. KafkaExporterSpec maintenanceTimeWindows A list of time windows for maintenance tasks (that is, certificates renewal). Each time window is defined by a cron expression. string array 13.2.3. KafkaClusterSpec schema reference Used in: KafkaSpec Full list of KafkaClusterSpec schema properties Configures a Kafka cluster. 13.2.3.1. listeners Use the listeners property to configure listeners to provide access to Kafka brokers. Example configuration of a plain (unencrypted) listener without authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: plain port: 9092 type: internal tls: false # ... zookeeper: # ... 13.2.3.2. config Use the config properties to configure Kafka broker options as keys. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. Configuration options that cannot be configured relate to: Security (Encryption, Authentication, and Authorization) Listener configuration Broker ID configuration Configuration of log data directories Inter-broker communication ZooKeeper connectivity The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. 
Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: listeners advertised. broker. listener. host.name port inter.broker.listener.name sasl. ssl. security. password. principal.builder.class log.dir zookeeper.connect zookeeper.set.acl authorizer. super.user When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to Kafka. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection. Example Kafka broker configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" zookeeper.connection.timeout.ms: 6000 # ... 13.2.3.3. brokerRackInitImage When rack awareness is enabled, Kafka broker pods use an init container to collect the labels from the OpenShift cluster nodes. The container image used for this container can be configured using the brokerRackInitImage property. When the brokerRackInitImage field is missing, the following images are used in order of priority: Container image specified in STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 container image. Example brokerRackInitImage configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest # ... Note Overriding container images is recommended only in special situations, where you need to use a different container registry. For example, because your network does not allow access to the container registry used by AMQ Streams. In this case, you should either copy the AMQ Streams images or build them from the source. If the configured image is not compatible with AMQ Streams images, it might not work properly. 13.2.3.4. logging Kafka has its own configurable loggers: log4j.logger.org.I0Itec.zkclient.ZkClient log4j.logger.org.apache.zookeeper log4j.logger.kafka log4j.logger.org.apache.kafka log4j.logger.kafka.request.logger log4j.logger.kafka.network.Processor log4j.logger.kafka.server.KafkaApis log4j.logger.kafka.network.RequestChannel$ log4j.logger.kafka.controller log4j.logger.kafka.log.LogCleaner log4j.logger.state.change.logger log4j.logger.kafka.authorizer.logger Kafka uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap.
If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: # ... logging: type: inline loggers: kafka.root.logger.level: "INFO" # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties # ... Any available loggers that are not configured have their level set to OFF . If Kafka was deployed using the Cluster Operator, changes to Kafka logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.3.5. KafkaClusterSpec schema properties Property Description version The kafka broker version. Defaults to 2.7.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the cluster. integer image The docker image for the pods. The default value depends on the configured Kafka.spec.kafka.version . string listeners Configures listeners of Kafka brokers. GenericKafkaListener array or KafkaListeners config Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., principal.builder.class, log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers (with the exception of: zookeeper.connection.timeout.ms, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols,cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms,cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms,cruise.control.metrics.topic.min.insync.replicas). map storage Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim, jbod]. EphemeralStorage , PersistentClaimStorage , JbodStorage authorization Authorization configuration for Kafka brokers. The type depends on the value of the authorization.type property within the given object, which must be one of [simple, opa, keycloak]. KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationKeycloak rack Configuration of the broker.rack broker config. 
Rack brokerRackInitImage The image of the init container used for initializing the broker.rack . string affinity The affinity property has been deprecated, and should now be configured using spec.kafka.template.pod.affinity . The property affinity is removed in API version v1beta2 . The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The tolerations property has been deprecated, and should now be configured using spec.kafka.template.pod.tolerations . The property tolerations is removed in API version v1beta2 . The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options for Kafka brokers. KafkaJmxOptions resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements metrics The metrics property has been deprecated, and should now be configured using spec.kafka.metricsConfig . The property metrics is removed in API version v1beta2 . The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics logging Logging configuration for Kafka. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging tlsSidecar The tlsSidecar property has been deprecated. The property tlsSidecar is removed in API version v1beta2 . TLS sidecar configuration. TlsSidecar template Template for Kafka cluster resources. The template allows users to specify how are the StatefulSet , Pods and Services generated. KafkaClusterTemplate 13.2.4. GenericKafkaListener schema reference Used in: KafkaClusterSpec Full list of GenericKafkaListener schema properties Configures listeners to connect to Kafka brokers within and outside OpenShift. You configure the listeners in the Kafka resource. Example Kafka resource showing listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: #... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... 13.2.4.1. listeners You configure Kafka broker listeners using the listeners property in the Kafka resource. Listeners are defined as an array. Example listener configuration listeners: - name: plain port: 9092 type: internal tls: false The name and port must be unique within the Kafka cluster. The name can be up to 25 characters long, comprising lower-case letters and numbers. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. By specifying a unique name and port for each listener, you can configure multiple listeners. 
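A quick way to confirm that a listener configured in this way is reachable from inside the OpenShift cluster is to start a throwaway client pod against the bootstrap service. As a minimal sketch, assuming the plain internal listener on port 9092 shown above, a Kafka cluster named my-cluster in the current namespace, an existing topic named my-topic , and reusing the Kafka image referenced earlier in this chapter:

oc run kafka-producer -ti --rm=true --restart=Never \
  --image=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 \
  -- bin/kafka-console-producer.sh \
  --broker-list my-cluster-kafka-bootstrap:9092 \
  --topic my-topic

Messages typed at the resulting prompt are sent through the plain listener; the same pattern with bin/kafka-console-consumer.sh and --bootstrap-server can be used to read them back.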
13.2.4.2. type The type is set as internal , or for external listeners, as route , loadbalancer , nodeport or ingress . internal You can configure internal listeners with or without encryption using the tls property. Example internal listener configuration #... spec: kafka: #... listeners: #... - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #... route Configures an external listener to expose Kafka using OpenShift Routes and the HAProxy router. A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443. The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example. Example route listener configuration #... spec: kafka: #... listeners: #... - name: external1 port: 9094 type: route tls: true #... ingress Configures an external listener to expose Kafka using Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes . A dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443. The client connects on port 443, the default controller port, but traffic is then routed to the port you configure, which is 9095 in the following example. You must specify the hostnames used by the bootstrap and per-broker services using GenericKafkaListenerConfigurationBootstrap and GenericKafkaListenerConfigurationBroker properties. Example ingress listener configuration #... spec: kafka: #... listeners: #... - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... Note External listeners using Ingress are currently only tested with the NGINX Ingress Controller for Kubernetes . loadbalancer Configures an external listener to expose Kafka using Loadbalancer type Services . A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to the specified port number, which is port 9094 in the following example. You can use the loadBalancerSourceRanges property to configure source ranges to restrict access to the specified IP addresses. Example loadbalancer listener configuration #... spec: kafka: #... listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #... nodeport Configures an external listener to expose Kafka using NodePort type Services . Kafka clients connect directly to the nodes of OpenShift. An additional NodePort type of service is created to serve as a Kafka bootstrap address. When configuring the advertised addresses for the Kafka broker pods, AMQ Streams uses the address of the node on which the given pod is running. You can use the preferredNodePortAddressType property to configure the first address type checked as the node address . Example nodeport listener configuration #... spec: kafka: #... listeners: #... - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #...
Note TLS hostname verification is not currently supported when exposing Kafka clusters using node ports. 13.2.4.3. port The port number is the port used in the Kafka cluster, which might not be the same port used for access by a client. loadbalancer listeners use the specified port number, as do internal listeners ingress and route listeners use port 443 for access nodeport listeners use the port number assigned by OpenShift For client connection, use the address and port for the bootstrap service of the listener. You can retrieve this from the status of the Kafka resource. Example command to retrieve the address and port for client connection oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}' Note Listeners cannot be configured to use the ports set aside for interbroker communication (9091) and metrics (9404). 13.2.4.4. tls The TLS property is required. By default, TLS encryption is not enabled. To enable it, set the tls property to true . TLS encryption is always used with route listeners. 13.2.4.5. authentication Authentication for the listener can be specified as: Mutual TLS ( tls ) SCRAM-SHA-512 ( scram-sha-512 ) Token-based OAuth 2.0 ( oauth ). 13.2.4.6. networkPolicyPeers Use networkPolicyPeers to configure network policies that restrict access to a listener at the network level. The following example shows a networkPolicyPeers configuration for a plain and a tls listener. listeners: #... - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2 # ... In the example: Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker. Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener. The syntax of the networkPolicyPeers field is the same as the from field in NetworkPolicy resources. Backwards compatibility with KafkaListeners GenericKafkaListener replaces the KafkaListeners schema, which is now deprecated. To convert the listeners configured using the KafkaListeners schema into the format of the GenericKafkaListener schema, with backwards compatibility, use the following names, ports and types: listeners: #... - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: EXTERNAL-LISTENER-TYPE 1 tls: true # ... 1 Options: ingress , loadbalancer , nodeport , route 13.2.4.7. GenericKafkaListener schema properties Property Description name Name of the listener. The name will be used to identify the listener and the related OpenShift objects. The name has to be unique within given a Kafka cluster. The name can consist of lowercase characters and numbers and be up to 11 characters long. string port Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. 
Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. integer type Type of the listener. Currently the supported types are internal , route , loadbalancer , nodeport and ingress . * internal type exposes Kafka internally only within the OpenShift cluster. * route type uses OpenShift Routes to expose Kafka. * loadbalancer type uses LoadBalancer type services to expose Kafka. * nodeport type uses NodePort type services to expose Kafka. * ingress type uses OpenShift Nginx Ingress to expose Kafka. . string (one of [ingress, internal, route, loadbalancer, nodeport]) tls Enables TLS encryption on the listener. This is a required property. boolean authentication Authentication configuration for this listener. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth configuration Additional listener configuration. GenericKafkaListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array 13.2.5. KafkaListenerAuthenticationTls schema reference Used in: GenericKafkaListener , KafkaListenerExternalIngress , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalRoute , KafkaListenerPlain , KafkaListenerTls The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationTls type from KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth . It must have the value tls for the type KafkaListenerAuthenticationTls . Property Description type Must be tls . string 13.2.6. KafkaListenerAuthenticationScramSha512 schema reference Used in: GenericKafkaListener , KafkaListenerExternalIngress , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalRoute , KafkaListenerPlain , KafkaListenerTls The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationScramSha512 type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationOAuth . It must have the value scram-sha-512 for the type KafkaListenerAuthenticationScramSha512 . Property Description type Must be scram-sha-512 . string 13.2.7. KafkaListenerAuthenticationOAuth schema reference Used in: GenericKafkaListener , KafkaListenerExternalIngress , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalRoute , KafkaListenerPlain , KafkaListenerTls The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationOAuth type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 . It must have the value oauth for the type KafkaListenerAuthenticationOAuth . Property Description accessTokenIsJwt Configure whether the access token is treated as JWT. This must be set to false if the authorization server returns opaque tokens. Defaults to true . boolean checkAccessTokenType Configure whether the access token type check is performed or not. 
This should be set to false if the authorization server does not include the 'typ' claim in the JWT token. Defaults to true . boolean checkAudience Enable or disable audience checking. Audience checks identify the recipients of tokens. If audience checking is enabled, the OAuth Client ID also has to be configured using the clientId property. The Kafka broker will reject tokens that do not have its clientId in their aud (audience) claim. Default value is false . boolean checkIssuer Enable or disable issuer checking. By default, the issuer is checked using the value configured by validIssuerUri . Default value is true . boolean clientId OAuth Client ID which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. string clientSecret Link to OpenShift Secret containing the OAuth client secret which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. GenericSecretSource customClaimCheck JsonPath filter query to be applied to the JWT token or to the response of the introspection endpoint for additional token validation. Not set by default. string disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean enableECDSA Enable or disable ECDSA support by installing BouncyCastle crypto provider. Default value is false . boolean enableOauthBearer Enable or disable OAuth authentication over SASL_OAUTHBEARER. Default value is true . boolean enablePlain Enable or disable OAuth authentication over SASL_PLAIN. There is no re-authentication support when this mechanism is used. Default value is false . boolean fallbackUserNameClaim The fallback username claim to be used for the user id if the claim specified by userNameClaim is not present. This is useful when client_credentials authentication only results in the client id being provided in another claim. It only takes effect if userNameClaim is set. string fallbackUserNamePrefix The prefix to use with the value of fallbackUserNameClaim to construct the user id. This only takes effect if fallbackUserNameClaim is set, and the value is present for the claim. Mapping usernames and client ids into the same user id space is useful in preventing name collisions. string introspectionEndpointUri URI of the token introspection endpoint which can be used to validate opaque non-JWT tokens. string jwksEndpointUri URI of the JWKS certificate endpoint, which can be used for local JWT validation. string jwksExpirySeconds Configures how often the JWKS certificates are considered valid. The expiry interval has to be at least 60 seconds longer than the refresh interval specified in jwksRefreshSeconds . Defaults to 360 seconds. integer jwksMinRefreshPauseSeconds The minimum pause between two consecutive refreshes. When an unknown signing key is encountered the refresh is scheduled immediately, but will always wait for this minimum pause. Defaults to 1 second. integer jwksRefreshSeconds Configures how often the JWKS certificates are refreshed. The refresh interval has to be at least 60 seconds shorter than the expiry interval specified in jwksExpirySeconds . Defaults to 300 seconds. integer maxSecondsWithoutReauthentication Maximum number of seconds the authenticated session remains valid without re-authentication. This enables the Apache Kafka re-authentication feature, and causes sessions to expire when the access token expires.
If the access token expires before max time or if max time is reached, the client has to re-authenticate, otherwise the server will drop the connection. Not set by default - the authenticated session does not expire when the access token expires. This option only applies to the SASL_OAUTHBEARER authentication mechanism (when enableOauthBearer is true ). integer tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array tokenEndpointUri URI of the Token Endpoint to use with the SASL_PLAIN mechanism when the client authenticates with clientId and a secret. string type Must be oauth . string userInfoEndpointUri URI of the User Info Endpoint to use as a fallback to obtaining the user id when the Introspection Endpoint does not return information that can be used for the user id. string userNameClaim Name of the claim from the JWT authentication token, Introspection Endpoint response or User Info Endpoint response which will be used to extract the user id. Defaults to sub . string validIssuerUri URI of the token issuer used for authentication. string validTokenType Valid value for the token_type attribute returned by the Introspection Endpoint. No default value, and not checked by default. string 13.2.8. GenericSecretSource schema reference Used in: KafkaClientAuthenticationOAuth , KafkaListenerAuthenticationOAuth Property Description key The key under which the secret value is stored in the OpenShift Secret. string secretName The name of the OpenShift Secret containing the secret value. string 13.2.9. CertSecretSource schema reference Used in: KafkaAuthorizationKeycloak , KafkaBridgeTls , KafkaClientAuthenticationOAuth , KafkaConnectTls , KafkaListenerAuthenticationOAuth , KafkaMirrorMaker2Tls , KafkaMirrorMakerTls Property Description certificate The name of the file certificate in the Secret. string secretName The name of the Secret containing the certificate. string 13.2.10. GenericKafkaListenerConfiguration schema reference Used in: GenericKafkaListener Full list of GenericKafkaListenerConfiguration schema properties Configuration for Kafka listeners. 13.2.10.1. brokerCertChainAndKey The brokerCertChainAndKey property is only used with listeners that have TLS encryption enabled. You can use the property to provide your own Kafka listener certificates. Example configuration for a loadbalancer external listener with TLS encryption enabled listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... 13.2.10.2. externalTrafficPolicy The externalTrafficPolicy property is used with loadbalancer and nodeport listeners. When exposing Kafka outside of OpenShift, you can choose Local or Cluster . Local avoids hops to other nodes and preserves the client IP, whereas Cluster does neither. The default is Cluster . 13.2.10.3. loadBalancerSourceRanges The loadBalancerSourceRanges property is only used with loadbalancer listeners. When exposing Kafka outside of OpenShift, use source ranges, in addition to labels and annotations, to customize how a service is created. Example source ranges configured for a loadbalancer listener listeners: #... - name: external port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 # ... # ... 13.2.10.4. class The class property is only used with ingress listeners.
You can configure the Ingress class using the class property. Example of an external listener of type ingress using Ingress class nginx-internal listeners: #... - name: external port: 9094 type: ingress tls: true configuration: class: nginx-internal # ... # ... 13.2.10.5. preferredNodePortAddressType The preferredNodePortAddressType property is only used with nodeport listeners. Use the preferredNodePortAddressType property in your listener configuration to specify the first address type checked as the node address. This property is useful, for example, if your deployment does not have DNS support, or you only want to expose a broker internally through an internal DNS or IP address. If an address of this type is found, it is used. If the preferred address type is not found, AMQ Streams proceeds through the types in the standard order of priority: ExternalDNS ExternalIP Hostname InternalDNS InternalIP Example of an external listener configured with a preferred node port address type listeners: #... - name: external port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS # ... # ... 13.2.10.6. useServiceDnsDomain The useServiceDnsDomain property is only used with internal listeners. It defines whether the fully-qualified DNS names that include the cluster service suffix (usually .cluster.local ) are used. With useServiceDnsDomain set as false , the advertised addresses are generated without the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc . With useServiceDnsDomain set as true , the advertised addresses are generated with the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc.cluster.local . Default is false . Example of an internal listener configured to use the Service DNS domain listeners: #... - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true # ... # ... If your OpenShift cluster uses a different service suffix than .cluster.local , you can configure the suffix using the KUBERNETES_SERVICE_DNS_DOMAIN environment variable in the Cluster Operator configuration. See Section 5.1.1, "Cluster Operator configuration" for more details. 13.2.10.7. GenericKafkaListenerConfiguration schema properties Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair which will be used for this listener. The certificate can optionally contain the whole chain. This field can be used only with listeners with enabled TLS encryption. CertAndKeySecretSource externalTrafficPolicy Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. Cluster may cause a second hop to another node and obscures the client source IP. Local avoids a second hop for LoadBalancer and Nodeport type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use Cluster as the default.This field can be used only with loadbalancer or nodeport type listener. string (one of [Local, Cluster]) loadBalancerSourceRanges A list of CIDR ranges (for example 10.0.0.0/8 or 130.211.204.1/32 ) from which clients can connect to load balancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is applicable only for loadbalancer type services and is ignored if the cloud provider does not support the feature. 
For more information, see https://v1-17.docs.kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/ . This field can be used only with loadbalancer type listener. string array bootstrap Bootstrap configuration. GenericKafkaListenerConfigurationBootstrap brokers Per-broker configurations. GenericKafkaListenerConfigurationBroker array class Configures the Ingress class that defines which Ingress controller will be used. This field can be used only with ingress type listener. If not specified, the default Ingress controller will be used. string preferredNodePortAddressType Defines which address type should be used as the node address. Available types are: ExternalDNS , ExternalIP , InternalDNS , InternalIP and Hostname . By default, the addresses will be used in the following order (the first one found will be used): * ExternalDNS * ExternalIP * InternalDNS * InternalIP * Hostname This field can be used to select the address type which will be used as the preferred type and checked first. In case no address will be found for this address type, the other types will be used in the default order.This field can be used only with nodeport type listener.. string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS]) useServiceDnsDomain Configures whether the OpenShift service DNS domain should be used or not. If set to true , the generated addresses will contain the service DNS domain suffix (by default .cluster.local , can be configured using environment variable KUBERNETES_SERVICE_DNS_DOMAIN ). Defaults to false .This field can be used only with internal type listener. boolean 13.2.11. CertAndKeySecretSource schema reference Used in: GenericKafkaListenerConfiguration , IngressListenerConfiguration , KafkaClientAuthenticationTls , KafkaListenerExternalConfiguration , NodePortListenerConfiguration , TlsListenerConfiguration Property Description certificate The name of the file certificate in the Secret. string key The name of the private key in the Secret. string secretName The name of the Secret containing the certificate. string 13.2.12. GenericKafkaListenerConfigurationBootstrap schema reference Used in: GenericKafkaListenerConfiguration Full list of GenericKafkaListenerConfigurationBootstrap schema properties Broker service equivalents of nodePort , host , loadBalancerIP and annotations properties are configured in the GenericKafkaListenerConfigurationBroker schema . 13.2.12.1. alternativeNames You can specify alternative names for the bootstrap service. The names are added to the broker certificates and can be used for TLS hostname verification. The alternativeNames property is applicable to all types of listeners. Example of an external route listener configured with an additional bootstrap address listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2 # ... 13.2.12.2. host The host property is used with route and ingress listeners to specify the hostnames used by the bootstrap and per-broker services. A host property value is mandatory for ingress listener configuration, as the Ingress controller does not assign any hostnames automatically. Make sure that the hostnames resolve to the Ingress endpoints. AMQ Streams will not perform any validation that the requested hosts are available and properly routed to the Ingress endpoints. Example of host configuration for an ingress listener listeners: #... 
- name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com # ... By default, route listener hosts are automatically assigned by OpenShift. However, you can override the assigned route hosts by specifying hosts. AMQ Streams does not perform any validation that the requested hosts are available. You must ensure that they are free and can be used. Example of host configuration for a route listener # ... listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myrouter.com brokers: - broker: 0 host: broker-0.myrouter.com - broker: 1 host: broker-1.myrouter.com - broker: 2 host: broker-2.myrouter.com # ... 13.2.12.3. nodePort By default, the port numbers used for the bootstrap and broker services are automatically assigned by OpenShift. You can override the assigned node ports for nodeport listeners by specifying the requested port numbers. AMQ Streams does not perform any validation on the requested ports. You must ensure that they are free and available for use. Example of an external listener configured with overrides for node ports # ... listeners: #... - name: external port: 9094 type: nodeport tls: true authentication: type: tls configuration: bootstrap: nodePort: 32100 brokers: - broker: 0 nodePort: 32000 - broker: 1 nodePort: 32001 - broker: 2 nodePort: 32002 # ... 13.2.12.4. loadBalancerIP Use the loadBalancerIP property to request a specific IP address when creating a loadbalancer. Use this property when you need to use a loadbalancer with a specific IP address. The loadBalancerIP field is ignored if the cloud provider does not support the feature. Example of an external listener of type loadbalancer with specific loadbalancer IP address requests # ... listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: loadBalancerIP: 172.29.3.10 brokers: - broker: 0 loadBalancerIP: 172.29.3.1 - broker: 1 loadBalancerIP: 172.29.3.2 - broker: 2 loadBalancerIP: 172.29.3.3 # ... 13.2.12.5. annotations Use the annotations property to add annotations to OpenShift resources related to the listeners. You can use these annotations, for example, to instrument DNS tooling such as External DNS , which automatically assigns DNS names to the loadbalancer services. Example of an external listener of type loadbalancer using annotations # ... listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" brokers: - broker: 0 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" - broker: 1 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" - broker: 2 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" # ... 13.2.12.6. GenericKafkaListenerConfigurationBootstrap schema properties Property Description alternativeNames Additional alternative names for the bootstrap service. 
The alternative names will be added to the list of subject alternative names of the TLS certificates. string array host The bootstrap host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners. string nodePort Node port for the bootstrap service. This field can be used only with nodeport type listener. integer loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.This field can be used only with loadbalancer type listener. string annotations Annotations that will be added to the Ingress , Route , or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. map labels Labels that will be added to the Ingress , Route , or Service resource. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. map 13.2.13. GenericKafkaListenerConfigurationBroker schema reference Used in: GenericKafkaListenerConfiguration Full list of GenericKafkaListenerConfigurationBroker schema properties You can see example configuration for the nodePort , host , loadBalancerIP and annotations properties in the GenericKafkaListenerConfigurationBootstrap schema , which configures bootstrap service overrides. Advertised addresses for brokers By default, AMQ Streams tries to automatically determine the hostnames and ports that your Kafka cluster advertises to its clients. This is not sufficient in all situations, because the infrastructure on which AMQ Streams is running might not provide the right hostname or port through which Kafka can be accessed. You can specify a broker ID and customize the advertised hostname and port in the configuration property of the listener. AMQ Streams will then automatically configure the advertised address in the Kafka brokers and add it to the broker certificates so it can be used for TLS hostname verification. Overriding the advertised host and ports is available for all types of listeners. Example of an external route listener configured with overrides for advertised addresses listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342 # ... 13.2.13.1. GenericKafkaListenerConfigurationBroker schema properties Property Description broker ID of the kafka broker (broker identifier). Broker IDs start from 0 and correspond to the number of broker replicas. integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer host The broker host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners. string nodePort Node port for the per-broker service. This field can be used only with nodeport type listener. 
integer loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.This field can be used only with loadbalancer type listener. string annotations Annotations that will be added to the Ingress or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer , nodeport , or ingress type listeners. map labels Labels that will be added to the Ingress , Route , or Service resource. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. map 13.2.14. KafkaListeners schema reference The type KafkaListeners has been deprecated and is removed in API version v1beta2 . Please use GenericKafkaListener instead. Used in: KafkaClusterSpec Refer to documentation for example configuration. Property Description plain Configures plain listener on port 9092. KafkaListenerPlain tls Configures TLS listener on port 9093. KafkaListenerTls external Configures external listener on port 9094. The type depends on the value of the external.type property within the given object, which must be one of [route, loadbalancer, nodeport, ingress]. KafkaListenerExternalRoute , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalIngress 13.2.15. KafkaListenerPlain schema reference Used in: KafkaListeners Property Description authentication Authentication configuration for this listener. Since this listener does not use TLS transport you cannot configure an authentication with type: tls . The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array 13.2.16. KafkaListenerTls schema reference Used in: KafkaListeners Property Description authentication Authentication configuration for this listener. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth configuration Configuration of TLS listener. TlsListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array 13.2.17. 
TlsListenerConfiguration schema reference Used in: KafkaListenerTls Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. CertAndKeySecretSource 13.2.18. KafkaListenerExternalRoute schema reference Used in: KafkaListeners The type property is a discriminator that distinguishes use of the KafkaListenerExternalRoute type from KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort , KafkaListenerExternalIngress . It must have the value route for the type KafkaListenerExternalRoute . Property Description type Must be route . string authentication Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth overrides Overrides for external bootstrap and broker services and externally advertised addresses. RouteListenerOverride configuration External listener configuration. KafkaListenerExternalConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array 13.2.19. RouteListenerOverride schema reference Used in: KafkaListenerExternalRoute Property Description bootstrap External bootstrap service configuration. RouteListenerBootstrapOverride brokers External broker services configuration. RouteListenerBrokerOverride array 13.2.20. RouteListenerBootstrapOverride schema reference Used in: RouteListenerOverride Property Description address Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. string host Host for the bootstrap route. This field will be used in the spec.host field of the OpenShift Route. string 13.2.21. RouteListenerBrokerOverride schema reference Used in: RouteListenerOverride Property Description broker Id of the kafka broker (broker identifier). integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer host Host for the broker route. This field will be used in the spec.host field of the OpenShift Route. string 13.2.22. KafkaListenerExternalConfiguration schema reference Used in: KafkaListenerExternalLoadBalancer , KafkaListenerExternalRoute Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. CertAndKeySecretSource 13.2.23. KafkaListenerExternalLoadBalancer schema reference Used in: KafkaListeners The type property is a discriminator that distinguishes use of the KafkaListenerExternalLoadBalancer type from KafkaListenerExternalRoute , KafkaListenerExternalNodePort , KafkaListenerExternalIngress . It must have the value loadbalancer for the type KafkaListenerExternalLoadBalancer . Property Description type Must be loadbalancer . 
string authentication Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth overrides Overrides for external bootstrap and broker services and externally advertised addresses. LoadBalancerListenerOverride configuration External listener configuration. KafkaListenerExternalConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array tls Enables TLS encryption on the listener. By default set to true for enabled TLS encryption. boolean 13.2.24. LoadBalancerListenerOverride schema reference Used in: KafkaListenerExternalLoadBalancer Property Description bootstrap External bootstrap service configuration. LoadBalancerListenerBootstrapOverride brokers External broker services configuration. LoadBalancerListenerBrokerOverride array 13.2.25. LoadBalancerListenerBootstrapOverride schema reference Used in: LoadBalancerListenerOverride Property Description address Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. string dnsAnnotations Annotations that will be added to the Service resource. You can use this field to configure DNS providers such as External DNS. map loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature. string 13.2.26. LoadBalancerListenerBrokerOverride schema reference Used in: LoadBalancerListenerOverride Property Description broker Id of the kafka broker (broker identifier). integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer dnsAnnotations Annotations that will be added to the Service resources for individual brokers. You can use this field to configure DNS providers such as External DNS. map loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature. string 13.2.27. KafkaListenerExternalNodePort schema reference Used in: KafkaListeners The type property is a discriminator that distinguishes use of the KafkaListenerExternalNodePort type from KafkaListenerExternalRoute , KafkaListenerExternalLoadBalancer , KafkaListenerExternalIngress . It must have the value nodeport for the type KafkaListenerExternalNodePort . Property Description type Must be nodeport . string authentication Authentication configuration for Kafka brokers. 
The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth overrides Overrides for external bootstrap and broker services and externally advertised addresses. NodePortListenerOverride configuration External listener configuration. NodePortListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array tls Enables TLS encryption on the listener. By default set to true for enabled TLS encryption. boolean 13.2.28. NodePortListenerOverride schema reference Used in: KafkaListenerExternalNodePort Property Description bootstrap External bootstrap service configuration. NodePortListenerBootstrapOverride brokers External broker services configuration. NodePortListenerBrokerOverride array 13.2.29. NodePortListenerBootstrapOverride schema reference Used in: NodePortListenerOverride Property Description address Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. string dnsAnnotations Annotations that will be added to the Service resource. You can use this field to configure DNS providers such as External DNS. map nodePort Node port for the bootstrap service. integer 13.2.30. NodePortListenerBrokerOverride schema reference Used in: NodePortListenerOverride Property Description broker Id of the kafka broker (broker identifier). integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer nodePort Node port for the broker service. integer dnsAnnotations Annotations that will be added to the Service resources for individual brokers. You can use this field to configure DNS providers such as External DNS. map 13.2.31. NodePortListenerConfiguration schema reference Used in: KafkaListenerExternalNodePort Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. CertAndKeySecretSource preferredAddressType Defines which address type should be used as the node address. Available types are: ExternalDNS , ExternalIP , InternalDNS , InternalIP and Hostname . By default, the addresses will be used in the following order (the first one found will be used): * ExternalDNS * ExternalIP * InternalDNS * InternalIP * Hostname This field can be used to select the address type which will be used as the preferred type and checked first. In case no address will be found for this address type, the other types will be used in the default order.. string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS]) 13.2.32. 
KafkaListenerExternalIngress schema reference Used in: KafkaListeners The type property is a discriminator that distinguishes use of the KafkaListenerExternalIngress type from KafkaListenerExternalRoute , KafkaListenerExternalLoadBalancer , KafkaListenerExternalNodePort . It must have the value ingress for the type KafkaListenerExternalIngress . Property Description type Must be ingress . string authentication Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth class Configures the Ingress class that defines which Ingress controller will be used. string configuration External listener configuration. IngressListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array 13.2.33. IngressListenerConfiguration schema reference Used in: KafkaListenerExternalIngress Property Description bootstrap External bootstrap ingress configuration. IngressListenerBootstrapConfiguration brokers External broker ingress configuration. IngressListenerBrokerConfiguration array brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. CertAndKeySecretSource 13.2.34. IngressListenerBootstrapConfiguration schema reference Used in: IngressListenerConfiguration Property Description address Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. string dnsAnnotations Annotations that will be added to the Ingress resource. You can use this field to configure DNS providers such as External DNS. map host Host for the bootstrap route. This field will be used in the Ingress resource. string 13.2.35. IngressListenerBrokerConfiguration schema reference Used in: IngressListenerConfiguration Property Description broker Id of the kafka broker (broker identifier). integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer host Host for the broker ingress. This field will be used in the Ingress resource. string dnsAnnotations Annotations that will be added to the Ingress resources for individual brokers. You can use this field to configure DNS providers such as External DNS. map 13.2.36. EphemeralStorage schema reference Used in: JbodStorage , KafkaClusterSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the EphemeralStorage type from PersistentClaimStorage . It must have the value ephemeral for the type EphemeralStorage . Property Description id Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. integer sizeLimit When type=ephemeral, defines the total amount of local storage required for this EmptyDir volume (for example 1Gi). 
string type Must be ephemeral . string 13.2.37. PersistentClaimStorage schema reference Used in: JbodStorage , KafkaClusterSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the PersistentClaimStorage type from EphemeralStorage . It must have the value persistent-claim for the type PersistentClaimStorage . Property Description type Must be persistent-claim . string size When type=persistent-claim, defines the size of the persistent volume claim (i.e 1Gi). Mandatory when type=persistent-claim. string selector Specifies a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume. map deleteClaim Specifies if the persistent volume claim has to be deleted when the cluster is un-deployed. boolean class The storage class to use for dynamic volume allocation. string id Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. integer overrides Overrides for individual brokers. The overrides field allows to specify a different configuration for different brokers. PersistentClaimStorageOverride array 13.2.38. PersistentClaimStorageOverride schema reference Used in: PersistentClaimStorage Property Description class The storage class to use for dynamic volume allocation for this broker. string broker Id of the kafka broker (broker identifier). integer 13.2.39. JbodStorage schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes use of the JbodStorage type from EphemeralStorage , PersistentClaimStorage . It must have the value jbod for the type JbodStorage . Property Description type Must be jbod . string volumes List of volumes as Storage objects representing the JBOD disks array. EphemeralStorage , PersistentClaimStorage array 13.2.40. KafkaAuthorizationSimple schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationSimple schema properties Simple authorization in AMQ Streams uses the AclAuthorizer plugin, the default Access Control Lists (ACLs) authorization plugin provided with Apache Kafka. ACLs allow you to define which users have access to which resources at a granular level. Configure the Kafka custom resource to use simple authorization. Set the type property in the authorization section to the value simple , and configure a list of super users. Access rules are configured for the KafkaUser , as described in the ACLRule schema reference . 13.2.40.1. superUsers A list of user principals treated as super users, so that they are always allowed without querying ACL rules. For more information see Kafka authorization . An example of simple authorization configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 # ... Note The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. For more information, see Kafka broker configuration . 13.2.40.2. KafkaAuthorizationSimple schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationSimple type from KafkaAuthorizationOpa , KafkaAuthorizationKeycloak . It must have the value simple for the type KafkaAuthorizationSimple . Property Description type Must be simple . string superUsers List of super users. 
Should contain a list of user principals which should get unlimited access rights. string array 13.2.41. KafkaAuthorizationOpa schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationOpa schema properties To use Open Policy Agent authorization, set the type property in the authorization section to the value opa , and configure OPA properties as required. 13.2.41.1. url The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. Required. 13.2.41.2. allowOnError Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied. 13.2.41.3. initialCacheCapacity Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000 . 13.2.41.4. maximumCacheSize Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . 13.2.41.5. expireAfterMs The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 milliseconds (1 hour). 13.2.41.6. superUsers A list of user principals treated as super users, so that they are always allowed without querying the Open Policy Agent policy. For more information, see Kafka authorization . An example of Open Policy Agent authorizer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward # ... 13.2.41.7. KafkaAuthorizationOpa schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationOpa type from KafkaAuthorizationSimple , KafkaAuthorizationKeycloak . It must have the value opa for the type KafkaAuthorizationOpa . Property Description type Must be opa . string url The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. This option is required. string allowOnError Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied. boolean initialCacheCapacity Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000 . integer maximumCacheSize Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . integer expireAfterMs The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 . integer superUsers List of super users, which is specifically a list of user principals that have unlimited access rights. string array 13.2.42.
KafkaAuthorizationKeycloak schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes use of the KafkaAuthorizationKeycloak type from KafkaAuthorizationSimple , KafkaAuthorizationOpa . It must have the value keycloak for the type KafkaAuthorizationKeycloak . Property Description type Must be keycloak . string clientId OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. string tokenEndpointUri Authorization server token endpoint URI. string tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean delegateToKafkaAcls Whether authorization decision should be delegated to the 'Simple' authorizer if DENIED by Red Hat Single Sign-On Authorization Services policies. Default value is false . boolean grantsRefreshPeriodSeconds The time between two consecutive grants refresh runs in seconds. The default value is 60. integer grantsRefreshPoolSize The number of threads to use to refresh grants for active sessions. The more threads, the more parallelism, so the sooner the job completes. However, using more threads places a heavier load on the authorization server. The default value is 5. integer superUsers List of super users. Should contain list of user principals which should get unlimited access rights. string array 13.2.43. Rack schema reference Used in: KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec Full list of Rack schema properties Configures rack awareness to spread partition replicas across different racks. A rack can represent an availability zone, data center, or an actual rack in your data center. By configuring a rack for a Kafka cluster, consumers can fetch data from the closest replica. This is useful for reducing the load on your network when a Kafka cluster spans multiple datacenters. To configure Kafka brokers for rack awareness, you specify a topologyKey value to match the label of the cluster node used by OpenShift when scheduling Kafka broker pods to nodes. If the OpenShift cluster is running on a cloud provider platform, the label must represent the availability zone where the node is running. Usually, nodes are labeled with the topology.kubernetes.io/zone label (or failure-domain.beta.kubernetes.io/zone on older OpenShift versions), which can be used as the topologyKey value. The rack awareness configuration spreads the broker pods and partition replicas across zones, improving resiliency, and also sets a broker.rack configuration for each Kafka broker. The broker.rack configuration assigns a rack ID to each broker. Consult your OpenShift administrator regarding the node label that represents the zone or rack into which the node is deployed. Example rack configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone config: # ... replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector # ... Use the RackAwareReplicaSelector implementation for the Kafka ReplicaSelector plugin if you want clients to consume from the closest replica. The ReplicaSelector plugin provides the logic that enables clients to consume from the nearest replica. Specify RackAwareReplicaSelector for the replica.selector.class to switch from the default implementation. 
The default implementation uses LeaderSelector to always select the leader replica for the client. By switching from the leader replica to a follower replica, there is some cost to latency. If required, you can also customize your own implementation. For clients, including Kafka Connect, you specify the same topology key as the broker that the client will use to consume messages. Example rack configuration for Kafka Connect apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... The client is assigned a client.rack ID. RackAwareReplicaSelector associates matching broker.rack and client.rack IDs, so the client can consume from the nearest replica. Figure 13.1. Example showing client consuming from replicas in the same availability zone If there are multiple replicas in the same rack, RackAwareReplicaSelector always selects the most up-to-date replica. If the rack ID is not specified, or if it cannot find a replica with the same rack ID, it will fall back to the leader replica. For more information about OpenShift node labels, see Well-Known Labels, Annotations and Taints . 13.2.43.1. Rack schema properties Property Description topologyKey A key that matches labels assigned to the OpenShift cluster nodes. The value of the label is used to set the broker's broker.rack config and client.rack in Kafka Connect. string 13.2.44. Probe schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaExporterSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TlsSidecar , TopicOperatorSpec , ZookeeperClusterSpec Property Description failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. integer initialDelaySeconds The initial delay before the health is first checked. Defaults to 15 seconds. Minimum value is 0. integer periodSeconds How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. integer successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1. integer timeoutSeconds The timeout for each attempted health check. Defaults to 5 seconds. Minimum value is 1. integer 13.2.45. JvmOptions schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TopicOperatorSpec , ZookeeperClusterSpec Property Description -XX A map of -XX options to the JVM. map -Xms -Xms option to the JVM. string -Xmx -Xmx option to the JVM. string gcLoggingEnabled Specifies whether the Garbage Collection logging is enabled. The default is false. boolean javaSystemProperties A map of additional system properties which will be passed using the -D option to the JVM. SystemProperty array 13.2.46. SystemProperty schema reference Used in: JvmOptions Property Description name The system property name. string value The system property value. string 13.2.47. KafkaJmxOptions schema reference Used in: KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Full list of KafkaJmxOptions schema properties Configures JMX connection options.
JMX metrics are obtained from Kafka brokers, Kafka Connect, and MirrorMaker 2.0 by opening a JMX port on 9999. Use the jmxOptions property to configure a password-protected or an unprotected JMX port. Using password protection prevents unauthorized pods from accessing the port. You can then obtain metrics about the component. For example, for each Kafka broker you can obtain bytes-per-second usage data from clients, or the request rate of the network of the broker. To enable security for the JMX port, set the type parameter in the authentication field to password . Example password-protected JMX configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: authentication: type: "password" # ... zookeeper: # ... You can then deploy a pod into a cluster and obtain JMX metrics using the headless service by specifying which broker you want to address. For example, to get JMX metrics from broker 0 you specify: " CLUSTER-NAME -kafka-0. CLUSTER-NAME -kafka-brokers" CLUSTER-NAME -kafka-0 is name of the broker pod, and CLUSTER-NAME -kafka-brokers is the name of the headless service to return the IPs of the broker pods. If the JMX port is secured, you can get the username and password by referencing them from the JMX Secret in the deployment of your pod. For an unprotected JMX port, use an empty object {} to open the JMX port on the headless service. You deploy a pod and obtain metrics in the same way as for the protected port, but in this case any pod can read from the JMX port. Example open port JMX configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: {} # ... zookeeper: # ... Additional resources For more information on the Kafka component metrics exposed using JMX, see the Apache Kafka documentation . 13.2.47.1. KafkaJmxOptions schema properties Property Description authentication Authentication configuration for connecting to the JMX port. The type depends on the value of the authentication.type property within the given object, which must be one of [password]. KafkaJmxAuthenticationPassword 13.2.48. KafkaJmxAuthenticationPassword schema reference Used in: KafkaJmxOptions The type property is a discriminator that distinguishes use of the KafkaJmxAuthenticationPassword type from other subtypes which may be added in the future. It must have the value password for the type KafkaJmxAuthenticationPassword . Property Description type Must be password . string 13.2.49. JmxPrometheusExporterMetrics schema reference Used in: CruiseControlSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the JmxPrometheusExporterMetrics type from other subtypes which may be added in the future. It must have the value jmxPrometheusExporter for the type JmxPrometheusExporterMetrics . Property Description type Must be jmxPrometheusExporter . string valueFrom ConfigMap entry where the Prometheus JMX Exporter configuration is stored. For details of the structure of this configuration, see the JMX Exporter documentation . ExternalConfigurationReference 13.2.50. ExternalConfigurationReference schema reference Used in: ExternalLogging , JmxPrometheusExporterMetrics Property Description configMapKeyRef Reference to the key in the ConfigMap containing the configuration. For more information, see the external documentation for core/v1 configmapkeyselector . 
ConfigMapKeySelector 13.2.51. InlineLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TopicOperatorSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the InlineLogging type from ExternalLogging . It must have the value inline for the type InlineLogging . Property Description type Must be inline . string loggers A Map from logger name to logger level. map 13.2.52. ExternalLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TopicOperatorSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the ExternalLogging type from InlineLogging . It must have the value external for the type ExternalLogging . Property Description type Must be external . string name The name property has been deprecated, and should now be configured using valueFrom . The property name is removed in API version v1beta2 . The name of the ConfigMap from which to get the logging configuration. string valueFrom ConfigMap entry where the logging configuration is stored. ExternalConfigurationReference 13.2.53. TlsSidecar schema reference Used in: CruiseControlSpec , EntityOperatorSpec , KafkaClusterSpec , TopicOperatorSpec , ZookeeperClusterSpec Full list of TlsSidecar schema properties Configures a TLS sidecar, which is a container that runs in a pod, but serves a supporting purpose. In AMQ Streams, the TLS sidecar uses TLS to encrypt and decrypt communication between components and ZooKeeper. The TLS sidecar is used in: Entity Operator Cruise Control The TLS sidecar is configured using the tlsSidecar property in: Kafka.spec.entityOperator Kafka.spec.cruiseControl The TLS sidecar supports the following additional options: image resources logLevel readinessProbe livenessProbe The resources property specifies the memory and CPU resources allocated for the TLS sidecar. The image property configures the container image which will be used. The readinessProbe and livenessProbe properties configure healthcheck probes for the TLS sidecar. The logLevel property specifies the logging level. The following logging levels are supported: emerg alert crit err warning notice info debug The default value is notice . Example TLS sidecar configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... entityOperator: # ... tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi # ... cruiseControl: # ... tlsSidecar: image: my-org/my-image:latest resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logLevel: debug readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... 13.2.53.1. TlsSidecar schema properties Property Description image The docker image for the container. string livenessProbe Pod liveness checking. Probe logLevel The log level for the TLS sidecar. Default value is notice . string (one of [emerg, debug, crit, err, alert, warning, notice, info]) readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements 13.2.54. 
KafkaClusterTemplate schema reference Used in: KafkaClusterSpec Property Description statefulset Template for Kafka StatefulSet . StatefulSetTemplate pod Template for Kafka Pods . PodTemplate bootstrapService Template for Kafka bootstrap Service . ResourceTemplate brokersService Template for Kafka broker Service . ResourceTemplate externalBootstrapService Template for Kafka external bootstrap Service . ExternalServiceTemplate perPodService Template for Kafka per-pod Services used for access from outside of OpenShift. ExternalServiceTemplate externalBootstrapRoute Template for Kafka external bootstrap Route . ResourceTemplate perPodRoute Template for Kafka per-pod Routes used for access from outside of OpenShift. ResourceTemplate externalBootstrapIngress Template for Kafka external bootstrap Ingress . ResourceTemplate perPodIngress Template for Kafka per-pod Ingress used for access from outside of OpenShift. ResourceTemplate persistentVolumeClaim Template for all Kafka PersistentVolumeClaims . ResourceTemplate podDisruptionBudget Template for Kafka PodDisruptionBudget . PodDisruptionBudgetTemplate kafkaContainer Template for the Kafka broker container. ContainerTemplate tlsSidecarContainer The tlsSidecarContainer property has been deprecated. The property tlsSidecarContainer is removed in API version v1beta2 . Template for the Kafka broker TLS sidecar container. ContainerTemplate initContainer Template for the Kafka init container. ContainerTemplate clusterCaCert Template for Secret with Kafka Cluster certificate public key. ResourceTemplate clusterRoleBinding Template for the Kafka ClusterRoleBinding. ResourceTemplate 13.2.55. StatefulSetTemplate schema reference Used in: KafkaClusterTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate podManagementPolicy PodManagementPolicy which will be used for this StatefulSet. Valid values are Parallel and OrderedReady . Defaults to Parallel . string (one of [OrderedReady, Parallel]) 13.2.56. MetadataTemplate schema reference Used in: DeploymentTemplate , ExternalServiceTemplate , PodDisruptionBudgetTemplate , PodTemplate , ResourceTemplate , StatefulSetTemplate Full list of MetadataTemplate schema properties Labels and Annotations are used to identify and organize resources, and are configured in the metadata property. For example: # ... template: statefulset: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... The labels and annotations fields can contain any labels or annotations that do not contain the reserved string strimzi.io . Labels and annotations containing strimzi.io are used internally by AMQ Streams and cannot be configured. 13.2.56.1. MetadataTemplate schema properties Property Description labels Labels added to the resource template. Can be applied to different resources such as StatefulSets , Deployments , Pods , and Services . map annotations Annotations added to the resource template. Can be applied to different resources such as StatefulSets , Deployments , Pods , and Services . map 13.2.57. PodTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodTemplate schema properties Configures the template for Kafka pods. Example PodTemplate configuration # ... 
template: pod: metadata: labels: label1: value1 annotations: anno1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 # ... 13.2.57.1. hostAliases Use the hostAliases property to specify a list of hosts and IP addresses, which are injected into the /etc/hosts file of the pod. This configuration is especially useful for Kafka Connect or MirrorMaker when users also require a connection outside of the cluster. Example hostAliases configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect #... spec: # ... template: pod: hostAliases: - ip: "192.168.1.86" hostnames: - "my-host-1" - "my-host-2" #... 13.2.57.2. PodTemplate schema properties Property Description metadata Metadata applied to the resource. MetadataTemplate imagePullSecrets List of references to secrets in the same namespace to use for pulling any of the images used by this Pod. When the STRIMZI_IMAGE_PULL_SECRETS environment variable in Cluster Operator and the imagePullSecrets option are specified, only the imagePullSecrets variable is used and the STRIMZI_IMAGE_PULL_SECRETS variable is ignored. For more information, see the external documentation for core/v1 localobjectreference . LocalObjectReference array securityContext Configures pod-level security attributes and common container settings. For more information, see the external documentation for core/v1 podsecuritycontext . PodSecurityContext terminationGracePeriodSeconds The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time the processes are forcibly halted with a kill signal. Set this value to longer than the expected cleanup time for your process. Value must be a non-negative integer. A zero value indicates delete immediately. You might need to increase the grace period for very large Kafka clusters, so that the Kafka brokers have enough time to transfer their work to another broker before they are terminated. Defaults to 30 seconds. integer affinity The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array priorityClassName The name of the priority class used to assign priority to the pods. For more information about priority classes, see Pod Priority and Preemption . string schedulerName The name of the scheduler used to dispatch this Pod . If not specified, the default scheduler will be used. string hostAliases The pod's HostAliases. HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. For more information, see the external documentation for core/v1 HostAlias . HostAlias array topologySpreadConstraints The pod's topology spread constraints. For more information, see the external documentation for core/v1 topologyspreadconstraint . TopologySpreadConstraint array 13.2.58. ResourceTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaUserTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate 13.2.59.
ExternalServiceTemplate schema reference Used in: KafkaClusterTemplate Full list of ExternalServiceTemplate schema properties When exposing Kafka outside of OpenShift using loadbalancers or node ports, you can use properties, in addition to labels and annotations, to customize how a Service is created. An example showing customized external services # ... template: externalBootstrapService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 perPodService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 # ... 13.2.59.1. ExternalServiceTemplate schema properties Property Description metadata Metadata applied to the resource. MetadataTemplate externalTrafficPolicy The externalTrafficPolicy property has been deprecated, and should now be configured using spec.kafka.listeners[].configuration . The property externalTrafficPolicy is removed in API version v1beta2 . Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. Cluster may cause a second hop to another node and obscures the client source IP. Local avoids a second hop for LoadBalancer and NodePort type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use Cluster as the default. string (one of [Local, Cluster]) loadBalancerSourceRanges The loadBalancerSourceRanges property has been deprecated, and should now be configured using spec.kafka.listeners[].configuration . The property loadBalancerSourceRanges is removed in API version v1beta2 . A list of CIDR ranges (for example 10.0.0.0/8 or 130.211.204.1/32 ) from which clients can connect to load balancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is applicable only for loadbalancer type services and is ignored if the cloud provider does not support the feature. For more information, see https://v1-17.docs.kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/ . string array 13.2.60. PodDisruptionBudgetTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodDisruptionBudgetTemplate schema properties AMQ Streams creates a PodDisruptionBudget for every new StatefulSet or Deployment . By default, pod disruption budgets only allow a single pod to be unavailable at a given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property in the PodDisruptionBudget.spec resource. An example of PodDisruptionBudget template # ... template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1 # ... 13.2.60.1. PodDisruptionBudgetTemplate schema properties Property Description metadata Metadata to apply to the PodDisruptionBudgetTemplate resource. MetadataTemplate maxUnavailable Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1. integer 13.2.61.
ContainerTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of ContainerTemplate schema properties You can set custom security context and environment variables for a container. The environment variables are defined under the env property as a list of objects with name and value fields. The following example shows two custom environment variables and a custom security context set for the Kafka broker containers: # ... template: kafkaContainer: env: - name: EXAMPLE_ENV_1 value: example.env.one - name: EXAMPLE_ENV_2 value: example.env.two securityContext: runAsUser: 2000 # ... Environment variables prefixed with KAFKA_ are internal to AMQ Streams and should be avoided. If you set a custom environment variable that is already in use by AMQ Streams, it is ignored and a warning is recorded in the log. 13.2.61.1. ContainerTemplate schema properties Property Description env Environment variables which should be applied to the container. ContainerEnvVar array securityContext Security context for the container. For more information, see the external documentation for core/v1 securitycontext . SecurityContext 13.2.62. ContainerEnvVar schema reference Used in: ContainerTemplate Property Description name The environment variable key. string value The environment variable value. string 13.2.63. ZookeeperClusterSpec schema reference Used in: KafkaSpec Full list of ZookeeperClusterSpec schema properties Configures a ZooKeeper cluster. 13.2.63.1. config Use the config properties to configure ZooKeeper options as keys. Standard Apache ZooKeeper configuration may be provided, restricted to those properties not managed directly by AMQ Streams. Configuration options that cannot be configured relate to: Security (Encryption, Authentication, and Authorization) Listener configuration Configuration of data directories ZooKeeper cluster composition The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the ZooKeeper documentation with the exception of those managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: server. dataDir dataLogDir clientPort authProvider quorum.auth requireClientAuthScheme When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to ZooKeeper. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . Example ZooKeeper configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... zookeeper: # ... config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 1 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" # ... 13.2.63.2. logging ZooKeeper has a configurable logger: zookeeper.root.logger ZooKeeper uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. 
If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... zookeeper: # ... logging: type: inline loggers: zookeeper.root.logger: "INFO" # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... zookeeper: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: zookeeper-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.63.3. ZookeeperClusterSpec schema properties Property Description replicas The number of pods in the cluster. integer image The docker image for the pods. string storage Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim]. EphemeralStorage , PersistentClaimStorage config The ZooKeeper broker config. Properties with the following prefixes cannot be set: server., dataDir, dataLogDir, clientPort, authProvider, quorum.auth, requireClientAuthScheme, snapshot.trust.empty, standaloneEnabled, reconfigEnabled, 4lw.commands.whitelist, secureClientPort, ssl., serverCnxnFactory, sslQuorum (with the exception of: ssl.protocol, ssl.quorum.protocol, ssl.enabledProtocols, ssl.quorum.enabledProtocols, ssl.ciphersuites, ssl.quorum.ciphersuites, ssl.hostnameVerification, ssl.quorum.hostnameVerification). map affinity The affinity property has been deprecated, and should now be configured using spec.zookeeper.template.pod.affinity . The property affinity is removed in API version v1beta2 . The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The tolerations property has been deprecated, and should now be configured using spec.zookeeper.template.pod.tolerations . The property tolerations is removed in API version v1beta2 . The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements metrics The metrics property has been deprecated, and should now be configured using spec.zookeeper.metricsConfig . The property metrics is removed in API version v1beta2 . The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map metricsConfig Metrics configuration. 
The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics logging Logging configuration for ZooKeeper. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging template Template for ZooKeeper cluster resources. The template allows users to specify how the StatefulSet , Pods and Services are generated. ZookeeperClusterTemplate tlsSidecar The tlsSidecar property has been deprecated. The property tlsSidecar is removed in API version v1beta2 . TLS sidecar configuration. The TLS sidecar is not used anymore and this option will be ignored. TlsSidecar 13.2.64. ZookeeperClusterTemplate schema reference Used in: ZookeeperClusterSpec Property Description statefulset Template for ZooKeeper StatefulSet . StatefulSetTemplate pod Template for ZooKeeper Pods . PodTemplate clientService Template for ZooKeeper client Service . ResourceTemplate nodesService Template for ZooKeeper nodes Service . ResourceTemplate persistentVolumeClaim Template for all ZooKeeper PersistentVolumeClaims . ResourceTemplate podDisruptionBudget Template for ZooKeeper PodDisruptionBudget . PodDisruptionBudgetTemplate zookeeperContainer Template for the ZooKeeper container. ContainerTemplate tlsSidecarContainer The tlsSidecarContainer property has been deprecated. The property tlsSidecarContainer is removed in API version v1beta2 . Template for the ZooKeeper server TLS sidecar container. The TLS sidecar is not used anymore and this option will be ignored. ContainerTemplate 13.2.65. TopicOperatorSpec schema reference The type TopicOperatorSpec has been deprecated and is removed in API version v1beta2 . Please use EntityTopicOperatorSpec instead. Used in: KafkaSpec Property Description watchedNamespace The namespace the Topic Operator should watch. string image The image to use for the Topic Operator. string reconciliationIntervalSeconds Interval between periodic reconciliations. integer zookeeperSessionTimeoutSeconds Timeout for the ZooKeeper session. integer affinity The affinity property has been deprecated, and should now be configured using spec.entityOperator.template.pod.affinity . The property affinity is removed in API version v1beta2 . Pod affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements topicMetadataMaxAttempts The number of attempts at getting topic metadata. integer tlsSidecar The tlsSidecar property has been deprecated, and should now be configured using spec.entityOperator.tlsSidecar . The property tlsSidecar is removed in API version v1beta2 . TLS sidecar configuration. TlsSidecar logging Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging jvmOptions JVM Options for pods. JvmOptions livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe startupProbe Pod startup checking. Probe 13.2.66. EntityOperatorSpec schema reference Used in: KafkaSpec Property Description topicOperator Configuration of the Topic Operator. EntityTopicOperatorSpec userOperator Configuration of the User Operator.
EntityUserOperatorSpec affinity The affinity property has been deprecated, and should now be configured using spec.entityOperator.template.pod.affinity . The property affinity is removed in API version v1beta2 . The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The tolerations property has been deprecated, and should now be configured using spec.entityOperator.template.pod.tolerations . The property tolerations is removed in API version v1beta2 . The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array tlsSidecar TLS sidecar configuration. TlsSidecar template Template for Entity Operator resources. The template allows users to specify how the Deployment and Pods are generated. EntityOperatorTemplate 13.2.67. EntityTopicOperatorSpec schema reference Used in: EntityOperatorSpec Full list of EntityTopicOperatorSpec schema properties Configures the Topic Operator. 13.2.67.1. logging The Topic Operator has a configurable logger: rootLogger.level The Topic Operator uses the Apache log4j2 logger implementation. Use the logging property in the entityOperator.topicOperator field of the Kafka resource to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: topic-operator-log4j2.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.67.2. EntityTopicOperatorSpec schema properties Property Description watchedNamespace The namespace the Topic Operator should watch. string image The image to use for the Topic Operator. string reconciliationIntervalSeconds Interval between periodic reconciliations. integer zookeeperSessionTimeoutSeconds Timeout for the ZooKeeper session. integer startupProbe Pod startup checking. Probe livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve.
For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements topicMetadataMaxAttempts The number of attempts at getting topic metadata. integer logging Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging jvmOptions JVM Options for pods. JvmOptions 13.2.68. EntityUserOperatorSpec schema reference Used in: EntityOperatorSpec Full list of EntityUserOperatorSpec schema properties Configures the User Operator. 13.2.68.1. logging The User Operator has a configurable logger: rootLogger.level The User Operator uses the Apache log4j2 logger implementation. Use the logging property in the entityOperator.userOperator field of the Kafka resource to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: user-operator-log4j2.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.68.2. EntityUserOperatorSpec schema properties Property Description watchedNamespace The namespace the User Operator should watch. string image The image to use for the User Operator. string reconciliationIntervalSeconds Interval between periodic reconciliations. integer zookeeperSessionTimeoutSeconds Timeout for the ZooKeeper session. integer secretPrefix The prefix that will be added to the KafkaUser name to be used as the Secret name. string livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements logging Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging jvmOptions JVM Options for pods. JvmOptions 13.2.69. 
EntityOperatorTemplate schema reference Used in: EntityOperatorSpec Property Description deployment Template for Entity Operator Deployment . ResourceTemplate pod Template for Entity Operator Pods . PodTemplate tlsSidecarContainer Template for the Entity Operator TLS sidecar container. ContainerTemplate topicOperatorContainer Template for the Entity Topic Operator container. ContainerTemplate userOperatorContainer Template for the Entity User Operator container. ContainerTemplate 13.2.70. CertificateAuthority schema reference Used in: KafkaSpec Configuration of how TLS certificates are used within the cluster. This applies to certificates used for both internal communication within the cluster and to certificates used for client access via Kafka.spec.kafka.listeners.tls . Property Description generateCertificateAuthority If true then Certificate Authority certificates will be generated automatically. Otherwise the user will need to provide a Secret with the CA certificate. Default is true. boolean generateSecretOwnerReference If true , the Cluster and Client CA Secrets are configured with the ownerReference set to the Kafka resource. If the Kafka resource is deleted when true , the CA Secrets are also deleted. If false , the ownerReference is disabled. If the Kafka resource is deleted when false , the CA Secrets are retained and available for reuse. Default is true . boolean validityDays The number of days generated certificates should be valid for. The default is 365. integer renewalDays The number of days in the certificate renewal period. This is the number of days before a certificate expires during which renewal actions may be performed. When generateCertificateAuthority is true, this will cause the generation of a new certificate. When generateCertificateAuthority is false, this will cause extra logging at WARN level about the pending certificate expiry. Default is 30. integer certificateExpirationPolicy How should CA certificate expiration be handled when generateCertificateAuthority=true . The default is for a new CA certificate to be generated reusing the existing private key. string (one of [replace-key, renew-certificate]) 13.2.71. CruiseControlSpec schema reference Used in: KafkaSpec Property Description image The docker image for the pods. string tlsSidecar TLS sidecar configuration. TlsSidecar resources CPU and memory resources to reserve for the Cruise Control container. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking for the Cruise Control container. Probe readinessProbe Pod readiness checking for the Cruise Control container. Probe jvmOptions JVM Options for the Cruise Control container. JvmOptions logging Logging configuration (Log4j 2) for Cruise Control. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging template Template to specify how Cruise Control resources, Deployments and Pods , are generated. CruiseControlTemplate brokerCapacity The Cruise Control brokerCapacity configuration. BrokerCapacity config The Cruise Control configuration. For a full list of configuration options refer to https://github.com/linkedin/cruise-control/wiki/Configurations .
Note that properties with the following prefixes cannot be set: bootstrap.servers, client.id, zookeeper., network., security., failed.brokers.zk.path, webserver.http., webserver.api.urlprefix, webserver.session.path, webserver.accesslog., two.step., request.reason.required, metric.reporter.sampler.bootstrap.servers, metric.reporter.topic, partition.metric.sample.store.topic, broker.metric.sample.store.topic, capacity.config.file, self.healing., anomaly.detection., ssl. (with the exception of: ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, webserver.http.cors.enabled, webserver.http.cors.origin, webserver.http.cors.exposeheaders). map metrics The metrics property has been deprecated, and should now be configured using spec.cruiseControl.metricsConfig . The property metrics is removed in API version v1beta2 . The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics 13.2.72. CruiseControlTemplate schema reference Used in: CruiseControlSpec Property Description deployment Template for Cruise Control Deployment . ResourceTemplate pod Template for Cruise Control Pods . PodTemplate apiService Template for Cruise Control API Service . ResourceTemplate podDisruptionBudget Template for Cruise Control PodDisruptionBudget . PodDisruptionBudgetTemplate cruiseControlContainer Template for the Cruise Control container. ContainerTemplate tlsSidecarContainer Template for the Cruise Control TLS sidecar container. ContainerTemplate 13.2.73. BrokerCapacity schema reference Used in: CruiseControlSpec Property Description disk Broker capacity for disk in bytes, for example, 100Gi. string cpuUtilization Broker capacity for CPU resource utilization as a percentage (0 - 100). integer inboundNetwork Broker capacity for inbound network throughput in bytes per second, for example, 10000KB/s. string outboundNetwork Broker capacity for outbound network throughput in bytes per second, for example, 10000KB/s. string 13.2.74. KafkaExporterSpec schema reference Used in: KafkaSpec Property Description image The docker image for the pods. string groupRegex Regular expression to specify which consumer groups to collect. Default value is .* . string topicRegex Regular expression to specify which topics to collect. Default value is .* . string resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements logging Only log messages with the given severity or above. Valid levels: [ debug , info , warn , error , fatal ]. Default log level is info . string enableSaramaLogging Enable logging for Sarama, a Go client library used by the Kafka Exporter. boolean template Customization of deployment templates and pods. KafkaExporterTemplate livenessProbe Pod liveness check. Probe readinessProbe Pod readiness check. Probe 13.2.75. KafkaExporterTemplate schema reference Used in: KafkaExporterSpec Property Description deployment Template for Kafka Exporter Deployment . ResourceTemplate pod Template for Kafka Exporter Pods . PodTemplate service Template for Kafka Exporter Service . ResourceTemplate container Template for the Kafka Exporter container. ContainerTemplate
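The Kafka Exporter options described above can be combined in the Kafka custom resource. The following is a minimal sketch, assuming Kafka Exporter is enabled through a kafkaExporter property in Kafka.spec (as its use in KafkaSpec suggests); the regular expressions, log level, and resource values shown are illustrative only, not recommended settings.
Example Kafka Exporter configuration (sketch)
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  kafkaExporter:               # assumed field name exposing KafkaExporterSpec
    topicRegex: ".*"           # collect metrics for all topics (illustrative)
    groupRegex: ".*"           # collect metrics for all consumer groups (illustrative)
    logging: debug
    enableSaramaLogging: true
    resources:
      requests:
        cpu: 200m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
  # ...
13.2.76.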
KafkaStatus schema reference Used in: Kafka Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer listeners Addresses of the internal and external listeners. ListenerStatus array clusterId Kafka cluster Id. string 13.2.77. Condition schema reference Used in: KafkaBridgeStatus , KafkaConnectorStatus , KafkaConnectS2IStatus , KafkaConnectStatus , KafkaMirrorMaker2Status , KafkaMirrorMakerStatus , KafkaRebalanceStatus , KafkaStatus , KafkaTopicStatus , KafkaUserStatus Property Description type The unique identifier of a condition, used to distinguish between other conditions in the resource. string status The status of the condition, either True, False or Unknown. string lastTransitionTime Last time the condition of a type changed from one status to another. The required format is 'yyyy-MM-ddTHH:mm:ssZ', in the UTC time zone. string reason The reason for the condition's last transition (a single word in CamelCase). string message Human-readable message indicating details about the condition's last transition. string 13.2.78. ListenerStatus schema reference Used in: KafkaStatus Property Description type The type of the listener. Can be one of the following three types: plain , tls , and external . string addresses A list of the addresses for this listener. ListenerAddress array bootstrapServers A comma-separated list of host:port pairs for connecting to the Kafka cluster using this listener. string certificates A list of TLS certificates which can be used to verify the identity of the server when connecting to the given listener. Set only for tls and external listeners. string array 13.2.79. ListenerAddress schema reference Used in: ListenerStatus Property Description host The DNS name or IP address of the Kafka bootstrap service. string port The port of the Kafka bootstrap service. integer 13.2.80. KafkaConnect schema reference Property Description spec The specification of the Kafka Connect cluster. KafkaConnectSpec status The status of the Kafka Connect cluster. KafkaConnectStatus 13.2.81. KafkaConnectSpec schema reference Used in: KafkaConnect Full list of KafkaConnectSpec schema properties Configures a Kafka Connect cluster. 13.2.81.1. config Use the config properties to configure Kafka options as keys. Standard Apache Kafka Connect configuration may be provided, restricted to those properties not managed directly by AMQ Streams. Configuration options that cannot be configured relate to: Kafka cluster bootstrap address Security (Encryption, Authentication, and Authorization) Listener / REST interface configuration Plugin path configuration The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, configuration options with keys equal to or starting with one of the following strings are forbidden: ssl. sasl. security. listeners plugin.path rest. bootstrap.servers When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka Connect. Important The Cluster Operator does not validate keys or values in the config object provided. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. 
In this circumstance, fix the configuration in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config object, then the Cluster Operator can roll out the new configuration to all Kafka Connect nodes. Certain options have default values: group.id with default value connect-cluster offset.storage.topic with default value connect-cluster-offsets config.storage.topic with default value connect-cluster-configs status.storage.topic with default value connect-cluster-status key.converter with default value org.apache.kafka.connect.json.JsonConverter value.converter with default value org.apache.kafka.connect.json.JsonConverter These options are automatically configured in case they are not present in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config properties. There are exceptions to the forbidden options. You can use three allowed ssl configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Example Kafka Connect configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS # ... For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. 13.2.81.2. logging Kafka Connect (and Kafka Connect with Source2Image support) has its own configurable loggers: connect.root.logger.level log4j.logger.org.reflections Further loggers are added depending on the Kafka Connect plugins running. Use a curl request to get a complete list of Kafka Connect loggers running from any Kafka broker pod: curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/ Kafka Connect uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. 
For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # ... logging: type: inline loggers: connect.root.logger.level: "INFO" # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: connect-logging.log4j # ... Any available loggers that are not configured have their level set to OFF . If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.81.3. KafkaConnectSpec schema properties Property Description version The Kafka Connect version. Defaults to 2.7.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the Kafka Connect group. integer image The docker image for the pods. string bootstrapServers Bootstrap servers to connect to. This should be given as a comma separated list of <hostname> : <port> pairs. string tls TLS configuration. KafkaConnectTls authentication Authentication configuration for Kafka Connect. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map resources The maximum limits for CPU and memory resources and the requested initial resources. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options. KafkaJmxOptions affinity The affinity property has been deprecated, and should now be configured using spec.template.pod.affinity . The property affinity is removed in API version v1beta2 . The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The tolerations property has been deprecated, and should now be configured using spec.template.pod.tolerations . The property tolerations is removed in API version v1beta2 . The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array logging Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging metrics The metrics property has been deprecated, and should now be configured using spec.metricsConfig . The property metrics is removed in API version v1beta2 . The Prometheus JMX Exporter configuration. 
See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map tracing The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment , Pods and Service are generated. KafkaConnectTemplate externalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. ExternalConfiguration build Configures how the Connect container image should be built. Optional. Build clientRackInitImage The image of the init container used for initializing the client.rack . string metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics rack Configuration of the node label which will be used as the client.rack consumer configuration. Rack 13.2.82. KafkaConnectTls schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec Full list of KafkaConnectTls schema properties Configures TLS trusted certificates for connecting Kafka Connect to the cluster. 13.2.82.1. trustedCertificates Provide a list of secrets using the trustedCertificates property . 13.2.82.2. KafkaConnectTls schema properties Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array 13.2.83. KafkaClientAuthenticationTls schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationTls schema properties To configure TLS client authentication, set the type property to the value tls . TLS client authentication uses a TLS certificate to authenticate. 13.2.83.1. certificateAndKey The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private. You can use the secrets created by the User Operator, or you can create your own TLS certificate file, with the keys used for authentication, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt \ --from-file= MY-PRIVATE.key Note TLS client authentication can only be used with TLS connections. Example TLS client authentication configuration authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key 13.2.83.2. KafkaClientAuthenticationTls schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationTls type from KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value tls for the type KafkaClientAuthenticationTls . Property Description certificateAndKey Reference to the Secret which holds the certificate and private key pair. CertAndKeySecretSource type Must be tls . string 13.2.84. 
KafkaClientAuthenticationScramSha512 schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationScramSha512 schema properties To configure SASL-based SCRAM-SHA-512 authentication, set the type property to scram-sha-512 . The SCRAM-SHA-512 authentication mechanism requires a username and password. 13.2.84.1. username Specify the username in the username property. 13.2.84.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, you can create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for SCRAM-SHA-512 client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret , and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. Example SASL-based SCRAM-SHA-512 client authentication configuration for Kafka Connect authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field 13.2.84.3. KafkaClientAuthenticationScramSha512 schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationScramSha512 type from KafkaClientAuthenticationTls , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value scram-sha-512 for the type KafkaClientAuthenticationScramSha512 . Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be scram-sha-512 . string username Username used for the authentication. string 13.2.85. PasswordSecretSource schema reference Used in: KafkaClientAuthenticationPlain , KafkaClientAuthenticationScramSha512 Property Description password The name of the key in the Secret under which the password is stored. string secretName The name of the Secret containing the password. string 13.2.86. KafkaClientAuthenticationPlain schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationPlain schema properties To configure SASL-based PLAIN authentication, set the type property to plain . SASL PLAIN authentication mechanism requires a username and password. Warning The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled. 13.2.86.1. username Specify the username in the username property. 13.2.86.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. 
If required, create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for PLAIN client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. An example SASL based PLAIN client authentication configuration authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name 13.2.86.3. KafkaClientAuthenticationPlain schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationPlain type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationOAuth . It must have the value plain for the type KafkaClientAuthenticationPlain . Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be plain . string username Username used for the authentication. string 13.2.87. KafkaClientAuthenticationOAuth schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationOAuth schema properties To configure OAuth client authentication, set the type property to oauth . OAuth authentication can be configured using one of the following options: Client ID and secret Client ID and refresh token Access token TLS Client ID and secret You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID and client secret used in authentication. The OAuth client will connect to the OAuth server, authenticate using the client ID and secret and get an access token which it will use to authenticate with the Kafka broker. In the clientSecret property, specify a link to a Secret containing the client secret. An example of OAuth client authentication using client ID and client secret authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret Client ID and refresh token You can configure the address of your OAuth server in the tokenEndpointUri property together with the OAuth client ID and refresh token. The OAuth client will connect to the OAuth server, authenticate using the client ID and refresh token and get an access token which it will use to authenticate with the Kafka broker. In the refreshToken property, specify a link to a Secret containing the refresh token. 
An example of OAuth client authentication using client ID and refresh token authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token Access token You can configure the access token used for authentication with the Kafka broker directly. In this case, you do not specify the tokenEndpointUri . In the accessToken property, specify a link to a Secret containing the access token. An example of OAuth client authentication using only an access token authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token TLS Accessing the OAuth server using the HTTPS protocol does not require any additional configuration as long as the TLS certificates used by it are signed by a trusted certification authority and its hostname is listed in the certificate. If your OAuth server is using certificates which are self-signed or are signed by a certification authority which is not trusted, you can configure a list of trusted certificates in the custom resource. The tlsTrustedCertificates property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format. An example of TLS certificates provided authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt By default, the OAuth client verifies that the hostname of your OAuth server matches either the certificate subject or one of the alternative DNS names. If hostname verification is not required, you can disable it. An example of disabled TLS hostname verification authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true 13.2.87.1. KafkaClientAuthenticationOAuth schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationOAuth type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain . It must have the value oauth for the type KafkaClientAuthenticationOAuth . Property Description accessToken Link to OpenShift Secret containing the access token which was obtained from the authorization server. GenericSecretSource accessTokenIsJwt Configure whether access token should be treated as JWT. This should be set to false if the authorization server returns opaque tokens. Defaults to true . boolean clientId OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. string clientSecret Link to OpenShift Secret containing the OAuth client secret which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. GenericSecretSource disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean maxTokenExpirySeconds Set or limit time-to-live of the access tokens to the specified number of seconds. This should be set if the authorization server returns opaque tokens.
integer refreshToken Link to OpenShift Secret containing the refresh token which can be used to obtain access token from the authorization server. GenericSecretSource scope OAuth scope to use when authenticating against the authorization server. Some authorization servers require this to be set. The possible values depend on how authorization server is configured. By default scope is not specified when doing the token endpoint request. string tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array tokenEndpointUri Authorization server token endpoint URI. string type Must be oauth . string 13.2.88. JaegerTracing schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes use of the JaegerTracing type from other subtypes which may be added in the future. It must have the value jaeger for the type JaegerTracing . Property Description type Must be jaeger . string 13.2.89. KafkaConnectTemplate schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Property Description deployment Template for Kafka Connect Deployment . DeploymentTemplate pod Template for Kafka Connect Pods . PodTemplate apiService Template for Kafka Connect API Service . ResourceTemplate buildConfig Template for the Kafka Connect BuildConfig used to build new container images. The BuildConfig is used only on OpenShift. ResourceTemplate buildContainer Template for the Kafka Connect Build container. The build container is used only on OpenShift. ContainerTemplate buildPod Template for Kafka Connect Build Pods . The build pod is used only on OpenShift. PodTemplate clusterRoleBinding Template for the Kafka Connect ClusterRoleBinding. ResourceTemplate connectContainer Template for the Kafka Connect container. ContainerTemplate initContainer Template for the Kafka init container. ContainerTemplate podDisruptionBudget Template for Kafka Connect PodDisruptionBudget . PodDisruptionBudgetTemplate 13.2.90. DeploymentTemplate schema reference Used in: KafkaBridgeTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate deploymentStrategy DeploymentStrategy which will be used for this Deployment. Valid values are RollingUpdate and Recreate . Defaults to RollingUpdate . string (one of [RollingUpdate, Recreate]) 13.2.91. ExternalConfiguration schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Full list of ExternalConfiguration schema properties Configures external storage properties that define configuration options for Kafka Connect connectors. You can mount ConfigMaps or Secrets into a Kafka Connect pod as environment variables or volumes. Volumes and environment variables are configured in the externalConfiguration property in KafkaConnect.spec and KafkaConnectS2I.spec . When applied, the environment variables and volumes are available for use when developing your connectors. 13.2.91.1. env The env property is used to specify one or more environment variables. These variables can contain a value from either a ConfigMap or a Secret. 
Example Secret containing values for environment variables apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE= Note The names of user-defined environment variables cannot start with KAFKA_ or STRIMZI_ . To mount a value from a Secret to an environment variable, use the valueFrom property and the secretKeyRef . Example environment variables set to values from a Secret apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey A common use case for mounting Secrets to environment variables is when your connector needs to communicate with Amazon AWS and needs to read the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables with credentials. To mount a value from a ConfigMap to an environment variable, use configMapKeyRef in the valueFrom property as shown in the following example. Example environment variables set to values from a ConfigMap apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key 13.2.91.2. volumes You can also mount ConfigMaps or Secrets to a Kafka Connect pod as volumes. Using volumes instead of environment variables is useful in the following scenarios: Mounting truststores or keystores with TLS certificates Mounting a properties file that is used to configure Kafka Connect connectors Example Secret with properties apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-user 2 dbPassword: my-password 1 The connector configuration in properties file format. 2 Database username and password properties used in the configuration. In this example, a Secret named mysecret is mounted to a volume named connector-config . In the config property, a configuration provider ( FileConfigProvider ) is specified, which will load configuration values from external sources. The Kafka FileConfigProvider is given the alias file , and will read and extract database username and password property values from the file to use in the connector configuration. Example external volumes set to values from a Secret apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 #... externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4 1 The alias for the configuration provider, which is used to define other configuration parameters. Use a comma-separated list if you want to add more than one provider. 2 The FileConfigProvider is the configuration provider that provides values from properties files. The parameter uses the alias from config.providers , taking the form config.providers.${alias}.class . 3 The name of the volume containing the Secret. Each volume must specify a name in the name property and a reference to ConfigMap or Secret. 4 The name of the Secret. The volumes are mounted inside the Kafka Connect containers in the path /opt/kafka/external-configuration/ <volume-name> .
For example, the files from a volume named connector-config would appear in the directory /opt/kafka/external-configuration/connector-config . The FileConfigProvider is used to read the values from the mounted properties files in connector configurations. 13.2.91.3. ExternalConfiguration schema properties Property Description env Allows to pass data from Secret or ConfigMap to the Kafka Connect pods as environment variables. ExternalConfigurationEnv array volumes Allows to pass data from Secret or ConfigMap to the Kafka Connect pods as volumes. ExternalConfigurationVolumeSource array 13.2.92. ExternalConfigurationEnv schema reference Used in: ExternalConfiguration Property Description name Name of the environment variable which will be passed to the Kafka Connect pods. The name of the environment variable cannot start with KAFKA_ or STRIMZI_ . string valueFrom Value of the environment variable which will be passed to the Kafka Connect pods. It can be passed either as a reference to Secret or ConfigMap field. The field has to specify exactly one Secret or ConfigMap. ExternalConfigurationEnvVarSource 13.2.93. ExternalConfigurationEnvVarSource schema reference Used in: ExternalConfigurationEnv Property Description configMapKeyRef Reference to a key in a ConfigMap. For more information, see the external documentation for core/v1 configmapkeyselector . ConfigMapKeySelector secretKeyRef Reference to a key in a Secret. For more information, see the external documentation for core/v1 secretkeyselector . SecretKeySelector 13.2.94. ExternalConfigurationVolumeSource schema reference Used in: ExternalConfiguration Property Description configMap Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 configmapvolumesource . ConfigMapVolumeSource name Name of the volume which will be added to the Kafka Connect pods. string secret Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 secretvolumesource . SecretVolumeSource 13.2.95. Build schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec Full list of Build schema properties Configures additional connectors for Kafka Connect deployments. 13.2.95.1. output To build new container images with additional connector plugins, AMQ Streams requires a container registry where the images can be pushed to, stored, and pulled from. AMQ Streams does not run its own container registry, so a registry must be provided. AMQ Streams supports private container registries as well as public registries such as Quay or Docker Hub . The container registry is configured in the .spec.build.output section of the KafkaConnect custom resource. The output configuration, which is required, supports two types: docker and imagestream . Using Docker registry To use a Docker registry, you have to specify the type as docker , and the image field with the full name of the new container image. The full name must include: The address of the registry Port number (if listening on a non-standard port) The tag of the new container image Example valid container image names: docker.io/my-org/my-image/my-tag quay.io/my-org/my-image/my-tag image-registry.image-registry.svc:5000/myproject/kafka-connect-build:latest Each Kafka Connect deployment must use a separate image, which can mean different tags at the most basic level. 
If the registry requires authentication, use the pushSecret to set a name of the Secret with the registry credentials. For the Secret, use the kubernetes.io/dockerconfigjson type and a .dockerconfigjson file to contain the Docker credentials. For more information on pulling an image from a private registry, see Create a Secret based on existing Docker credentials . Example output configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: type: docker 1 image: my-registry.io/my-org/my-connect-cluster:latest 2 pushSecret: my-registry-credentials 3 #... 1 (Required) Type of output used by AMQ Streams. 2 (Required) Full name of the image used, including the repository and tag. 3 (Optional) Name of the secret with the container registry credentials. Using OpenShift ImageStream Instead of Docker, you can use OpenShift ImageStream to store a new container image. The ImageStream has to be created manually before deploying Kafka Connect. To use ImageStream, set the type to imagestream , and use the image property to specify the name of the ImageStream and the tag used. For example, my-connect-image-stream:latest . Example output configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: type: imagestream 1 image: my-connect-build:latest 2 #... 1 (Required) Type of output used by AMQ Streams. 2 (Required) Name of the ImageStream and tag. 13.2.95.2. plugins Connector plugins are a set of files that define the implementation required to connect to certain types of external system. The connector plugins required for a container image must be configured using the .spec.build.plugins property of the KafkaConnect custom resource. Each connector plugin must have a name which is unique within the Kafka Connect deployment. Additionally, the plugin artifacts must be listed. These artifacts are downloaded by AMQ Streams, added to the new container image, and used in the Kafka Connect deployment. The connector plugin artifacts can also include additional components, such as (de)serializers. Each connector plugin is downloaded into a separate directory so that the different connectors and their dependencies are properly sandboxed . Each plugin must be configured with at least one artifact . Example plugins configuration with two connector plugins apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: 1 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.tar.gz sha512sum: 962a12151bdf9a5a30627eebac739955a4fd95a08d373b86bdcea2b4d0c27dd6e1edd5cb548045e115e33a9e69b1b2a352bee24df035a0447cb820077af00c03 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.7.0/camel-telegram-kafka-connector-0.7.0-package.tar.gz sha512sum: a9b1ac63e3284bea7836d7d24d84208c49cdf5600070e6bd1535de654f6920b74ad950d51733e8020bf4187870699819f54ef5859c7846ee4081507f48873479 #... 1 (Required) List of connector plugins and their artifacts. AMQ Streams supports two types of artifacts: * JAR files, which are downloaded and used directly * TGZ archives, which are downloaded and unpacked Important AMQ Streams does not perform any security scanning of the downloaded artifacts. 
For security reasons, you should first verify the artifacts manually, and configure the checksum verification to make sure the same artifact is used in the automated build and in the Kafka Connect deployment. Using JAR artifacts JAR artifacts represent a resource which is downloaded and added to a container image. JAR artifacts are mainly used for downloading JAR files, but they can also be used to download other file types. To use JAR artifacts, set the type property to jar , and specify the download location using the url property. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, AMQ Streams will verify the checksum of the artifact while building the new container image. Example JAR artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: jar 1 url: https://my-domain.tld/my-jar.jar 2 sha512sum: 589...ab4 3 - type: jar url: https://my-domain.tld/my-jar2.jar #... 1 (Required) Type of artifact. 2 (Required) URL from which the artifact is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. Using TGZ artifacts TGZ artifacts are used to download TAR archives that have been compressed using Gzip compression. The TGZ artifact can contain the whole Kafka Connect connector, even when comprising multiple different files. The TGZ artifact is automatically downloaded and unpacked by AMQ Streams while building the new container image. To use TGZ artifacts, set the type property to tgz , and specify the download location using the url property. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, AMQ Streams will verify the checksum before unpacking it and building the new container image. Example TGZ artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: tgz 1 url: https://my-domain.tld/my-connector-archive.jar 2 sha512sum: 158...jg10 3 #... 1 (Required) Type of artifact. 2 (Required) URL from which the archive is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. 13.2.95.3. Build schema properties Property Description output Configures where should the newly built image be stored. Required. The type depends on the value of the output.type property within the given object, which must be one of [docker, imagestream]. DockerOutput , ImageStreamOutput resources CPU and memory resources to reserve for the build. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements plugins List of connector plugins which should be added to the Kafka Connect. Required. Plugin array 13.2.96. DockerOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the DockerOutput type from ImageStreamOutput . It must have the value docker for the type DockerOutput . Property Description image The full name which should be used for tagging and pushing the newly built image. For example quay.io/my-organization/my-custom-connect:latest . Required. string pushSecret Container Registry Secret with the credentials for pushing the newly built image. string additionalKanikoOptions Configures additional options which will be passed to the Kaniko executor when building the new Connect image.
Allowed options are: --customPlatform, --insecure, --insecure-pull, --insecure-registry, --log-format, --log-timestamp, --registry-mirror, --reproducible, --single-snapshot, --skip-tls-verify, --skip-tls-verify-pull, --skip-tls-verify-registry, --verbosity, --snapshotMode, --use-new-run. These options are used only where the Kaniko executor is used to build the new image. They are ignored on OpenShift, where the image is built using an OpenShift BuildConfig instead. The options are described in the Kaniko GitHub repository . Changing this field does not trigger a new build of the Kafka Connect image. string array type Must be docker . string 13.2.97. ImageStreamOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the ImageStreamOutput type from DockerOutput . It must have the value imagestream for the type ImageStreamOutput . Property Description image The name and tag of the ImageStream where the newly built image will be pushed. For example my-custom-connect:latest . Required. string type Must be imagestream . string 13.2.98. Plugin schema reference Used in: Build Property Description name The unique name of the connector plugin. Will be used to generate the path where the connector artifacts will be stored. The name has to be unique within the KafkaConnect resource. The name has to follow the following pattern: ^[a-z][-_a-z0-9]*[a-z]$ . Required. string artifacts List of artifacts which belong to this connector plugin. Required. JarArtifact , TgzArtifact , ZipArtifact array 13.2.99. JarArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. string type Must be jar . string 13.2.100. TgzArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. string type Must be tgz . string 13.2.101. ZipArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. string type Must be zip . string 13.2.102. KafkaConnectStatus schema reference Used in: KafkaConnect Property Description conditions List of status conditions.
Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.103. ConnectorPlugin schema reference Used in: KafkaConnectS2IStatus , KafkaConnectStatus , KafkaMirrorMaker2Status Property Description type The type of the connector plugin. The available types are sink and source . string version The version of the connector plugin. string class The class of the connector plugin. string 13.2.104. KafkaConnectS2I schema reference The type KafkaConnectS2I has been deprecated. Please use Build instead. Property Description spec The specification of the Kafka Connect Source-to-Image (S2I) cluster. KafkaConnectS2ISpec status The status of the Kafka Connect Source-to-Image (S2I) cluster. KafkaConnectS2IStatus 13.2.105. KafkaConnectS2ISpec schema reference Used in: KafkaConnectS2I Full list of KafkaConnectS2ISpec schema properties Configures a Kafka Connect cluster with Source-to-Image (S2I) support. When extending Kafka Connect with connector plugins on OpenShift (only), you can use OpenShift builds and S2I to create a container image that is used by the Kafka Connect deployment. The configuration options are similar to Kafka Connect configuration using the KafkaConnectSpec schema . 13.2.105.1. KafkaConnectS2ISpec schema properties Property Description version The Kafka Connect version. Defaults to 2.7.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the Kafka Connect group. integer image The docker image for the pods. string buildResources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements bootstrapServers Bootstrap servers to connect to. This should be given as a comma separated list of <hostname> : <port> pairs. string tls TLS configuration. KafkaConnectTls authentication Authentication configuration for Kafka Connect. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map resources The maximum limits for CPU and memory resources and the requested initial resources. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options. KafkaJmxOptions affinity The affinity property has been deprecated, and should now be configured using spec.template.pod.affinity . The property affinity is removed in API version v1beta2 . 
The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The tolerations property has been deprecated, and should now be configured using spec.template.pod.tolerations . The property tolerations is removed in API version v1beta2 . The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array logging Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging metrics The metrics property has been deprecated, and should now be configured using spec.metricsConfig . The property metrics is removed in API version v1beta2 . The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map tracing The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment , Pods and Service are generated. KafkaConnectTemplate externalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. ExternalConfiguration build Configures how the Connect container image should be built. Optional. Build clientRackInitImage The image of the init container used for initializing the client.rack . string insecureSourceRepository When true this configures the source repository with the 'Local' reference policy and an import policy that accepts insecure source tags. boolean metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics rack Configuration of the node label which will be used as the client.rack consumer configuration. Rack 13.2.106. KafkaConnectS2IStatus schema reference Used in: KafkaConnectS2I Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array buildConfigName The name of the build configuration. string labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.107. KafkaTopic schema reference Property Description spec The specification of the topic. KafkaTopicSpec status The status of the topic. KafkaTopicStatus 13.2.108. KafkaTopicSpec schema reference Used in: KafkaTopic Property Description partitions The number of partitions the topic should have. This cannot be decreased after topic creation. It can be increased after topic creation, but it is important to understand the consequences that has, especially for topics with semantic partitioning. integer replicas The number of replicas the topic should have. integer config The topic configuration. map topicName The name of the topic. When absent this will default to the metadata.name of the topic. 
It is recommended to not set this unless the topic name is not a valid OpenShift resource name. string 13.2.109. KafkaTopicStatus schema reference Used in: KafkaTopic Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer 13.2.110. KafkaUser schema reference Property Description spec The specification of the user. KafkaUserSpec status The status of the Kafka User. KafkaUserStatus 13.2.111. KafkaUserSpec schema reference Used in: KafkaUser Property Description authentication Authentication mechanism enabled for this Kafka user. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512]. KafkaUserTlsClientAuthentication , KafkaUserScramSha512ClientAuthentication authorization Authorization rules for this Kafka user. The type depends on the value of the authorization.type property within the given object, which must be one of [simple]. KafkaUserAuthorizationSimple quotas Quotas on requests to control the broker resources used by clients. Network bandwidth and request rate quotas can be enforced.Kafka documentation for Kafka User quotas can be found at http://kafka.apache.org/documentation/#design_quotas . KafkaUserQuotas template Template to specify how Kafka User Secrets are generated. KafkaUserTemplate 13.2.112. KafkaUserTlsClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserTlsClientAuthentication type from KafkaUserScramSha512ClientAuthentication . It must have the value tls for the type KafkaUserTlsClientAuthentication . Property Description type Must be tls . string 13.2.113. KafkaUserScramSha512ClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserScramSha512ClientAuthentication type from KafkaUserTlsClientAuthentication . It must have the value scram-sha-512 for the type KafkaUserScramSha512ClientAuthentication . Property Description type Must be scram-sha-512 . string 13.2.114. KafkaUserAuthorizationSimple schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserAuthorizationSimple type from other subtypes which may be added in the future. It must have the value simple for the type KafkaUserAuthorizationSimple . Property Description type Must be simple . string acls List of ACL rules which should be applied to this user. AclRule array 13.2.115. AclRule schema reference Used in: KafkaUserAuthorizationSimple Full list of AclRule schema properties Configures access control rule for a KafkaUser when brokers are using the AclAuthorizer . Example KafkaUser configuration with authorization apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... authorization: type: simple acls: - resource: type: topic name: my-topic patternType: literal operation: Read - resource: type: topic name: my-topic patternType: literal operation: Describe - resource: type: group name: my-group patternType: prefix operation: Read 13.2.115.1. resource Use the resource property to specify the resource that the rule applies to. 
Simple authorization supports four resource types, which are specified in the type property: Topics ( topic ) Consumer Groups ( group ) Clusters ( cluster ) Transactional IDs ( transactionalId ) For Topic, Group, and Transactional ID resources you can specify the name of the resource the rule applies to in the name property. Cluster type resources have no name. A name is specified as a literal or a prefix using the patternType property. Literal names are taken exactly as they are specified in the name field. Prefix names use the value from the name as a prefix, and will apply the rule to all resources with names starting with the value. 13.2.115.2. type The type of rule, which is to allow or deny (not currently supported) an operation. The type field is optional. If type is unspecified, the ACL rule is treated as an allow rule. 13.2.115.3. operation Specify an operation for the rule to allow or deny. The following operations are supported: Read Write Delete Alter Describe All IdempotentWrite ClusterAction Create AlterConfigs DescribeConfigs Only certain operations work with each resource. For more details about AclAuthorizer , ACLs and supported combinations of resources and operations, see Authorization and ACLs . 13.2.115.4. host Use the host property to specify a remote host from which the rule is allowed or denied. Use an asterisk ( * ) to allow or deny the operation from all hosts. The host field is optional. If host is unspecified, the * value is used by default. 13.2.115.5. AclRule schema properties Property Description host The host from which the action described in the ACL rule is allowed or denied. string operation Operation which will be allowed or denied. Supported operations are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, AlterConfigs, DescribeConfigs, IdempotentWrite and All. string (one of [Read, Write, Delete, Alter, Describe, All, IdempotentWrite, ClusterAction, Create, AlterConfigs, DescribeConfigs]) resource Indicates the resource for which given ACL rule applies. The type depends on the value of the resource.type property within the given object, which must be one of [topic, group, cluster, transactionalId]. AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource type The type of the rule. Currently the only supported type is allow . ACL rules with type allow are used to allow user to execute the specified operations. Default value is allow . string (one of [allow, deny]) 13.2.116. AclRuleTopicResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTopicResource type from AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value topic for the type AclRuleTopicResource . Property Description type Must be topic . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal]) 13.2.117. 
AclRuleGroupResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleGroupResource type from AclRuleTopicResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value group for the type AclRuleGroupResource . Property Description type Must be group . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal]) 13.2.118. AclRuleClusterResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleClusterResource type from AclRuleTopicResource , AclRuleGroupResource , AclRuleTransactionalIdResource . It must have the value cluster for the type AclRuleClusterResource . Property Description type Must be cluster . string 13.2.119. AclRuleTransactionalIdResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTransactionalIdResource type from AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource . It must have the value transactionalId for the type AclRuleTransactionalIdResource . Property Description type Must be transactionalId . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal]) 13.2.120. KafkaUserQuotas schema reference Used in: KafkaUserSpec Full list of KafkaUserQuotas schema properties Kafka allows a user to set quotas to control the use of resources by clients. 13.2.120.1. quotas Quotas split into two categories: Network usage quotas, which are defined as the byte rate threshold for each group of clients sharing a quota CPU utilization quotas, which are defined as the percentage of time a client can utilize on request handler I/O threads and network threads of each broker within a quota window Using quotas for Kafka clients might be useful in a number of situations. Consider a wrongly configured Kafka producer which is sending requests at too high a rate. Such misconfiguration can cause a denial of service to other clients, so the problematic client ought to be blocked. By using a network limiting quota, it is possible to prevent this situation from significantly impacting other clients. AMQ Streams supports user-level quotas, but not client-level quotas. An example Kafka user quotas spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 For more info about Kafka user quotas, refer to the Apache Kafka documentation . 13.2.120.2. KafkaUserQuotas schema properties Property Description consumerByteRate A quota on the maximum bytes per-second that each client group can fetch from a broker before the clients in the group are throttled. Defined on a per-broker basis. 
integer producerByteRate A quota on the maximum bytes per-second that each client group can publish to a broker before the clients in the group are throttled. Defined on a per-broker basis. integer requestPercentage A quota on the maximum CPU utilization of each client group as a percentage of network and I/O threads. integer 13.2.121. KafkaUserTemplate schema reference Used in: KafkaUserSpec Full list of KafkaUserTemplate schema properties Specify additional labels and annotations for the secret created by the User Operator. An example showing the KafkaUserTemplate apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 # ... 13.2.121.1. KafkaUserTemplate schema properties Property Description secret Template for KafkaUser resources. The template allows users to specify how the Secret with password or TLS certificates is generated. ResourceTemplate 13.2.122. KafkaUserStatus schema reference Used in: KafkaUser Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer username Username. string secret The name of Secret where the credentials are stored. string 13.2.123. KafkaMirrorMaker schema reference Property Description spec The specification of Kafka MirrorMaker. KafkaMirrorMakerSpec status The status of Kafka MirrorMaker. KafkaMirrorMakerStatus 13.2.124. KafkaMirrorMakerSpec schema reference Used in: KafkaMirrorMaker Full list of KafkaMirrorMakerSpec schema properties Configures Kafka MirrorMaker. 13.2.124.1. whitelist Use the whitelist property to configure a list of topics that Kafka MirrorMaker mirrors from the source to the target Kafka cluster. The property allows any regular expression from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using "A|B" or all topics using "*". You can also pass multiple regular expressions separated by commas to the Kafka MirrorMaker. 13.2.124.2. KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec Use the KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec to configure source (consumer) and target (producer) clusters. Kafka MirrorMaker always works together with two Kafka clusters (source and target). To establish a connection, the bootstrap servers for the source and the target Kafka clusters are specified as comma-separated lists of HOSTNAME:PORT pairs. Each comma-separated list contains one or more Kafka brokers or a Service pointing to Kafka brokers specified as a HOSTNAME:PORT pair. 13.2.124.3. logging Kafka MirrorMaker has its own configurable logger: mirrormaker.root.logger MirrorMaker uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. 
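For illustration, a sketch of such a ConfigMap follows. The ConfigMap name customConfigMap and the key mirror-maker-log4j.properties match the external logging example shown below; the log4j.properties content is a minimal assumption, and a real configuration would normally also tune the appenders and levels to your needs.

Example ConfigMap with external logging configuration for MirrorMaker (illustration)

apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  mirror-maker-log4j.properties: |
    # Minimal sketch of a log4j configuration for Kafka MirrorMaker
    mirrormaker.root.logger=INFO
    log4j.rootLogger=${mirrormaker.root.logger}, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n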
A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: inline loggers: mirrormaker.root.logger: "INFO" # ... apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.124.4. KafkaMirrorMakerSpec schema properties Property Description version The Kafka MirrorMaker version. Defaults to 2.7.0. Consult the documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the Deployment . integer image The docker image for the pods. string consumer Configuration of source cluster. KafkaMirrorMakerConsumerSpec producer Configuration of target cluster. KafkaMirrorMakerProducerSpec resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements whitelist List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the whitelist 'A|B' . Or, as a special case, you can mirror all topics using the whitelist '*'. You can also specify multiple regular expressions separated by commas. string affinity The affinity property has been deprecated, and should now be configured using spec.template.pod.affinity . The property affinity is removed in API version v1beta2 . The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The tolerations property has been deprecated, and should now be configured using spec.template.pod.tolerations . The property tolerations is removed in API version v1beta2 . The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array jvmOptions JVM Options for pods. JvmOptions logging Logging configuration for MirrorMaker. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging metrics The metrics property has been deprecated, and should now be configured using spec.metricsConfig . The property metrics is removed in API version v1beta2 . The Prometheus JMX Exporter configuration. See JMX Exporter documentation for details of the structure of this configuration. map metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics tracing The configuration of tracing in Kafka MirrorMaker. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template to specify how Kafka MirrorMaker resources, Deployments and Pods , are generated. 
KafkaMirrorMakerTemplate livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe 13.2.125. KafkaMirrorMakerConsumerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerConsumerSpec schema properties Configures a MirrorMaker consumer. 13.2.125.1. numStreams Use the consumer.numStreams property to configure the number of streams for the consumer. You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel. 13.2.125.2. offsetCommitInterval Use the consumer.offsetCommitInterval property to configure an offset auto-commit interval for the consumer. You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000. 13.2.125.3. config Use the consumer.config properties to configure Kafka options for the consumer. The config property contains the Kafka MirrorMaker consumer configuration options as keys, with values set in one of the following JSON types: String Number Boolean For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Interceptors Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: bootstrap.servers group.id interceptor.classes ssl. ( not including specific exceptions ) sasl. security. When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker. Important The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.consumer.config object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker. 13.2.125.4. groupId Use the consumer.groupId property to configure a consumer group identifier for the consumer. Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions. 13.2.125.5. KafkaMirrorMakerConsumerSpec schema properties Property Description numStreams Specifies the number of consumer stream threads to create. integer offsetCommitInterval Specifies the offset auto-commit interval in ms. Default value is 60000. integer bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. 
string groupId A unique string that identifies the consumer group this consumer belongs to. string authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker consumer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map tls TLS configuration for connecting MirrorMaker to the cluster. KafkaMirrorMakerTls 13.2.126. KafkaMirrorMakerTls schema reference Used in: KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaMirrorMakerTls schema properties Configures TLS trusted certificates for connecting MirrorMaker to the cluster. 13.2.126.1. trustedCertificates Provide a list of secrets using the trustedCertificates property . 13.2.126.2. KafkaMirrorMakerTls schema properties Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array 13.2.127. KafkaMirrorMakerProducerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerProducerSpec schema properties Configures a MirrorMaker producer. 13.2.127.1. abortOnSendFailure Use the producer.abortOnSendFailure property to configure how to handle message send failure from the producer. By default, if an error occurs when sending a message from Kafka MirrorMaker to a Kafka cluster: The Kafka MirrorMaker container is terminated in OpenShift. The container is then recreated. If the abortOnSendFailure option is set to false , message sending errors are ignored. 13.2.127.2. config Use the producer.config properties to configure Kafka options for the producer. The config property contains the Kafka MirrorMaker producer configuration options as keys, with values set in one of the following JSON types: String Number Boolean For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers . However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Interceptors Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: bootstrap.servers interceptor.classes ssl. ( not including specific exceptions ) sasl. security. When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker. Important The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. 
In such cases, the configuration in the KafkaMirrorMaker.spec.producer.config object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker. 13.2.127.3. KafkaMirrorMakerProducerSpec schema properties Property Description bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. string abortOnSendFailure Flag to set the MirrorMaker to exit on a failed send. Default value is true . boolean authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker producer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map tls TLS configuration for connecting MirrorMaker to the cluster. KafkaMirrorMakerTls 13.2.128. KafkaMirrorMakerTemplate schema reference Used in: KafkaMirrorMakerSpec Property Description deployment Template for Kafka MirrorMaker Deployment . DeploymentTemplate pod Template for Kafka MirrorMaker Pods . PodTemplate mirrorMakerContainer Template for Kafka MirrorMaker container. ContainerTemplate podDisruptionBudget Template for Kafka MirrorMaker PodDisruptionBudget . PodDisruptionBudgetTemplate 13.2.129. KafkaMirrorMakerStatus schema reference Used in: KafkaMirrorMaker Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.130. KafkaBridge schema reference Property Description spec The specification of the Kafka Bridge. KafkaBridgeSpec status The status of the Kafka Bridge. KafkaBridgeStatus 13.2.131. KafkaBridgeSpec schema reference Used in: KafkaBridge Full list of KafkaBridgeSpec schema properties Configures a Kafka Bridge cluster. Configuration options relate to: Kafka cluster bootstrap address Security (Encryption, Authentication, and Authorization) Consumer configuration Producer configuration HTTP configuration 13.2.131.1. logging Kafka Bridge has its own configurable loggers: logger.bridge logger. <operation-id> You can replace <operation-id> in the logger. <operation-id> logger to set log levels for specific operations: createConsumer deleteConsumer subscribe unsubscribe poll assign commit send sendToPartition seekToBeginning seekToEnd seek healthy ready openapi Each operation is defined according OpenAPI specification, and has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to create fine-grained logging information about the incoming and outgoing HTTP requests. Each logger has to be configured assigning it a name as http.openapi.operation. <operation-id> . For example, configuring the logging level for the send operation logger means defining the following: Kafka Bridge uses the Apache log4j2 logger implementation. 
Loggers are defined in the log4j2.properties file, whose default configuration sets the loggers for the healthy and ready endpoints to WARN . The log level of all other operations is set to INFO by default. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. The logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. Default logging is used if the name or key is not set. Inside the ConfigMap, the logging configuration is described using log4j.properties . For more information about log levels, see Apache logging services . The following examples show inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # ... logging: type: inline loggers: logger.bridge.level: "INFO" # enabling DEBUG just for send operation logger.send.name: "http.openapi.operation.send" logger.send.level: "DEBUG" # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: bridge-logj42.properties # ... Any available loggers that are not configured have their level set to OFF . If the Kafka Bridge was deployed using the Cluster Operator, changes to Kafka Bridge logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.131.2. KafkaBridgeSpec schema properties Property Description replicas The number of pods in the Deployment . integer image The docker image for the pods. string bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. string tls TLS configuration for connecting Kafka Bridge to the cluster. KafkaBridgeTls authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth http The HTTP related configuration. KafkaBridgeHttpConfig consumer Kafka consumer related configuration. KafkaBridgeConsumerSpec producer Kafka producer related configuration. KafkaBridgeProducerSpec resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements jvmOptions Currently not supported JVM Options for pods. JvmOptions logging Logging configuration for Kafka Bridge. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging enableMetrics Enable the metrics for the Kafka Bridge. Default is false. boolean livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe template Template for Kafka Bridge resources. The template allows users to specify how the Deployment and Pods are generated. KafkaBridgeTemplate tracing The configuration of tracing in Kafka Bridge.
The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing 13.2.132. KafkaBridgeTls schema reference Used in: KafkaBridgeSpec Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array 13.2.133. KafkaBridgeHttpConfig schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeHttpConfig schema properties Configures HTTP access to a Kafka cluster for the Kafka Bridge. The default HTTP configuration is for the Kafka Bridge to listen on port 8080. 13.2.133.1. cors As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is a HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP access methods. For the origins, you can use a URL or a Java regular expression. Example Kafka Bridge HTTP configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... http: port: 8080 cors: allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" # ... 13.2.133.2. KafkaBridgeHttpConfig schema properties Property Description port The port which is the server listening on. integer cors CORS configuration for the HTTP Bridge. KafkaBridgeHttpCors 13.2.134. KafkaBridgeHttpCors schema reference Used in: KafkaBridgeHttpConfig Property Description allowedOrigins List of allowed origins. Java regular expressions can be used. string array allowedMethods List of allowed HTTP methods. string array 13.2.135. KafkaBridgeConsumerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeConsumerSpec schema properties Configures consumer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: ssl. sasl. security. bootstrap.servers group.id When one of the forbidden options is present in the config property, it is ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . Example Kafka Bridge consumer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... consumer: config: auto.offset.reset: earliest enable.auto.commit: true ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS # ... 13.2.135.1. 
KafkaBridgeConsumerSpec schema properties Property Description config The Kafka consumer configuration used for consumer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map 13.2.136. KafkaBridgeProducerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeProducerSpec schema properties Configures producer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka configuration documentation for producers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: ssl. sasl. security. bootstrap.servers When one of the forbidden options is present in the config property, it is ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . Example Kafka Bridge producer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... producer: config: acks: 1 delivery.timeout.ms: 300000 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS # ... 13.2.136.1. KafkaBridgeProducerSpec schema properties Property Description config The Kafka producer configuration used for producer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map 13.2.137. KafkaBridgeTemplate schema reference Used in: KafkaBridgeSpec Property Description deployment Template for Kafka Bridge Deployment . DeploymentTemplate pod Template for Kafka Bridge Pods . PodTemplate apiService Template for Kafka Bridge API Service . ResourceTemplate bridgeContainer Template for the Kafka Bridge container. ContainerTemplate podDisruptionBudget Template for Kafka Bridge PodDisruptionBudget . PodDisruptionBudgetTemplate 13.2.138. KafkaBridgeStatus schema reference Used in: KafkaBridge Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL at which external client applications can access the Kafka Bridge. string labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.139. KafkaConnector schema reference Property Description spec The specification of the Kafka Connector. KafkaConnectorSpec status The status of the Kafka Connector. KafkaConnectorStatus 13.2.140. 
KafkaConnectorSpec schema reference Used in: KafkaConnector Property Description class The Class for the Kafka Connector. string tasksMax The maximum number of tasks for the Kafka Connector. integer config The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. map pause Whether the connector should be paused. Defaults to false. boolean 13.2.141. KafkaConnectorStatus schema reference Used in: KafkaConnector Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer connectorStatus The connector status, as reported by the Kafka Connect REST API. map tasksMax The maximum number of tasks for the Kafka Connector. integer topics The list of topics used by the Kafka Connector. string array 13.2.142. KafkaMirrorMaker2 schema reference Property Description spec The specification of the Kafka MirrorMaker 2.0 cluster. KafkaMirrorMaker2Spec status The status of the Kafka MirrorMaker 2.0 cluster. KafkaMirrorMaker2Status 13.2.143. KafkaMirrorMaker2Spec schema reference Used in: KafkaMirrorMaker2 Property Description version The Kafka Connect version. Defaults to 2.7.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the Kafka Connect group. integer image The docker image for the pods. string connectCluster The cluster alias used for Kafka Connect. The alias must match a cluster in the list at spec.clusters . string clusters Kafka clusters for mirroring. KafkaMirrorMaker2ClusterSpec array mirrors Configuration of the MirrorMaker 2.0 connectors. KafkaMirrorMaker2MirrorSpec array resources The maximum limits for CPU and memory resources and the requested initial resources. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options. KafkaJmxOptions affinity The affinity property has been deprecated, and should now be configured using spec.template.pod.affinity . The property affinity is removed in API version v1beta2 . The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The tolerations property has been deprecated, and should now be configured using spec.template.pod.tolerations . The property tolerations is removed in API version v1beta2 . The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array logging Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging metrics The metrics property has been deprecated, and should now be configured using spec.metricsConfig . The property metrics is removed in API version v1beta2 . The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. map tracing The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template for Kafka Connect and Kafka Connect S2I resources. 
The template allows users to specify how the Deployment , Pods and Service are generated. KafkaConnectTemplate externalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. ExternalConfiguration metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics 13.2.144. KafkaMirrorMaker2ClusterSpec schema reference Used in: KafkaMirrorMaker2Spec Full list of KafkaMirrorMaker2ClusterSpec schema properties Configures Kafka clusters for mirroring. 13.2.144.1. config Use the config properties to configure Kafka options. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. 13.2.144.2. KafkaMirrorMaker2ClusterSpec schema properties Property Description alias Alias used to reference the Kafka cluster. string bootstrapServers A comma-separated list of host:port pairs for establishing the connection to the Kafka cluster. string tls TLS configuration for connecting MirrorMaker 2.0 connectors to a cluster. KafkaMirrorMaker2Tls authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker 2.0 cluster config. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map 13.2.145. KafkaMirrorMaker2Tls schema reference Used in: KafkaMirrorMaker2ClusterSpec Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array 13.2.146. KafkaMirrorMaker2MirrorSpec schema reference Used in: KafkaMirrorMaker2Spec Property Description sourceCluster The alias of the source cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters . string targetCluster The alias of the target cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters . string sourceConnector The specification of the Kafka MirrorMaker 2.0 source connector. KafkaMirrorMaker2ConnectorSpec heartbeatConnector The specification of the Kafka MirrorMaker 2.0 heartbeat connector. KafkaMirrorMaker2ConnectorSpec checkpointConnector The specification of the Kafka MirrorMaker 2.0 checkpoint connector. KafkaMirrorMaker2ConnectorSpec topicsPattern A regular expression matching the topics to be mirrored, for example, "topic1|topic2|topic3". Comma-separated lists are also supported. string topicsBlacklistPattern A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. string groupsPattern A regular expression matching the consumer groups to be mirrored. Comma-separated lists are also supported. 
string groupsBlacklistPattern A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. string 13.2.147. KafkaMirrorMaker2ConnectorSpec schema reference Used in: KafkaMirrorMaker2MirrorSpec Property Description tasksMax The maximum number of tasks for the Kafka Connector. integer config The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. map pause Whether the connector should be paused. Defaults to false. boolean 13.2.148. KafkaMirrorMaker2Status schema reference Used in: KafkaMirrorMaker2 Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array connectors List of MirrorMaker 2.0 connector statuses, as reported by the Kafka Connect REST API. map array labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.149. KafkaRebalance schema reference Property Description spec The specification of the Kafka rebalance. KafkaRebalanceSpec status The status of the Kafka rebalance. KafkaRebalanceStatus 13.2.150. KafkaRebalanceSpec schema reference Used in: KafkaRebalance Property Description goals A list of goals, ordered by decreasing priority, to use for generating and executing the rebalance proposal. The supported goals are available at https://github.com/linkedin/cruise-control#goals . If an empty goals list is provided, the goals declared in the default.goals Cruise Control configuration parameter are used. string array skipHardGoalCheck Whether to allow the hard goals specified in the Kafka CR to be skipped in optimization proposal generation. This can be useful when some of those hard goals are preventing a balance solution from being found. Default is false. boolean excludedTopics A regular expression where any matching topics will be excluded from the calculation of optimization proposals. This expression will be parsed by the java.util.regex.Pattern class; for more information on the supported format, consult the documentation for that class. string concurrentPartitionMovementsPerBroker The upper bound of ongoing partition replica movements going into/out of each broker. Default is 5. integer concurrentIntraBrokerPartitionMovements The upper bound of ongoing partition replica movements between disks within each broker. Default is 2. integer concurrentLeaderMovements The upper bound of ongoing partition leadership movements. Default is 1000. integer replicationThrottle The upper bound, in bytes per second, on the bandwidth used to move replicas. There is no limit by default. integer replicaMovementStrategies A list of strategy class names used to determine the execution order for the replica movements in the generated optimization proposal. By default BaseReplicaMovementStrategy is used, which will execute the replica movements in the order that they were generated. string array 13.2.151. KafkaRebalanceStatus schema reference Used in: KafkaRebalance Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator.
integer sessionId The session identifier for requests to Cruise Control pertaining to this KafkaRebalance resource. This is used by the Kafka Rebalance operator to track the status of ongoing rebalancing operations. string optimizationResult A JSON object describing the optimization result. map | [
"spec: config: ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" 1 ssl.enabled.protocols: \"TLSv1.2\" 2 ssl.protocol: \"TLSv1.2\" 3 ssl.endpoint.identification.algorithm: HTTPS 4",
"create secret generic MY-SECRET --from-file= MY-TLS-CERTIFICATE-FILE.crt",
"tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt",
"tls: trustedCertificates: []",
"resources: requests: cpu: 12 memory: 64Gi",
"resources: limits: cpu: 12 memory: 64Gi",
"resources: requests: cpu: 500m limits: cpu: 2.5",
"resources: requests: memory: 512Mi limits: memory: 2Gi",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # image: my-org/my-image:latest # zookeeper: #",
"readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5",
"kind: ConfigMap apiVersion: v1 metadata: name: my-configmap data: my-key: | lowercaseOutputName: true rules: # Special cases and very specific rules - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value name: kafka_server_USD1_USD2 type: GAUGE labels: clientId: \"USD3\" topic: \"USD4\" partition: \"USD5\" # further configuration",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # zookeeper: #",
"jvmOptions: \"-Xmx\": \"2g\" \"-Xms\": \"2g\"",
"jvmOptions: \"-XX\": \"UseG1GC\": true \"MaxGCPauseMillis\": 20 \"InitiatingHeapOccupancyPercent\": 35 \"ExplicitGCInvokesConcurrent\": true",
"-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC",
"jvmOptions: javaSystemProperties: - name: javax.net.debug value: ssl",
"jvmOptions: gcLoggingEnabled: true",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false # zookeeper: #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" zookeeper.connection.timeout.ms: 6000 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: # logging: type: inline loggers: kafka.root.logger.level: \"INFO\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #",
"listeners: - name: plain port: 9092 type: internal tls: false",
"# spec: kafka: # listeners: # - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #",
"# spec: kafka: # listeners: # - name: external1 port: 9094 type: route tls: true #",
"# spec: kafka: # listeners: # - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #",
"# spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #",
"# spec: kafka: # listeners: # - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #",
"get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type==\"external\")].bootstrapServers}{\"\\n\"}'",
"listeners: # - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2",
"listeners: # - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: EXTERNAL-LISTENER-TYPE 1 tls: true",
"listeners: # - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key",
"listeners: # - name: external port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #",
"listeners: # - name: external port: 9094 type: ingress tls: true configuration: class: nginx-internal #",
"listeners: # - name: external port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #",
"listeners: # - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true #",
"listeners: # - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2",
"listeners: # - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com",
"listeners: # - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myrouter.com brokers: - broker: 0 host: broker-0.myrouter.com - broker: 1 host: broker-1.myrouter.com - broker: 2 host: broker-2.myrouter.com",
"listeners: # - name: external port: 9094 type: nodeport tls: true authentication: type: tls configuration: bootstrap: nodePort: 32100 brokers: - broker: 0 nodePort: 32000 - broker: 1 nodePort: 32001 - broker: 2 nodePort: 32002",
"listeners: # - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: loadBalancerIP: 172.29.3.10 brokers: - broker: 0 loadBalancerIP: 172.29.3.1 - broker: 1 loadBalancerIP: 172.29.3.2 - broker: 2 loadBalancerIP: 172.29.3.3",
"listeners: # - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" brokers: - broker: 0 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" - broker: 1 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" - broker: 2 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\"",
"listeners: # - name: external port: 9094 type: route tls: true authentication: type: tls configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone config: # replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: authentication: type: \"password\" # zookeeper: #",
"\" CLUSTER-NAME -kafka-0. CLUSTER-NAME -kafka-brokers\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: {} # zookeeper: #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: # tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi # cruiseControl: # tlsSidecar: image: my-org/my-image:latest resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logLevel: debug readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 #",
"template: statefulset: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2",
"template: pod: metadata: labels: label1: value1 annotations: anno1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # spec: # template: pod: hostAliases: - ip: \"192.168.1.86\" hostnames: - \"my-host-1\" - \"my-host-2\" #",
"template: externalBootstrapService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 perPodService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32",
"template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1",
"template: kafkaContainer: env: - name: EXAMPLE_ENV_1 value: example.env.one - name: EXAMPLE_ENV_2 value: example.env.two securityContext: runAsUser: 2000",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # zookeeper: # config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 1 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # zookeeper: # logging: type: inline loggers: zookeeper.root.logger: \"INFO\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # zookeeper: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: zookeeper-log4j.properties #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: topic-operator-log4j2.properties #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: user-operator-log4j2.properties #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS #",
"curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: inline loggers: connect.root.logger.level: \"INFO\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: connect-logging.log4j #",
"create secret generic MY-SECRET --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt --from-file= MY-PRIVATE.key",
"authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key",
"echo -n PASSWORD > MY-PASSWORD .txt",
"create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt",
"apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm",
"authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field",
"echo -n PASSWORD > MY-PASSWORD .txt",
"create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt",
"apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm",
"authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token",
"authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true",
"apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # externalConfiguration: env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-user 2 dbPassword: my-password",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 # externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: docker 1 image: my-registry.io/my-org/my-connect-cluster:latest 2 pushSecret: my-registry-credentials 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: imagestream 1 image: my-connect-build:latest 2 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: 1 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.tar.gz sha512sum: 962a12151bdf9a5a30627eebac739955a4fd95a08d373b86bdcea2b4d0c27dd6e1edd5cb548045e115e33a9e69b1b2a352bee24df035a0447cb820077af00c03 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.7.0/camel-telegram-kafka-connector-0.7.0-package.tar.gz sha512sum: a9b1ac63e3284bea7836d7d24d84208c49cdf5600070e6bd1535de654f6920b74ad950d51733e8020bf4187870699819f54ef5859c7846ee4081507f48873479 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: jar 1 url: https://my-domain.tld/my-jar.jar 2 sha512sum: 589...ab4 3 - type: jar url: https://my-domain.tld/my-jar2.jar #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: tgz 1 url: https://my-domain.tld/my-connector-archive.jar 2 sha512sum: 158...jg10 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: - resource: type: topic name: my-topic patternType: literal operation: Read - resource: type: topic name: my-topic patternType: literal operation: Describe - resource: type: group name: my-group patternType: prefix operation: Read",
"spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: inline loggers: mirrormaker.root.logger: \"INFO\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties #",
"logger.send.name = http.openapi.operation.send logger.send.level = DEBUG",
"logger.healthy.name = http.openapi.operation.healthy logger.healthy.level = WARN logger.ready.name = http.openapi.operation.ready logger.ready.level = WARN",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # logging: type: inline loggers: logger.bridge.level: \"INFO\" # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: \"DEBUG\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: bridge-logj42.properties #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # http: port: 8080 cors: allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # consumer: config: auto.offset.reset: earliest enable.auto.commit: true ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # producer: config: acks: 1 delivery.timeout.ms: 300000 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS #"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_openshift/api_reference-str |
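The examples above cover most of the resources in this schema reference, but not KafkaRebalance . The following is a minimal, hypothetical sketch of a KafkaRebalance resource based on the KafkaRebalanceSpec properties described earlier; the strimzi.io/cluster label, the cluster name, and the specific Cruise Control goal names are illustrative assumptions rather than values taken from this reference.

# Hypothetical KafkaRebalance sketch; verify goal names against your Cruise Control configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster   # assumed label linking the rebalance to a Kafka cluster
spec:
  # Goals ordered by decreasing priority; leave empty to use default.goals from Cruise Control
  goals:
    - CpuCapacityGoal
    - NetworkInboundCapacityGoal
    - DiskCapacityGoal
  skipHardGoalCheck: false
  excludedTopics: "my-internal-.*"   # regular expression parsed by java.util.regex.Pattern
  concurrentPartitionMovementsPerBroker: 5
  concurrentLeaderMovements: 1000
  replicationThrottle: 10485760      # bytes per second

If the order of replica movements also needs to be controlled, the replicaMovementStrategies property accepts a list of strategy class names, with BaseReplicaMovementStrategy used by default.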
Chapter 1. Getting started with RPM packaging | Chapter 1. Getting started with RPM packaging The following section introduces the concept of RPM packaging and its main advantages. 1.1. Introduction to RPM packaging The RPM Package Manager (RPM) is a package management system that runs on RHEL, CentOS, and Fedora. You can use RPM to distribute, manage, and update software that you create for any of the operating systems mentioned above. 1.2. RPM advantages The RPM package management system brings several advantages over distribution of software in conventional archive files. RPM enables you to: Install, reinstall, remove, upgrade and verify packages with standard package management tools, such as Yum or PackageKit. Use a database of installed packages to query and verify packages. Use metadata to describe packages, their installation instructions, and other package parameters. Package software sources, patches and complete build instructions into source and binary packages. Add packages to Yum repositories. Digitally sign your packages by using GNU Privacy Guard (GPG) signing keys. 1.3. Creating your first rpm package Creating an RPM package can be complicated. Here is a complete, working RPM Spec file with several things skipped and simplified. Name: hello-world Version: 1 Release: 1 Summary: Most simple RPM package License: FIXME %description This is my first RPM package, which does nothing. %prep # we have no source, so nothing here %build cat > hello-world.sh <<EOF #!/usr/bin/bash echo Hello world EOF %install mkdir -p %{buildroot}/usr/bin/ install -m 755 hello-world.sh %{buildroot}/usr/bin/hello-world.sh %files /usr/bin/hello-world.sh %changelog # let's skip this for now Save this file as hello-world.spec . Now use these commands: $ rpmdev-setuptree $ rpmbuild -ba hello-world.spec The command rpmdev-setuptree creates several working directories. As those directories are stored permanently in $HOME, this command does not need to be used again. The command rpmbuild creates the actual rpm package. The output of this command can be similar to: ... [SNIP] Wrote: /home/<username>/rpmbuild/SRPMS/hello-world-1-1.src.rpm Wrote: /home/<username>/rpmbuild/RPMS/x86_64/hello-world-1-1.x86_64.rpm Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.wgaJzv + umask 022 + cd /home/<username>/rpmbuild/BUILD + /usr/bin/rm -rf /home/<username>/rpmbuild/BUILDROOT/hello-world-1-1.x86_64 + exit 0 The file /home/<username>/rpmbuild/RPMS/x86_64/hello-world-1-1.x86_64.rpm is your first RPM package. It can be installed in the system and tested. | [
"Name: hello-world Version: 1 Release: 1 Summary: Most simple RPM package License: FIXME %description This is my first RPM package, which does nothing. %prep we have no source, so nothing here %build cat > hello-world.sh <<EOF #!/usr/bin/bash echo Hello world EOF %install mkdir -p %{buildroot}/usr/bin/ install -m 755 hello-world.sh %{buildroot}/usr/bin/hello-world.sh %files /usr/bin/hello-world.sh %changelog let's skip this for now",
"rpmdev-setuptree rpmbuild -ba hello-world.spec",
"... [SNIP] Wrote: /home/<username>/rpmbuild/SRPMS/hello-world-1-1.src.rpm Wrote: /home/<username>/rpmbuild/RPMS/x86_64/hello-world-1-1.x86_64.rpm Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.wgaJzv + umask 022 + cd /home/<username>/rpmbuild/BUILD + /usr/bin/rm -rf /home/<username>/rpmbuild/BUILDROOT/hello-world-1-1.x86_64 + exit 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/rpm_packaging_guide/getting-started-with-rpm-packaging |
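The chapter ends by noting that the built package can be installed and tested. The following is a minimal sketch of those steps, assuming the output path printed by rpmbuild above; the exact path depends on your username and architecture, and the package name hello-world comes from the spec file.

# Install the freshly built binary package (path taken from the rpmbuild output)
$ sudo yum install /home/<username>/rpmbuild/RPMS/x86_64/hello-world-1-1.x86_64.rpm

# Test the packaged script, which was installed into /usr/bin
$ hello-world.sh
Hello world

# Remove the package again once testing is done
$ sudo yum remove hello-world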
5.48. device-mapper-multipath | 5.48. device-mapper-multipath 5.48.1. RHBA-2012:1111 - device-mapper-multipath bug fix update Updated device-mapper-multipath packages that fix one bug are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools for managing multipath devices using the device-mapper multipath kernel module. Bug Fix BZ# 837594 When a multipath vector (a dynamically allocated array) was resized to a smaller size, device-mapper-multipath did not reassign the pointer to the array. If the array location was changed by reducing its size, device-mapper-multipath corrupted its memory. With this update, device-mapper-multipath correctly reassigns the pointer in this scenario, and memory corruption no longer occurs. All users of device-mapper-multipath are advised to upgrade to these updated packages, which fix this bug. 5.48.2. RHBA-2012:0946 - device-mapper-multipath bug fix and enhancement update Updated device-mapper-multipath packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools to manage multipath devices using the device-mapper multipath kernel module. Bug Fixes BZ# 812832 The multipathd daemon was not correctly stopping waiter threads during shutdown. The waiter threads could access freed memory and cause the daemon to terminate unexpectedly during shutdown. With this update, the multipathd daemon now correctly stops the waiter threads before they can access any freed memory and no longer crashes during shutdown. BZ# 662433 When Device Mapper Multipath was stopped, multipathd did not disable the queue_if_no_path option on multipath devices by default. When multipathd was stopped during shutdown, I/O of the device was added to the queue if all paths to a device were lost, and the shutdown process became unresponsive. With this update, multipathd now sets the queue_without_daemon option to no by default. As a result, all multipath devices stop queueing when multipathd is stopped and multipath now shuts down as expected. BZ# 752989 Device Mapper Multipath uses regular expressions in built-in device configurations to determine a multipath device so as to apply the correct configuration to the device. Previously, some regular expressions for resolving the device vendor name and product ID were not specific enough. As a consequence, some devices could be matched with incorrect device configurations. With this update, the product and vendor regular expressions have been modified so that all multipath devices are now configured properly. BZ# 754586 After renaming a device, there was a race condition between multipathd and udev to rename the new multipath device nodes. If udev renamed the device node first, multipathd removed the device created by udev and failed to create the new device node. With this update, multipathd immediately creates the new device nodes, and the race condition no longer occurs. As a result, the renamed device is now available as expected. BZ# 769527 Previously, the flush_on_last_del handling code did not implement handling of the queue feature properly. Consequently, even though the flush_on_last_del feature was activated, multipathd re-enabled queueing on multipath devices that could not be removed immediately after the last path device was deleted.
With this update, the code has been fixed and when the user sets flush_on_last_del , their multipath devices correctly disable queueing, even if the devices cannot be closed immediately. BZ# 796384 Previously, Device Mapper Multipath used a fixed-size buffer to read the Virtual Device Identification page [0x83]. The buffer size was sometimes insufficient to accommodate the data sent by devices and the ALUA (Asymmetric Logical Unit Access) prioritizer failed. Device Mapper Multipath now dynamically allocates a buffer large enough for the Virtual Device Identification page and the ALUA prioritizer no longer fails in the scenario described. BZ# 744210 Previously, multipathd did not set the max_fds option by default, which sets the maximum number of file descriptors that multipathd can open. Also, the user_friendly_names setting could only be configured in the defaults section of /etc/multipath.conf . The user had to set max_fds manually and override the default user_friendly_names value in their device-specific configurations. With this update, multipath now sets max_fds to the system maximum by default, and user_friendly_names can be configured in the devices section of multipath.conf . Users no longer need to set max_fds for large setups, and they are able to select user_friendly_names per device type. BZ# 744756 Previously, to modify a built-in configuration, the vendor and product strings of the user's configuration had to be identical to the vendor and product strings of the built-in configuration. The vendor and product strings are regular expressions, and the user did not always know the correct vendor and product strings needed to modify a built-in configuration. With this update, the hwtable_regex_match option was added to the defaults section of multipath.conf . If it is set to yes , Multipath uses regular-expression matching to determine if the user's vendor and product strings match the built-in device configuration strings: the user can use the actual vendor and product information from their hardware in their device configuration, and it will modify the default configuration for that device. The option is set to no by default. BZ# 750132 Previously, multipathd was using a deprecated Out-of-Memory (OOM) adjustment interface. Consequently, the daemon was not protected from the OOM killer properly; the OOM killer could kill the daemon when memory was low and the user was unable to restore failed paths. With this update, multipathd now uses the new Out-of-Memory adjustment interface and can no longer be killed by the Out-of-Memory killer. BZ# 702222 The multipath.conf file now contains a comment which informs the user that the configuration must be reloaded for any changes to take effect. BZ# 751938 The multipathd daemon incorrectly exited with code 1 when multipath -h (print usage) was run. With this update, the underlying code has been modified and multipathd now returns code 0 as expected in the scenario described. BZ# 751039 Some multipathd threads did not check if multipathd was shutting down before they started their execution. Consequently, the multipathd daemon could terminate unexpectedly with a segmentation fault on shutdown. With this update, the multipathd threads now check if multipathd is shutting down before triggering their execution, and multipathd no longer terminates with a segmentation fault on shutdown. BZ# 467709 The multipathd daemon did not have a failover method to handle switching of path groups when multiple nodes were using the same storage. 
Consequently, if one node lost access to the preferred paths to a logical unit, while the preferred path of the other node was preserved, multipathd could end up switching back and forth between path groups. This update adds the followover failback method to device-mapper-multipath. If the followover failback method is set, multipathd does not fail back to the preferred path group, unless it just came back online. When multiple nodes are using the same storage, a path failing on one machine now no longer causes the path groups to continually switch back and forth. Enhancements BZ# 737051 The NetApp brand name has been added to the documentation about the RDAC (Redundant Disk Array Controller) checker and prioritizer. BZ# 788963 The built-in device configuration for Fujitsu ETERNUS has been added. BZ# 760852 If the multipath checker configuration was set to tur , the checks were not performed asynchronously. If a device failed and the checker was waiting for the SCSI layer to fail back, the checks on other paths were kept waiting. The checker has been rewritten so as to check the paths asynchronously, and the path checking on other paths continues as expected. BZ# 799908 A built-in configuration for IBM XIV Storage System has been added. BZ# 799842 The NetApp LUN built-in configuration now uses the tur path checker by default. Also flush_on_last_del has been enabled, dev_loss_tmo has been set to infinity , fast_io_fail_tmo has been set to 5 , and pg_init_retries has been set to 50 . Users of device-mapper-multipath should upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/device-mapper-multipath |
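Several of these fixes and enhancements correspond to options in /etc/multipath.conf . The following fragment is a hypothetical sketch showing how the options mentioned in these errata might be set; the option names follow multipath.conf(5), but the vendor and product values are examples only and should be adapted to your storage.

# Hypothetical multipath.conf fragment illustrating options discussed above
defaults {
    user_friendly_names   yes
    queue_without_daemon  no     # stop queueing when multipathd is stopped
    flush_on_last_del     yes    # disable queueing after the last path is deleted
    max_fds               max    # raise the file descriptor limit to the system maximum
    hwtable_regex_match   yes    # regex-match user configs against built-in device configs
}

devices {
    device {
        vendor        "NETAPP"     # example vendor/product strings
        product       "LUN.*"
        path_checker  tur
        failback      followover   # only fail back when the preferred path group has just come back online
    }
}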
Chapter 22. API reference | Chapter 22. API reference 22.1. 5.6 Logging API reference 22.1.1. Logging 5.6 API reference 22.1.1.1. ClusterLogForwarder ClusterLogForwarder is an API to configure forwarding logs. You configure forwarding by specifying a list of pipelines , which forward from a set of named inputs to a set of named outputs. There are built-in input names for common log categories, and you can define custom inputs to do additional filtering. There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster. For more details see the documentation on the API fields. Property Type Description spec object Specification of the desired behavior of ClusterLogForwarder status object Status of the ClusterLogForwarder 22.1.1.1.1. .spec 22.1.1.1.1.1. Description ClusterLogForwarderSpec defines how logs should be forwarded to remote targets. 22.1.1.1.1.1.1. Type object Property Type Description inputs array (optional) Inputs are named filters for log messages to be forwarded. outputDefaults object (optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store. outputs array (optional) Outputs are named destinations for log messages. pipelines array Pipelines forward the messages selected by a set of inputs to a set of outputs. 22.1.1.1.2. .spec.inputs[] 22.1.1.1.2.1. Description InputSpec defines a selector of log messages. 22.1.1.1.2.1.1. Type array Property Type Description application object (optional) Application, if present, enables named set of application logs that name string Name used to refer to the input of a pipeline . 22.1.1.1.3. .spec.inputs[].application 22.1.1.1.3.1. Description Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs. 22.1.1.1.3.1.1. Type object Property Type Description namespaces array (optional) Namespaces from which to collect application logs. selector object (optional) Selector for logs from pods with matching labels. 22.1.1.1.4. .spec.inputs[].application.namespaces[] 22.1.1.1.4.1. Description 22.1.1.1.4.1.1. Type array 22.1.1.1.5. .spec.inputs[].application.selector 22.1.1.1.5.1. Description A label selector is a label query over a set of resources. 22.1.1.1.5.1.1. Type object Property Type Description matchLabels object (optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels 22.1.1.1.6. .spec.inputs[].application.selector.matchLabels 22.1.1.1.6.1. Description 22.1.1.1.6.1.1. Type object 22.1.1.1.7. .spec.outputDefaults 22.1.1.1.7.1. Description 22.1.1.1.7.1.1. Type object Property Type Description elasticsearch object (optional) Elasticsearch OutputSpec default values 22.1.1.1.8. .spec.outputDefaults.elasticsearch 22.1.1.1.8.1. Description ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index 22.1.1.1.8.1.1. Type object Property Type Description enableStructuredContainerLogs bool (optional) EnableStructuredContainerLogs enables multi-container structured logs to allow structuredTypeKey string (optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index structuredTypeName string (optional) StructuredTypeName specifies the name of elasticsearch schema 22.1.1.1.9. .spec.outputs[] 22.1.1.1.9.1. Description Output defines a destination for log messages. 22.1.1.1.9.1.1. 
Type array Property Type Description syslog object (optional) fluentdForward object (optional) elasticsearch object (optional) kafka object (optional) cloudwatch object (optional) loki object (optional) googleCloudLogging object (optional) splunk object (optional) name string Name used to refer to the output from a pipeline . secret object (optional) Secret for authentication. tls object TLS contains settings for controlling options on TLS client connections. type string Type of output plugin. url string (optional) URL to send log records to. 22.1.1.1.10. .spec.outputs[].secret 22.1.1.1.10.1. Description OutputSecretSpec is a secret reference containing name only, no namespace. 22.1.1.1.10.1.1. Type object Property Type Description name string Name of a secret in the namespace configured for log forwarder secrets. 22.1.1.1.11. .spec.outputs[].tls 22.1.1.1.11.1. Description OutputTLSSpec contains options for TLS connections that are agnostic to the output type. 22.1.1.1.11.1.1. Type object Property Type Description insecureSkipVerify bool If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates. 22.1.1.1.12. .spec.pipelines[] 22.1.1.1.12.1. Description PipelinesSpec link a set of inputs to a set of outputs. 22.1.1.1.12.1.1. Type array Property Type Description detectMultilineErrors bool (optional) DetectMultilineErrors enables multiline error detection of container logs inputRefs array InputRefs lists the names ( input.name ) of inputs to this pipeline. labels object (optional) Labels applied to log records passing through this pipeline. name string (optional) Name is optional, but must be unique in the pipelines list if provided. outputRefs array OutputRefs lists the names ( output.name ) of outputs from this pipeline. parse string (optional) Parse enables parsing of log entries into structured logs 22.1.1.1.13. .spec.pipelines[].inputRefs[] 22.1.1.1.13.1. Description 22.1.1.1.13.1.1. Type array 22.1.1.1.14. .spec.pipelines[].labels 22.1.1.1.14.1. Description 22.1.1.1.14.1.1. Type object 22.1.1.1.15. .spec.pipelines[].outputRefs[] 22.1.1.1.15.1. Description 22.1.1.1.15.1.1. Type array 22.1.1.1.16. .status 22.1.1.1.16.1. Description ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder 22.1.1.1.16.1.1. Type object Property Type Description conditions object Conditions of the log forwarder. inputs Conditions Inputs maps input name to condition of the input. outputs Conditions Outputs maps output name to condition of the output. pipelines Conditions Pipelines maps pipeline name to condition of the pipeline. 22.1.1.1.17. .status.conditions 22.1.1.1.17.1. Description 22.1.1.1.17.1.1. Type object 22.1.1.1.18. .status.inputs 22.1.1.1.18.1. Description 22.1.1.1.18.1.1. Type Conditions 22.1.1.1.19. .status.outputs 22.1.1.1.19.1. Description 22.1.1.1.19.1.1. Type Conditions 22.1.1.1.20. .status.pipelines 22.1.1.1.20.1. Description 22.1.1.1.20.1.1. Type Conditions== ClusterLogging A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API Property Type Description spec object Specification of the desired behavior of ClusterLogging status object Status defines the observed state of ClusterLogging 22.1.1.1.21. .spec 22.1.1.1.21.1. Description ClusterLoggingSpec defines the desired state of ClusterLogging 22.1.1.1.21.1.1. Type object Property Type Description collection object Specification of the Collection component for the cluster curation object (DEPRECATED) (optional) Deprecated. 
Specification of the Curation component for the cluster forwarder object (DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster logStore object (optional) Specification of the Log Storage component for the cluster managementState string (optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator visualization object (optional) Specification of the Visualization component for the cluster 22.1.1.1.22. .spec.collection 22.1.1.1.22.1. Description This is the struct that will contain information pertinent to Log and event collection 22.1.1.1.22.1.1. Type object Property Type Description resources object (optional) The resource requirements for the collector nodeSelector object (optional) Define which Nodes the Pods are scheduled on. tolerations array (optional) Define the tolerations the Pods will accept fluentd object (optional) Fluentd represents the configuration for forwarders of type fluentd. logs object (DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster type string (optional) The type of Log Collection to configure 22.1.1.1.23. .spec.collection.fluentd 22.1.1.1.23.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 22.1.1.1.23.1.1. Type object Property Type Description buffer object inFile object 22.1.1.1.24. .spec.collection.fluentd.buffer 22.1.1.1.24.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 22.1.1.1.24.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount represents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 22.1.1.1.25. .spec.collection.fluentd.inFile 22.1.1.1.25.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 22.1.1.1.25.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 22.1.1.1.26. .spec.collection.logs 22.1.1.1.26.1. Description 22.1.1.1.26.1.1. Type object Property Type Description fluentd object Specification of the Fluentd Log Collection component type string The type of Log Collection to configure 22.1.1.1.27. .spec.collection.logs.fluentd 22.1.1.1.27.1. Description CollectorSpec is spec to define scheduling and resources for a collector 22.1.1.1.27.1.1. Type object Property Type Description nodeSelector object (optional) Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for the collector tolerations array (optional) Define the tolerations the Pods will accept 22.1.1.1.28. .spec.collection.logs.fluentd.nodeSelector 22.1.1.1.28.1. Description 22.1.1.1.28.1.1. Type object 22.1.1.1.29. .spec.collection.logs.fluentd.resources 22.1.1.1.29.1. Description 22.1.1.1.29.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 22.1.1.1.30. .spec.collection.logs.fluentd.resources.limits 22.1.1.1.30.1. Description 22.1.1.1.30.1.1. Type object 22.1.1.1.31. .spec.collection.logs.fluentd.resources.requests 22.1.1.1.31.1. Description 22.1.1.1.31.1.1. Type object 22.1.1.1.32. .spec.collection.logs.fluentd.tolerations[] 22.1.1.1.32.1. Description 22.1.1.1.32.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 22.1.1.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds 22.1.1.1.33.1. Description 22.1.1.1.33.1.1. Type int 22.1.1.1.34. .spec.curation 22.1.1.1.34.1. Description This is the struct that will contain information pertinent to Log curation (Curator) 22.1.1.1.34.1.1. Type object Property Type Description curator object The specification of curation to configure type string The kind of curation to configure 22.1.1.1.35. .spec.curation.curator 22.1.1.1.35.1. Description 22.1.1.1.35.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for Curator schedule string The cron schedule that the Curator job is run. Defaults to "30 3 * * *" tolerations array 22.1.1.1.36. .spec.curation.curator.nodeSelector 22.1.1.1.36.1. Description 22.1.1.1.36.1.1. Type object 22.1.1.1.37. .spec.curation.curator.resources 22.1.1.1.37.1. Description 22.1.1.1.37.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 22.1.1.1.38. .spec.curation.curator.resources.limits 22.1.1.1.38.1. Description 22.1.1.1.38.1.1. Type object 22.1.1.1.39. .spec.curation.curator.resources.requests 22.1.1.1.39.1. Description 22.1.1.1.39.1.1. Type object 22.1.1.1.40. 
.spec.curation.curator.tolerations[] 22.1.1.1.40.1. Description 22.1.1.1.40.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 22.1.1.1.41. .spec.curation.curator.tolerations[].tolerationSeconds 22.1.1.1.41.1. Description 22.1.1.1.41.1.1. Type int 22.1.1.1.42. .spec.forwarder 22.1.1.1.42.1. Description ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use, it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd . 22.1.1.1.42.1.1. Type object Property Type Description fluentd object 22.1.1.1.43. .spec.forwarder.fluentd 22.1.1.1.43.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 22.1.1.1.43.1.1. Type object Property Type Description buffer object inFile object 22.1.1.1.44. .spec.forwarder.fluentd.buffer 22.1.1.1.44.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 22.1.1.1.44.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount reprents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 22.1.1.1.45. .spec.forwarder.fluentd.inFile 22.1.1.1.45.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 22.1.1.1.45.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 22.1.1.1.46. .spec.logStore 22.1.1.1.46.1. Description The LogStoreSpec contains information about how logs are stored. 22.1.1.1.46.1.1. Type object Property Type Description elasticsearch object Specification of the Elasticsearch Log Store component lokistack object LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack. retentionPolicy object (optional) Retention policy defines the maximum age for an index after which it should be deleted type string The Type of Log Storage to configure. The operator currently supports either using ElasticSearch 22.1.1.1.47. .spec.logStore.elasticsearch 22.1.1.1.47.1. Description 22.1.1.1.47.1.1. Type object Property Type Description nodeCount int Number of nodes to deploy for Elasticsearch nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Elasticsearch Proxy component redundancyPolicy string (optional) resources object (optional) The resource requirements for Elasticsearch storage object (optional) The storage specification for Elasticsearch data nodes tolerations array 22.1.1.1.48. .spec.logStore.elasticsearch.nodeSelector 22.1.1.1.48.1. Description 22.1.1.1.48.1.1. Type object 22.1.1.1.49. .spec.logStore.elasticsearch.proxy 22.1.1.1.49.1. Description 22.1.1.1.49.1.1. Type object Property Type Description resources object 22.1.1.1.50. .spec.logStore.elasticsearch.proxy.resources 22.1.1.1.50.1. Description 22.1.1.1.50.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 22.1.1.1.51. .spec.logStore.elasticsearch.proxy.resources.limits 22.1.1.1.51.1. Description 22.1.1.1.51.1.1. Type object 22.1.1.1.52. .spec.logStore.elasticsearch.proxy.resources.requests 22.1.1.1.52.1. Description 22.1.1.1.52.1.1. Type object 22.1.1.1.53. .spec.logStore.elasticsearch.resources 22.1.1.1.53.1. Description 22.1.1.1.53.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 22.1.1.1.54. .spec.logStore.elasticsearch.resources.limits 22.1.1.1.54.1. Description 22.1.1.1.54.1.1. Type object 22.1.1.1.55. .spec.logStore.elasticsearch.resources.requests 22.1.1.1.55.1. Description 22.1.1.1.55.1.1. Type object 22.1.1.1.56. .spec.logStore.elasticsearch.storage 22.1.1.1.56.1. Description 22.1.1.1.56.1.1. Type object Property Type Description size object The max storage capacity for the node to provision. storageClassName string (optional) The name of the storage class to use with creating the node's PVC. 22.1.1.1.57. .spec.logStore.elasticsearch.storage.size 22.1.1.1.57.1. Description 22.1.1.1.57.1.1. Type object Property Type Description Format string Change Format at will. See the comment for Canonicalize for d object d is the quantity in inf.Dec form if d.Dec != nil i int i is the quantity in int64 scaled form, if d.Dec == nil s string s is the generated value of this quantity to avoid recalculation 22.1.1.1.58. .spec.logStore.elasticsearch.storage.size.d 22.1.1.1.58.1. Description 22.1.1.1.58.1.1. Type object Property Type Description Dec object 22.1.1.1.59. 
.spec.logStore.elasticsearch.storage.size.d.Dec 22.1.1.1.59.1. Description 22.1.1.1.59.1.1. Type object Property Type Description scale int unscaled object 22.1.1.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled 22.1.1.1.60.1. Description 22.1.1.1.60.1.1. Type object Property Type Description abs Word sign neg bool 22.1.1.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs 22.1.1.1.61.1. Description 22.1.1.1.61.1.1. Type Word 22.1.1.1.62. .spec.logStore.elasticsearch.storage.size.i 22.1.1.1.62.1. Description 22.1.1.1.62.1.1. Type int Property Type Description scale int value int 22.1.1.1.63. .spec.logStore.elasticsearch.tolerations[] 22.1.1.1.63.1. Description 22.1.1.1.63.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 22.1.1.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds 22.1.1.1.64.1. Description 22.1.1.1.64.1.1. Type int 22.1.1.1.65. .spec.logStore.lokistack 22.1.1.1.65.1. Description LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace. 22.1.1.1.65.1.1. Type object Property Type Description name string Name of the LokiStack resource. 22.1.1.1.66. .spec.logStore.retentionPolicy 22.1.1.1.66.1. Description 22.1.1.1.66.1.1. Type object Property Type Description application object audit object infra object 22.1.1.1.67. .spec.logStore.retentionPolicy.application 22.1.1.1.67.1. Description 22.1.1.1.67.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 22.1.1.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[] 22.1.1.1.68.1. Description 22.1.1.1.68.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 22.1.1.1.69. .spec.logStore.retentionPolicy.audit 22.1.1.1.69.1. Description 22.1.1.1.69.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 22.1.1.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[] 22.1.1.1.70.1. Description 22.1.1.1.70.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 22.1.1.1.71. 
.spec.logStore.retentionPolicy.infra 22.1.1.1.71.1. Description 22.1.1.1.71.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 22.1.1.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[] 22.1.1.1.72.1. Description 22.1.1.1.72.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 22.1.1.1.73. .spec.visualization 22.1.1.1.73.1. Description This is the struct that will contain information pertinent to Log visualization (Kibana) 22.1.1.1.73.1.1. Type object Property Type Description kibana object Specification of the Kibana Visualization component type string The type of Visualization to configure 22.1.1.1.74. .spec.visualization.kibana 22.1.1.1.74.1. Description 22.1.1.1.74.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Kibana Proxy component replicas int Number of instances to deploy for a Kibana deployment resources object (optional) The resource requirements for Kibana tolerations array 22.1.1.1.75. .spec.visualization.kibana.nodeSelector 22.1.1.1.75.1. Description 22.1.1.1.75.1.1. Type object 22.1.1.1.76. .spec.visualization.kibana.proxy 22.1.1.1.76.1. Description 22.1.1.1.76.1.1. Type object Property Type Description resources object 22.1.1.1.77. .spec.visualization.kibana.proxy.resources 22.1.1.1.77.1. Description 22.1.1.1.77.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 22.1.1.1.78. .spec.visualization.kibana.proxy.resources.limits 22.1.1.1.78.1. Description 22.1.1.1.78.1.1. Type object 22.1.1.1.79. .spec.visualization.kibana.proxy.resources.requests 22.1.1.1.79.1. Description 22.1.1.1.79.1.1. Type object 22.1.1.1.80. .spec.visualization.kibana.replicas 22.1.1.1.80.1. Description 22.1.1.1.80.1.1. Type int 22.1.1.1.81. .spec.visualization.kibana.resources 22.1.1.1.81.1. Description 22.1.1.1.81.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 22.1.1.1.82. .spec.visualization.kibana.resources.limits 22.1.1.1.82.1. Description 22.1.1.1.82.1.1. Type object 22.1.1.1.83. .spec.visualization.kibana.resources.requests 22.1.1.1.83.1. Description 22.1.1.1.83.1.1. Type object 22.1.1.1.84. .spec.visualization.kibana.tolerations[] 22.1.1.1.84.1. Description 22.1.1.1.84.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. 
tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 22.1.1.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds 22.1.1.1.85.1. Description 22.1.1.1.85.1.1. Type int 22.1.1.1.86. .status 22.1.1.1.86.1. Description ClusterLoggingStatus defines the observed state of ClusterLogging 22.1.1.1.86.1.1. Type object Property Type Description collection object (optional) conditions object (optional) curation object (optional) logStore object (optional) visualization object (optional) 22.1.1.1.87. .status.collection 22.1.1.1.87.1. Description 22.1.1.1.87.1.1. Type object Property Type Description logs object (optional) 22.1.1.1.88. .status.collection.logs 22.1.1.1.88.1. Description 22.1.1.1.88.1.1. Type object Property Type Description fluentdStatus object (optional) 22.1.1.1.89. .status.collection.logs.fluentdStatus 22.1.1.1.89.1. Description 22.1.1.1.89.1.1. Type object Property Type Description clusterCondition object (optional) daemonSet string (optional) nodes object (optional) pods string (optional) 22.1.1.1.90. .status.collection.logs.fluentdStatus.clusterCondition 22.1.1.1.90.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 22.1.1.1.90.1.1. Type object 22.1.1.1.91. .status.collection.logs.fluentdStatus.nodes 22.1.1.1.91.1. Description 22.1.1.1.91.1.1. Type object 22.1.1.1.92. .status.conditions 22.1.1.1.92.1. Description 22.1.1.1.92.1.1. Type object 22.1.1.1.93. .status.curation 22.1.1.1.93.1. Description 22.1.1.1.93.1.1. Type object Property Type Description curatorStatus array (optional) 22.1.1.1.94. .status.curation.curatorStatus[] 22.1.1.1.94.1. Description 22.1.1.1.94.1.1. Type array Property Type Description clusterCondition object (optional) cronJobs string (optional) schedules string (optional) suspended bool (optional) 22.1.1.1.95. .status.curation.curatorStatus[].clusterCondition 22.1.1.1.95.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 22.1.1.1.95.1.1. Type object 22.1.1.1.96. .status.logStore 22.1.1.1.96.1. Description 22.1.1.1.96.1.1. Type object Property Type Description elasticsearchStatus array (optional) 22.1.1.1.97. .status.logStore.elasticsearchStatus[] 22.1.1.1.97.1. Description 22.1.1.1.97.1.1. Type array Property Type Description cluster object (optional) clusterConditions object (optional) clusterHealth string (optional) clusterName string (optional) deployments array (optional) nodeConditions object (optional) nodeCount int (optional) pods object (optional) replicaSets array (optional) shardAllocationEnabled string (optional) statefulSets array (optional) 22.1.1.1.98. .status.logStore.elasticsearchStatus[].cluster 22.1.1.1.98.1. Description 22.1.1.1.98.1.1. Type object Property Type Description activePrimaryShards int The number of Active Primary Shards for the Elasticsearch Cluster activeShards int The number of Active Shards for the Elasticsearch Cluster initializingShards int The number of Initializing Shards for the Elasticsearch Cluster numDataNodes int The number of Data Nodes for the Elasticsearch Cluster numNodes int The number of Nodes for the Elasticsearch Cluster pendingTasks int relocatingShards int The number of Relocating Shards for the Elasticsearch Cluster status string The current Status of the Elasticsearch Cluster unassignedShards int The number of Unassigned Shards for the Elasticsearch Cluster 22.1.1.1.99. 
.status.logStore.elasticsearchStatus[].clusterConditions 22.1.1.1.99.1. Description 22.1.1.1.99.1.1. Type object 22.1.1.1.100. .status.logStore.elasticsearchStatus[].deployments[] 22.1.1.1.100.1. Description 22.1.1.1.100.1.1. Type array 22.1.1.1.101. .status.logStore.elasticsearchStatus[].nodeConditions 22.1.1.1.101.1. Description 22.1.1.1.101.1.1. Type object 22.1.1.1.102. .status.logStore.elasticsearchStatus[].pods 22.1.1.1.102.1. Description 22.1.1.1.102.1.1. Type object 22.1.1.1.103. .status.logStore.elasticsearchStatus[].replicaSets[] 22.1.1.1.103.1. Description 22.1.1.1.103.1.1. Type array 22.1.1.1.104. .status.logStore.elasticsearchStatus[].statefulSets[] 22.1.1.1.104.1. Description 22.1.1.1.104.1.1. Type array 22.1.1.1.105. .status.visualization 22.1.1.1.105.1. Description 22.1.1.1.105.1.1. Type object Property Type Description kibanaStatus array (optional) 22.1.1.1.106. .status.visualization.kibanaStatus[] 22.1.1.1.106.1. Description 22.1.1.1.106.1.1. Type array Property Type Description clusterCondition object (optional) deployment string (optional) pods string (optional) The status for each of the Kibana pods for the Visualization component replicaSets array (optional) replicas int (optional) 22.1.1.1.107. .status.visualization.kibanaStatus[].clusterCondition 22.1.1.1.107.1. Description 22.1.1.1.107.1.1. Type object 22.1.1.1.108. .status.visualization.kibanaStatus[].replicaSets[] 22.1.1.1.108.1. Description 22.1.1.1.108.1.1. Type array | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/api-reference |
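Read together, the ClusterLogForwarder fields documented above combine into a resource such as the following sketch. The input, output, and pipeline names ( my-app-logs , my-loki , forward-app-logs ), the namespace and label selectors, and the Loki URL are placeholders; the apiVersion and namespace follow the usual logging 5.6 conventions and should be checked against the CRD installed in your cluster.

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
    - name: my-app-logs                 # custom input filtering application logs
      application:
        namespaces:
          - my-project
        selector:
          matchLabels:
            app: my-app
  outputs:
    - name: my-loki                     # custom output; "default" refers to the built-in log store
      type: loki
      url: https://loki.example.com:3100
      secret:
        name: loki-credentials          # secret in the namespace configured for log forwarder secrets
      tls:
        insecureSkipVerify: false
  pipelines:
    - name: forward-app-logs
      inputRefs:
        - my-app-logs
      outputRefs:
        - my-loki
        - default
      labels:
        team: my-team
      parse: json                       # enables parsing of log entries into structured logs
      detectMultilineErrors: true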
26.6. Installing Third-Party Certificates for HTTP or LDAP | 26.6. Installing Third-Party Certificates for HTTP or LDAP Installing a new SSL server certificate for the Apache Web Server, the Directory Server, or both replaces the current SSL certificate with a new one. To do this, you need: your private SSL key ( ssl.key in the procedure below) your SSL certificate ( ssl.crt in the procedure below) For a list of accepted formats of the key and certificate, see the ipa-server-certinstall (1) man page. Prerequisites The ssl.crt certificate must be signed by a CA known by the service you are loading the certificate into. If this is not the case, install the CA certificate of the CA that signed ssl.crt into IdM, as described in Section 26.3, "Installing a CA Certificate Manually" . This ensures that IdM recognizes the CA, and thus accepts ssl.crt . Installing the Third-Party Certificate Use the ipa-server-certinstall utility to install the certificate. Specify where you want to install it: --http installs the certificate in the Apache Web Server --dirsrv installs the certificate on the Directory Server For example, to install the SSL certificate into both: Restart the server into which you installed the certificate. To restart the Apache Web Server: To restart the Directory Server: To verify that the certificate has been correctly installed, make sure it is present in the certificate database. To display the Apache certificate database: To display the Directory Server certificate database: | [
"ipa-server-certinstall --http --dirsrv ssl.key ssl.crt",
"systemctl restart httpd.service",
"systemctl restart dirsrv@ REALM .service",
"certutil -L -d /etc/httpd/alias",
"certutil -L -d /etc/dirsrv/slapd- REALM /"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/third-party-certs-http-ldap |
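Before running ipa-server-certinstall, it can be worth confirming that ssl.crt really does chain to a CA that IdM already trusts. An optional check with OpenSSL might look like the following, where ca.crt is a hypothetical local copy of the issuing CA certificate:

# Inspect the subject, issuer, and validity period of the certificate
openssl x509 -in ssl.crt -noout -subject -issuer -dates

# Confirm that the certificate verifies against the issuing CA
openssl verify -CAfile ca.crt ssl.crt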
Chapter 8. Cruise Control for cluster rebalancing | Chapter 8. Cruise Control for cluster rebalancing You can deploy Cruise Control to your AMQ Streams cluster and use it to rebalance the Kafka cluster. Cruise Control is an open source system for automating Kafka operations, such as monitoring cluster workload, rebalancing a cluster based on predefined constraints, and detecting and fixing anomalies. It consists of four main components- the Load Monitor, the Analyzer, the Anomaly Detector, and the Executor- and a REST API for client interactions. AMQ Streams utilizes the REST API to support the following Cruise Control features: Generating optimization proposals from multiple optimization goals . Rebalancing a Kafka cluster based on an optimization proposal. Other Cruise Control features are not currently supported, including anomaly detection, notifications, write-your-own goals, and changing the topic replication factor. Example YAML files for Cruise Control are provided in examples/cruise-control/ . 8.1. Why use Cruise Control? Cruise Control reduces the time and effort involved in running an efficient and balanced Kafka cluster. A typical cluster can become unevenly loaded over time. Partitions that handle large amounts of message traffic might be unevenly distributed across the available brokers. To rebalance the cluster, administrators must monitor the load on brokers and manually reassign busy partitions to brokers with spare capacity. Cruise Control automates the cluster rebalancing process. It constructs a workload model of resource utilization for the cluster- based on CPU, disk, and network load- and generates optimization proposals (that you can approve or reject) for more balanced partition assignments. A set of configurable optimization goals is used to calculate these proposals. When you approve an optimization proposal, Cruise Control applies it to your Kafka cluster. When the cluster rebalancing operation is complete, the broker pods are used more effectively and the Kafka cluster is more evenly balanced. Additional resources Cruise Control Wiki 8.2. Optimization goals overview To rebalance a Kafka cluster, Cruise Control uses optimization goals to generate optimization proposals , which you can approve or reject. Optimization goals are constraints on workload redistribution and resource utilization across a Kafka cluster. AMQ Streams supports most of the optimization goals developed in the Cruise Control project. The supported goals, in the default descending order of priority, are as follows: Rack-awareness Replica capacity Capacity : Disk capacity, Network inbound capacity, Network outbound capacity, CPU capacity Replica distribution Potential network output Resource distribution : Disk utilization distribution, Network inbound utilization distribution, Network outbound utilization distribution, CPU utilization distribution Note The resource distribution goals are controlled using capacity limits on broker resources. Leader bytes-in rate distribution Topic replica distribution Leader replica distribution Preferred leader election For more information on each optimization goal, see Goals in the Cruise Control Wiki. Note Intra-broker disk goals, "Write your own" goals, and Kafka assigner goals are not yet supported. Goals configuration in AMQ Streams custom resources You configure optimization goals in Kafka and KafkaRebalance custom resources. 
Cruise Control has configurations for hard optimization goals that must be satisfied, as well as master , default , and user-provided optimization goals. Optimization goals for resource distribution (disk, network inbound, network outbound, and CPU) are subject to capacity limits on broker resources. The following sections describe each goal configuration in more detail. Hard goals and soft goals Hard goals are goals that must be satisfied in optimization proposals. Goals that are not configured as hard goals are known as soft goals . You can think of soft goals as best effort goals: they do not need to be satisfied in optimization proposals, but are included in optimization calculations. An optimization proposal that violates one or more soft goals, but satisfies all hard goals, is valid. Cruise Control will calculate optimization proposals that satisfy all the hard goals and as many soft goals as possible (in their priority order). An optimization proposal that does not satisfy all the hard goals is rejected by Cruise Control and not sent to the user for approval. Note For example, you might have a soft goal to distribute a topic's replicas evenly across the cluster (the topic replica distribution goal). Cruise Control will ignore this goal if doing so enables all the configured hard goals to be met. In Cruise Control, the following master optimization goals are preset as hard goals: You configure hard goals in the Cruise Control deployment configuration, by editing the hard.goals property in Kafka.spec.cruiseControl.config . To inherit the preset hard goals from Cruise Control, do not specify the hard.goals property in Kafka.spec.cruiseControl.config To change the preset hard goals, specify the desired goals in the hard.goals property, using their fully-qualified domain names. Example Kafka configuration for hard optimization goals apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal # ... Increasing the number of configured hard goals will reduce the likelihood of Cruise Control generating valid optimization proposals. If skipHardGoalCheck: true is specified in the KafkaRebalance custom resource, Cruise Control does not check that the list of user-provided optimization goals (in KafkaRebalance.spec.goals ) contains all the configured hard goals ( hard.goals ). Therefore, if some, but not all, of the user-provided optimization goals are in the hard.goals list, Cruise Control will still treat them as hard goals even if skipHardGoalCheck: true is specified. Master optimization goals The master optimization goals are available to all users. Goals that are not listed in the master optimization goals are not available for use in Cruise Control operations. Unless you change the Cruise Control deployment configuration , AMQ Streams will inherit the following master optimization goals from Cruise Control, in descending priority order: Six of these goals are preset as hard goals . To reduce complexity, we recommend that you use the inherited master optimization goals, unless you need to completely exclude one or more goals from use in KafkaRebalance resources. 
The priority order of the master optimization goals can be modified, if desired, in the configuration for default optimization goals . You configure master optimization goals, if necessary, in the Cruise Control deployment configuration: Kafka.spec.cruiseControl.config.goals To accept the inherited master optimization goals, do not specify the goals property in Kafka.spec.cruiseControl.config . If you need to modify the inherited master optimization goals, specify a list of goals, in descending priority order, in the goals configuration option. Note If you change the inherited master optimization goals, you must ensure that the hard goals, if configured in the hard.goals property in Kafka.spec.cruiseControl.config , are a subset of the master optimization goals that you configured. Otherwise, errors will occur when generating optimization proposals. Default optimization goals Cruise Control uses the default optimization goals to generate the cached optimization proposal . For more information about the cached optimization proposal, see Section 8.3, "Optimization proposals overview" . You can override the default optimization goals by setting user-provided optimization goals in a KafkaRebalance custom resource. Unless you specify default.goals in the Cruise Control deployment configuration , the master optimization goals are used as the default optimization goals. In this case, the cached optimization proposal is generated using the master optimization goals. To use the master optimization goals as the default goals, do not specify the default.goals property in Kafka.spec.cruiseControl.config . To modify the default optimization goals, edit the default.goals property in Kafka.spec.cruiseControl.config . You must use a subset of the master optimization goals. Example Kafka configuration for default optimization goals apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal # ... If no default optimization goals are specified, the cached proposal is generated using the master optimization goals. User-provided optimization goals User-provided optimization goals narrow down the configured default goals for a particular optimization proposal. You can set them, as required, in spec.goals in a KafkaRebalance custom resource: User-provided optimization goals can generate optimization proposals for different scenarios. For example, you might want to optimize leader replica distribution across the Kafka cluster without considering disk capacity or disk utilization. So, you create a KafkaRebalance custom resource containing a single user-provided goal for leader replica distribution. User-provided optimization goals must: Include all configured hard goals , or an error occurs Be a subset of the master optimization goals To ignore the configured hard goals when generating an optimization proposal, add the skipHardGoalCheck: true property to the KafkaRebalance custom resource. See Section 8.7, "Generating optimization proposals" . Additional resources Section 8.5, "Cruise Control configuration" Configurations in the Cruise Control Wiki. 8.3. 
Optimization proposals overview An optimization proposal is a summary of proposed changes that would produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers. Each optimization proposal is based on the set of optimization goals that was used to generate it, subject to any configured capacity limits on broker resources . An optimization proposal is contained in the Status.Optimization Result property of a KafkaRebalance custom resource. The information provided is a summary of the full optimization proposal. Use the summary to decide whether to: Approve the optimization proposal. This instructs Cruise Control to apply the proposal to the Kafka cluster and start a cluster rebalance operation. Reject the optimization proposal. You can change the optimization goals and then generate another proposal. All optimization proposals are dry runs : you cannot approve a cluster rebalance without first generating an optimization proposal. There is no limit to the number of optimization proposals that can be generated. Cached optimization proposal Cruise Control maintains a cached optimization proposal based on the configured default optimization goals. Generated from the workload model, the cached optimization proposal is updated every 15 minutes to reflect the current state of the Kafka cluster. If you generate an optimization proposal using the default optimization goals, Cruise Control returns the most recent cached proposal. To change the cached optimization proposal refresh interval, edit the proposal.expiration.ms setting in the Cruise Control deployment configuration. Consider a shorter interval for fast changing clusters, although this increases the load on the Cruise Control server. Contents of optimization proposals The following table describes the properties contained in an optimization proposal: Table 8.1. Properties contained in an optimization proposal JSON property Description numIntraBrokerReplicaMovements The total number of partition replicas that will be transferred between the disks of the cluster's brokers. Performance impact during rebalance operation : Relatively high, but lower than numReplicaMovements . excludedBrokersForLeadership Not yet supported. An empty list is returned. numReplicaMovements The number of partition replicas that will be moved between separate brokers. Performance impact during rebalance operation : Relatively high. onDemandBalancednessScoreBefore, onDemandBalancednessScoreAfter A measurement of the overall balancedness of a Kafka Cluster, before and after the optimization proposal was generated. The score is calculated by subtracting the sum of the BalancednessScore of each violated soft goal from 100. Cruise Control assigns a BalancednessScore to every optimization goal based on several factors, including priority- the goal's position in the list of default.goals or user-provided goals. The Before score is based on the current configuration of the Kafka cluster. The After score is based on the generated optimization proposal. intraBrokerDataToMoveMB The sum of the size of each partition replica that will be moved between disks on the same broker (see also numIntraBrokerReplicaMovements ). Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. Moving a large amount of data between disks on the same broker has less impact than between separate brokers (see dataToMoveMB ). 
recentWindows The number of metrics windows upon which the optimization proposal is based. dataToMoveMB The sum of the size of each partition replica that will be moved to a separate broker (see also numReplicaMovements ). Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. monitoredPartitionsPercentage The percentage of partitions in the Kafka cluster covered by the optimization proposal. Affected by the number of excludedTopics . excludedTopics If you specified a regular expression in the spec.excludedTopicsRegex property in the KafkaRebalance resource, all topic names matching that expression are listed here. These topics are excluded from the calculation of partition replica/leader movements in the optimization proposal. numLeaderMovements The number of partitions whose leaders will be switched to different replicas. This involves a change to ZooKeeper configuration. Performance impact during rebalance operation : Relatively low. excludedBrokersForReplicaMove Not yet supported. An empty list is returned. Additional resources Section 8.2, "Optimization goals overview" Section 8.7, "Generating optimization proposals" Section 8.8, "Approving an optimization proposal" 8.4. Rebalance performance tuning overview You can adjust several performance tuning options for cluster rebalances. These options control how partition replica and leadership movements in a rebalance are executed, as well as the bandwidth that is allocated to a rebalance operation. Partition reassignment commands Optimization proposals are comprised of separate partition reassignment commands. When you approve a proposal, the Cruise Control server applies these commands to the Kafka cluster. A partition reassignment command consists of either of the following types of operations: Partition movement: Involves transferring the partition replica and its data to a new location. Partition movements can take one of two forms: Inter-broker movement: The partition replica is moved to a log directory on a different broker. Intra-broker movement: The partition replica is moved to a different log directory on the same broker. Leadership movement: This involves switching the leader of the partition's replicas. Cruise Control issues partition reassignment commands to the Kafka cluster in batches. The performance of the cluster during the rebalance is affected by the number of each type of movement contained in each batch. Replica movement strategies Cluster rebalance performance is also influenced by the replica movement strategy that is applied to the batches of partition reassignment commands. By default, Cruise Control uses the BaseReplicaMovementStrategy , which simply applies the commands in the order they were generated. However, if there are some very large partition reassignments early in the proposal, this strategy can slow down the application of the other reassignments. Cruise Control provides three alternative replica movement strategies that can be applied to optimization proposals: PrioritizeSmallReplicaMovementStrategy : Order reassignments in order of ascending size. PrioritizeLargeReplicaMovementStrategy : Order reassignments in order of descending size. PostponeUrpReplicaMovementStrategy : Prioritize reassignments for replicas of partitions which have no out-of-sync replicas. These strategies can be configured as a sequence. The first strategy attempts to compare two partition reassignments using its internal logic. 
If the reassignments are equivalent, then it passes them to the strategy in the sequence to decide the order, and so on. Rebalance tuning options Cruise Control provides several configuration options for tuning the rebalance parameters discussed above. You can set these tuning options at either the Cruise Control server or optimization proposal levels: The Cruise Control server setting can be set in the Kafka custom resource under Kafka.spec.cruiseControl.config . The individual rebalance performance configurations can be set under KafkaRebalance.spec . The relevant configurations are summarized below: Server and KafkaRebalance Configuration Description Default Value num.concurrent.partition.movements.per.broker The maximum number of inter-broker partition movements in each partition reassignment batch 5 concurrentPartitionMovementsPerBroker num.concurrent.intra.broker.partition.movements The maximum number of intra-broker partition movements in each partition reassignment batch 2 concurrentIntraBrokerPartitionMovements num.concurrent.leader.movements The maximum number of partition leadership changes in each partition reassignment batch 1000 concurrentLeaderMovements default.replication.throttle The bandwidth (in bytes per second) to be assigned to the reassigning of partitions No Limit replicationThrottle default.replica.movement.strategies The list of strategies (in priority order) used to determine the order in which partition reassignment commands are executed for generated proposals. For the server setting, use a comma separated string with the fully qualified names of the strategy class (add com.linkedin.kafka.cruisecontrol.executor.strategy. to the start of each class name). For the KafkaRebalance resource setting use a YAML array of strategy class names. BaseReplicaMovementStrategy replicaMovementStrategies Changing the default settings affects the length of time that the rebalance takes to complete, as well as the load placed on the Kafka cluster during the rebalance. Using lower values reduces the load but increases the amount of time taken, and vice versa. Additional resources Section B.70, " CruiseControlSpec schema reference" . Section B.141, " KafkaRebalanceSpec schema reference" . 8.5. Cruise Control configuration The config property in Kafka.spec.cruiseControl contains configuration options as keys with values as one of the following JSON types: String Number Boolean Note Strings that look like JSON or YAML will need to be explicitly quoted. You can specify and configure all the options listed in the "Configurations" section of the Cruise Control documentation , apart from those managed directly by AMQ Streams. Specifically, you cannot modify configuration options with keys equal to or starting with one of the keys mentioned here . If restricted options are specified, they are ignored and a warning message is printed to the Cluster Operator log file. All the supported options are passed to Cruise Control. An example Cruise Control configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... config: default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 # ... Capacity configuration Cruise Control uses capacity limits to determine if optimization goals for resource distribution are being broken. 
There are four goals of this type: DiskUsageDistributionGoal - Disk utilization distribution CpuUsageDistributionGoal - CPU utilization distribution NetworkInboundUsageDistributionGoal - Network inbound utilization distribution NetworkOutboundUsageDistributionGoal - Network outbound utilization distribution You specify capacity limits for Kafka broker resources in the brokerCapacity property in Kafka.spec.cruiseControl . They are enabled by default and you can change their default values. Capacity limits can be set for the following broker resources, using the standard OpenShift byte units (K, M, G and T) or their bibyte (power of two) equivalents (Ki, Mi, Gi and Ti): disk - Disk storage per broker (Default: 100000Mi) cpuUtilization - CPU utilization as a percentage (Default: 100) inboundNetwork - Inbound network throughput in byte units per second (Default: 10000KiB/s) outboundNetwork - Outbound network throughput in byte units per second (Default: 10000KiB/s) Because AMQ Streams Kafka brokers are homogeneous, Cruise Control applies the same capacity limits to every broker it is monitoring. An example Cruise Control brokerCapacity configuration using bibyte units apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... brokerCapacity: disk: 100Gi cpuUtilization: 100 inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s # ... Additional resources For more information, refer to the Section B.72, " BrokerCapacity schema reference" . Logging configuration Cruise Control has its own configurable logger: cruisecontrol.root.logger Cruise Control uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka # ... spec: cruiseControl: # ... logging: type: inline loggers: cruisecontrol.root.logger: "INFO" # ... External logging apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka # ... spec: cruiseControl: # ... logging: type: external name: customConfigMap # ... 8.6. Deploying Cruise Control To deploy Cruise Control to your AMQ Streams cluster, define the configuration using the cruiseControl property in the Kafka resource, and then create or update the resource. Deploy one instance of Cruise Control per Kafka cluster. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the Kafka resource and add the cruiseControl property. The properties you can configure are shown in this example configuration: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: brokerCapacity: 1 inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s # ... config: 2 default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal # ... cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 # ... 
resources: 3 requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logging: 4 type: inline loggers: cruisecontrol.root.logger: "INFO" template: 5 pod: metadata: labels: label1: value1 securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 6 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 7 initialDelaySeconds: 15 timeoutSeconds: 5 # ... 1 Specifies capacity limits for broker resources. For more information, see Capacity configuration . 2 Defines the Cruise Control configuration, including the default optimization goals (in default.goals ) and any customizations to the master optimization goals (in goals ) or the hard goals (in hard.goals ). You can provide any standard Cruise Control configuration option apart from those managed directly by AMQ Streams. For more information on configuring optimization goals, see Section 8.2, "Optimization goals overview" . 3 CPU and memory resources reserved for Cruise Control. For more information, see Section 2.1.11, "CPU and memory resources" . 4 Defined loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom ConfigMap must be placed under the log4j.properties key. Cruise Control has a single logger named cruisecontrol.root.logger . You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. For more information, see Logging configuration . 5 Customization of deployment templates and pods . 6 Healthcheck readiness probes . 7 Healthcheck liveness probes . Create or update the resource: oc apply -f kafka.yaml Verify that Cruise Control was successfully deployed: oc get deployments -l app.kubernetes.io/name=strimzi Auto-created topics The following table shows the three topics that are automatically created when Cruise Control is deployed. These topics are required for Cruise Control to work properly and must not be deleted or changed. Table 8.2. Auto-created topics Auto-created topic Created by Function strimzi.cruisecontrol.metrics AMQ Streams Metrics Reporter Stores the raw metrics from the Metrics Reporter in each Kafka broker. strimzi.cruisecontrol.partitionmetricsamples Cruise Control Stores the derived metrics for each partition. These are created by the Metric Sample Aggregator . strimzi.cruisecontrol.modeltrainingsamples Cruise Control Stores the metrics samples used to create the Cluster Workload Model . To prevent the removal of records that are needed by Cruise Control, log compaction is disabled in the auto-created topics. What to do After configuring and deploying Cruise Control, you can generate optimization proposals . Additional resources Section B.71, " CruiseControlTemplate schema reference" . 8.7. Generating optimization proposals When you create or update a KafkaRebalance resource, Cruise Control generates an optimization proposal for the Kafka cluster based on the configured optimization goals . Analyze the information in the optimization proposal and decide whether to approve it. Prerequisites You have deployed Cruise Control to your AMQ Streams cluster. You have configured optimization goals and, optionally, capacity limits on broker resources . 
Procedure Create a KafkaRebalance resource: To use the default optimization goals defined in the Kafka resource, leave the spec property empty: apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: {} To configure user-provided optimization goals instead of using the default goals, add the goals property and enter one or more goals. In the following example, rack awareness and replica capacity are configured as user-provided optimization goals: apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal To ignore the configured hard goals, add the skipHardGoalCheck: true property: apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true Create or update the resource: oc apply -f your-file The Cluster Operator requests the optimization proposal from Cruise Control. This might take a few minutes depending on the size of the Kafka cluster. Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Cruise Control returns one of two statuses: PendingProposal : The rebalance operator is polling the Cruise Control API to check if the optimization proposal is ready. ProposalReady : The optimization proposal is ready for review and, if desired, approval. The optimization proposal is contained in the Status.Optimization Result property of the KafkaRebalance resource. Review the optimization proposal. oc describe kafkarebalance rebalance-cr-name Here is an example proposal: Status: Conditions: Last Transition Time: 2020-05-19T13:50:12.533Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 0 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 0 Num Replica Movements: 26 On Demand Balancedness Score After: 81.8666802863978 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 1 Session Id: 05539377-ca7b-45ef-b359-e13564f1458c The properties in the Optimization Result section describe the pending cluster rebalance operation. For descriptions of each property, see Contents of optimization proposals . What to do Section 8.8, "Approving an optimization proposal" Additional resources Section 8.3, "Optimization proposals overview" 8.8. Approving an optimization proposal You can approve an optimization proposal generated by Cruise Control, if its status is ProposalReady . Cruise Control will then apply the optimization proposal to the Kafka cluster, reassigning partitions to brokers and changing partition leadership. Caution This is not a dry run. Before you approve an optimization proposal, you must: Refresh the proposal in case it has become out of date. Carefully review the contents of the proposal . Prerequisites You have generated an optimization proposal from Cruise Control. The KafkaRebalance custom resource status is ProposalReady . Procedure Perform these steps for the optimization proposal that you want to approve: Unless the optimization proposal is newly generated, check that it is based on current information about the state of the Kafka cluster. 
To do so, refresh the optimization proposal to make sure it uses the latest cluster metrics: Annotate the KafkaRebalance resource in OpenShift with refresh : oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Wait until the status changes to ProposalReady . Approve the optimization proposal that you want Cruise Control to apply. Annotate the KafkaRebalance resource in OpenShift: oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=approve The Cluster Operator detects the annotated resource and instructs Cruise Control to rebalance the Kafka cluster. Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Cruise Control returns one of three statuses: Rebalancing: The cluster rebalance operation is in progress. Ready: The cluster rebalancing operation completed successfully. The KafkaRebalance custom resource cannot be reused. NotReady: An error occurred- see Section 8.10, "Fixing problems with a KafkaRebalance resource" . Additional resources Section 8.3, "Optimization proposals overview" Section 8.9, "Stopping a cluster rebalance" 8.9. Stopping a cluster rebalance Once started, a cluster rebalance operation might take some time to complete and affect the overall performance of the Kafka cluster. If you want to stop a cluster rebalance operation that is in progress, apply the stop annotation to the KafkaRebalance custom resource. This instructs Cruise Control to finish the current batch of partition reassignments and then stop the rebalance. When the rebalance has stopped, completed partition reassignments have already been applied; therefore, the state of the Kafka cluster is different when compared to prior to the start of the rebalance operation. If further rebalancing is required, you should generate a new optimization proposal. Note The performance of the Kafka cluster in the intermediate (stopped) state might be worse than in the initial state. Prerequisites You have approved the optimization proposal by annotating the KafkaRebalance custom resource with approve . The status of the KafkaRebalance custom resource is Rebalancing . Procedure Annotate the KafkaRebalance resource in OpenShift: oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=stop Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Wait until the status changes to Stopped . Additional resources Section 8.3, "Optimization proposals overview" 8.10. Fixing problems with a KafkaRebalance resource If an issue occurs when creating a KafkaRebalance resource or interacting with Cruise Control, the error is reported in the resource status, along with details of how to fix it. The resource also moves to the NotReady state. To continue with the cluster rebalance operation, you must fix the problem in the KafkaRebalance resource itself or with the overall Cruise Control deployment. Problems might include the following: A misconfigured parameter in the KafkaRebalance resource. The strimzi.io/cluster label for specifying the Kafka cluster in the KafkaRebalance resource is missing. The Cruise Control server is not deployed as the cruiseControl property in the Kafka resource is missing. The Cruise Control server is not reachable. After fixing the issue, you need to add the refresh annotation to the KafkaRebalance resource. During a "refresh", a new optimization proposal is requested from the Cruise Control server. 
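As an example of narrowing down the cause, a missing strimzi.io/cluster label (one of the problems listed above) can be confirmed with a command along these lines, where my-rebalance is a placeholder name:
oc get kafkarebalance my-rebalance --show-labels
If strimzi.io/cluster=<kafka-cluster-name> does not appear in the LABELS column, add the label before applying the refresh annotation.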
Prerequisites You have approved an optimization proposal . The status of the KafkaRebalance custom resource for the rebalance operation is NotReady . Procedure Get information about the error from the KafkaRebalance status: oc describe kafkarebalance rebalance-cr-name Attempt to resolve the issue in the KafkaRebalance resource. Annotate the KafkaRebalance resource in OpenShift: oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Wait until the status changes to PendingProposal , or directly to ProposalReady . Additional resources Section 8.3, "Optimization proposals overview" | [
"RackAwareGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal #",
"RackAwareGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal; PreferredLeaderElectionGoal",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal #",
"KafkaRebalance.spec.goals",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # config: default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 #",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # brokerCapacity: disk: 100Gi cpuUtilization: 100 inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s #",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: cruiseControl: # logging: type: inline loggers: cruisecontrol.root.logger: \"INFO\" #",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: cruiseControl: # logging: type: external name: customConfigMap #",
"apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: brokerCapacity: 1 inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s # config: 2 default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal # cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 # resources: 3 requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logging: 4 type: inline loggers: cruisecontrol.root.logger: \"INFO\" template: 5 pod: metadata: labels: label1: value1 securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 6 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 7 initialDelaySeconds: 15 timeoutSeconds: 5",
"apply -f kafka.yaml",
"get deployments -l app.kubernetes.io/name=strimzi",
"apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: {}",
"apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal",
"apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true",
"apply -f your-file",
"describe kafkarebalance rebalance-cr-name",
"describe kafkarebalance rebalance-cr-name",
"Status: Conditions: Last Transition Time: 2020-05-19T13:50:12.533Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 0 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 0 Num Replica Movements: 26 On Demand Balancedness Score After: 81.8666802863978 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 1 Session Id: 05539377-ca7b-45ef-b359-e13564f1458c",
"annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh",
"describe kafkarebalance rebalance-cr-name",
"annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=approve",
"describe kafkarebalance rebalance-cr-name",
"annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=stop",
"describe kafkarebalance rebalance-cr-name",
"describe kafkarebalance rebalance-cr-name",
"annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh",
"describe kafkarebalance rebalance-cr-name"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_openshift/cruise-control-concepts-str |
Chapter 1. Bare Metal Provisioning service (ironic) functionality | Chapter 1. Bare Metal Provisioning service (ironic) functionality You use the Bare Metal Provisioning service (ironic) components to provision and manage physical machines as bare metal instances for your cloud users. To provision and manage bare metal instances, the Bare Metal Provisioning service interacts with the following Red Hat OpenStack Platform (RHOSP) services in the overcloud: The Compute service (nova) provides scheduling, tenant quotas, and a user-facing API for virtual machine instance management. The Bare Metal Provisioning service provides the administrative API for hardware management. The Identity service (keystone) provides request authentication and assists the Bare Metal Provisioning service to locate other RHOSP services. The Image service (glance) manages disk and instance images and image metadata. The Networking service (neutron) provides DHCP and network configuration, and provisions the virtual or physical networks that instances connect to on boot. The Object Storage service (swift) exposes temporary image URLs for some drivers. Bare Metal Provisioning service components The Bare Metal Provisioning service consists of services, named ironic-* . The following services are the core Bare Metal Provisioning services: Bare Metal Provisioning API ( ironic-api ) This service provides the external REST API to users. The API sends application requests to the Bare Metal Provisioning conductor over remote procedure call (RPC). Bare Metal Provisioning conductor ( ironic-conductor ) This service uses drivers to perform the following bare metal node management tasks: Adds, edits, and deletes bare metal nodes. Powers bare metal nodes on and off with IPMI, Redfish, or other vendor-specific protocol. Provisions, deploys, and cleans bare metal nodes. Bare Metal Provisioning inspector ( ironic-inspector ) This service discovers the hardware properties of a bare metal node that are required for scheduling bare metal instances, and creates the Bare Metal Provisioning service ports for the discovered ethernet MACs. Bare Metal Provisioning database This database tracks hardware information and state. Message queue All services use this messaging service to communicate with each other, including implementing the RPC between ironic-api and ironic-conductor . Bare Metal Provisioning agent ( ironic-python-agent ) This service runs in a temporary ramdisk to provide ironic-conductor and ironic-inspector services with remote access, in-band hardware control, and hardware introspection. Provisioning a bare metal instance The Bare Metal Provisioning service uses iPXE to provision physical machines as bare metal instances. The following diagram outlines how the RHOSP services interact during the provisioning process when a cloud user launches a new bare metal instance with the default drivers. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_bare_metal_provisioning_service/assembly_bare-metal-provisioning-service-functionality |
Chapter 8. Migrating Directory Server 11 to Directory Server 12 | Chapter 8. Migrating Directory Server 11 to Directory Server 12 Learn about migrating from Red Hat Directory Server 11 to 12, including tasks that you must perform before the migration begins. Important Red Hat supports migration only from Red Hat Directory Server 10 or 11 to version 12. To migrate Directory Server from an earlier version, you must perform incremental migrations to Directory Server 10 or 11. Red Hat does not support an in-place upgrade of Directory Server 10 or 11 servers to version 12 by using the leapp upgrade tool. For migration, you can use one of the following ways: If you have a replication topology, use the replication method. If you have a disconnected topology without planned replication between Directory Server 10 and Directory Server 12, or if your database is more than 1 GB, use the export and import method. 8.1. Prerequisites The existing Directory Server installation runs on version 11 and has all available updates installed. You installed a Directory Server 12 host and created an instance on the host. 8.2. Migrating to Directory Server 12 using the replication method In a replication topology, use the replication method to migrate to Directory Server 12. Procedure On the Directory Server 12 host, enable replication, but do not create a replication agreement. For details about enabling replication, see the Configuring and managing replication documentation for Red Hat Directory Server 12 . On the Directory Server 11 host, enable replication and create a replication agreement that points to the Directory Server 12 host. For more information, see the Multi-Supplier Replication section in the Red Hat Directory Server 11 Administrator Guide . Important If you used a custom configuration on the Directory Server 11 host, do not replace the dse.ldif configuration file on the Directory Server 12 host with the file from the Directory Server 11 host, because the dse.ldif layout changes between versions. Instead, use the dsconf utility or the web console to add the custom configuration for each parameter and plug-in that you require. Optional: Set up further Directory Server 12 hosts with replication agreements between Directory Server 12 hosts. Configure your clients to use only Directory Server 12 hosts. On the Directory Server 11 host, remove the replication agreements that point to the Directory Server 12 host. See Removing a Directory Server Instance from the Replication Topology in the Red Hat Directory Server 11 Administration Guide . Uninstall the Directory Server 11 hosts. See Uninstalling Directory Server in the Red Hat Directory Server 11 Installation Guide . 8.3. Migrating to Directory Server 12 using the export and import method Use the export and import method for migration in the following cases: You have instances without replication. Your database is more than 1 GB. Procedure Perform the following steps on the existing Directory Server 11 host: Stop and disable the dirsrv service: Export the backend. For example, to export the userRoot database and store it in the /var/lib/dirsrv/slapd- DS11_instance_name /migration.ldif file, run: Copy the following files to the new host where you want to install Directory Server 12: The /var/lib/dirsrv/slapd- DS11_instance_name /migration.ldif file that you exported in the previous step. The /etc/dirsrv/slapd- DS11_instance_name /dse.ldif configuration file.
Important Do not replace the dse.ldif configuration file on the Directory Server 12 host with the file from the Directory Server 11 host because the dse.ldif layout changes between versions. Keep the dse.ldif file for reference. The /etc/dirsrv/slapd- DS11_instance_name /schema/99user.ldif file, if you use a custom schema. If you want to migrate an instance with TLS enabled and reuse the same host name for the Directory Server 12 installation, copy the following files to the new host: /etc/dirsrv/slapd- DS11_instance_name /cert9.db /etc/dirsrv/slapd- DS11_instance_name /key4.db /etc/dirsrv/slapd- DS11_instance_name /pin.txt If you want to use the same host name and IP on the Directory Server 12 host, disconnect the old server from the network. Perform the following steps on the new Directory Server 12 host: Optional: Configure TLS encryption: If the new installation uses a different host name than the Directory Server 11 instance, see the Enabling TLS-encrypted connections to Directory Server section in the Securing Red Hat Directory Server documentation. To use the same host name as the Directory Server 11 installation: Stop the instance: Remove the Network Security Services (NSS) databases and the password file for Directory Server, if they already exist: Place the cert9.db , key4.db , and pin.txt files that you copied from the Directory Server 11 host in the /etc/dirsrv/slapd- DS12_instance_name / directory: Set the correct permissions for the NSS databases and the password file: Start the instance: If you used a custom schema, place the 99user.ldif file into the /etc/dirsrv/slapd- DS12_instance_name /schema/ directory, set appropriate permissions, and restart the instance: Place the /var/lib/dirsrv/slapd- DS11_instance_name /migration.ldif file that you copied from the Directory Server 11 host in the /var/lib/dirsrv/slapd- DS12_instance_name /ldif/ directory. Import the migration.ldif file to restore the userRoot database with all entries: Note that Directory Server requires the LDIF file that you want to import to be located in the /var/lib/dirsrv/slapd- DS12_instance_name / directory. Important If you used a custom configuration on the Directory Server 11 host, do not replace the dse.ldif configuration file on the Directory Server 12 host with the file from the Directory Server 11 host. Instead, use the dsconf utility or the web console to add the custom configuration manually for each parameter and plug-in that you require. | [
"dsctl DS11_instance_name stop systemctl disable dirsrv@ DS11_instance_name",
"dsctl DS11_instance_name db2ldif userroot /var/lib/dirsrv/slapd- DS11_instance_name /migration.ldif",
"dsctl DS12_instance_name stop",
"rm /etc/dirsrv/slapd- DS12_instance_name /cert*.db /etc/dirsrv/slapd- DS12_instance_name /key*.db /etc/dirsrv/slapd- DS12_instance_name /pin.txt",
"chown dirsrv:root /etc/dirsrv/slapd- DS12_instance_name /cert9.db /etc/dirsrv/slapd- DS12_instance_name /key4.db /etc/dirsrv/slapd- DS12_instance_name /pin.txt chmod 600 /etc/dirsrv/slapd- DS12_instance_name /cert9.db /etc/dirsrv/slapd- DS12_instance_name /key4.db /etc/dirsrv/slapd- DS12_instance_name /pin.txt",
"dsctl DS12_instance_name start",
"cp /etc/dirsrv/slapd- DS11_instance_name /schema/99user.ldif /etc/dirsrv/slapd- DS12_instance_name /schema/ chmod 644 /etc/dirsrv/slapd- DS12_instance_name /schema/99user.ldif chown root:root /etc/dirsrv/slapd- DS12_instance_name /schema/99user.ldif dsctl DS12_instance_name restart",
"dsconf -D 'cn=Directory Manager' ldap:// server.example.com backend import userRoot /var/lib/dirsrv/slapd- DS12_instance_name /ldif/migration.ldif"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/assembly_migrating-directory-server-11-to-directory-server-12_installing-rhds |
Chapter 1. The Ceph architecture | Chapter 1. The Ceph architecture Red Hat Ceph Storage cluster is a distributed data object store designed to provide excellent performance, reliability and scalability. Distributed object stores are the future of storage, because they accommodate unstructured data, and because clients can use modern object interfaces and legacy interfaces simultaneously. For example: APIs in many languages (C/C++, Java, Python) RESTful interfaces (S3/Swift) Block device interface Filesystem interface The power of Red Hat Ceph Storage cluster can transform your organization's IT infrastructure and your ability to manage vast amounts of data, especially for cloud computing platforms like Red Hat Enterprise Linux OSP. Red Hat Ceph Storage cluster delivers extraordinary scalability: thousands of clients accessing petabytes to exabytes of data and beyond. At the heart of every Ceph deployment is the Red Hat Ceph Storage cluster. It consists of three types of daemons: Ceph OSD Daemon: Ceph OSDs store data on behalf of Ceph clients. Additionally, Ceph OSDs utilize the CPU, memory and networking of Ceph nodes to perform data replication, erasure coding, rebalancing, recovery, monitoring and reporting functions. Ceph Monitor: A Ceph Monitor maintains a master copy of the Red Hat Ceph Storage cluster map with the current state of the Red Hat Ceph Storage cluster. Monitors require high consistency, and use Paxos to ensure agreement about the state of the Red Hat Ceph Storage cluster. Ceph Manager: The Ceph Manager maintains detailed information about placement groups, process metadata and host metadata in lieu of the Ceph Monitor, significantly improving performance at scale. The Ceph Manager handles execution of many of the read-only Ceph CLI queries, such as placement group statistics. The Ceph Manager also provides the RESTful monitoring APIs. Ceph client interfaces read data from and write data to the Red Hat Ceph Storage cluster. Clients need the following data to communicate with the Red Hat Ceph Storage cluster: The Ceph configuration file, or the cluster name (usually ceph ) and the monitor address. The pool name. The user name and the path to the secret key. Ceph clients maintain object IDs and the pool names where they store the objects. However, they do not need to maintain an object-to-OSD index or communicate with a centralized object index to look up object locations. To store and retrieve data, Ceph clients access a Ceph Monitor and retrieve the latest copy of the Red Hat Ceph Storage cluster map. Then, Ceph clients provide an object name and pool name to librados , which computes an object's placement group and the primary OSD for storing and retrieving data using the CRUSH (Controlled Replication Under Scalable Hashing) algorithm. The Ceph client connects to the primary OSD where it may perform read and write operations. There is no intermediary server, broker or bus between the client and the OSD. When an OSD stores data, it receives data from a Ceph client (whether the client is a Ceph Block Device, a Ceph Object Gateway, a Ceph Filesystem or another interface) and it stores the data as an object. Note An object ID is unique across the entire cluster, not just an OSD's storage media. Ceph OSDs store all data as objects in a flat namespace. There are no hierarchies of directories. An object has a cluster-wide unique identifier, binary data, and metadata consisting of a set of name/value pairs. Ceph clients define the semantics for the client's data format.
For example, the Ceph block device maps a block device image to a series of objects stored across the cluster. Note Objects consisting of a unique ID, data, and name/value paired metadata can represent both structured and unstructured data, as well as legacy and leading edge data storage interfaces. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/architecture_guide/the-ceph-architecture_arch |
4.2. Configuring Timeout Values for a Cluster | 4.2. Configuring Timeout Values for a Cluster When you create a cluster with the pcs cluster setup command, timeout values for the cluster are set to default values that should be suitable for most cluster configurations. If your system requires different timeout values, however, you can modify these values with the pcs cluster setup options summarized in Table 4.1, "Timeout Options" Table 4.1. Timeout Options Option Description --token timeout Sets the time in milliseconds until a token loss is declared after not receiving a token (default 1000 ms) --join timeout Sets the time in milliseconds to wait for join messages (default 50 ms) --consensus timeout Sets the time in milliseconds to wait for consensus to be achieved before starting a new round of membership configuration (default 1200 ms) --miss_count_const count Sets the maximum number of times that a message is checked for retransmission on receipt of a token before a retransmission occurs (default 5 messages) --fail_recv_const failures Specifies how many rotations of the token without receiving any messages, when messages should be received, may occur before a new configuration is formed (default 2500 failures) For example, the following command creates the cluster new_cluster and sets the token timeout value to 10000 milliseconds (10 seconds) and the join timeout value to 100 milliseconds. | [
"pcs cluster setup --name new_cluster nodeA nodeB --token 10000 --join 100"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-configtimeout-HAAR |
Chapter 10. Understanding and creating service accounts | Chapter 10. Understanding and creating service accounts 10.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that the component can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 10.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: $ oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: $ oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: $ oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44 10.3. Examples of granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project: $ oc policy add-role-to-user view system:serviceaccount:top-secret:robot Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret You can also grant access to a specific service account in a project.
For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> : $ oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name> To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. For example, to allow all service accounts in all projects to view resources in the my-project project: $ oc policy add-role-to-group view system:serviceaccounts -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts To allow all service accounts in the managers project to edit resources in the my-project project: $ oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers | [
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44",
"oc policy add-role-to-user view system:serviceaccount:top-secret:robot",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret",
"oc policy add-role-to-user <role_name> -z <service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>",
"oc policy add-role-to-group view system:serviceaccounts -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts",
"oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/authentication_and_authorization/understanding-and-creating-service-accounts |
Chapter 4. Pools overview | Chapter 4. Pools overview Ceph clients store data in pools. When you create pools, you are creating an I/O interface for clients to store data. From the perspective of a Ceph client, that is, block device, gateway, and the rest, interacting with the Ceph storage cluster is remarkably simple: Create a cluster handle. Connect the cluster handle to the cluster. Create an I/O context for reading and writing objects and their extended attributes. Creating a cluster handle and connecting to the cluster To connect to the Ceph storage cluster, the Ceph client needs the following details: The cluster name, which is ceph by default; the cluster name is not usually specified because it sounds ambiguous. An initial monitor address. Ceph clients usually retrieve these parameters from the Ceph configuration file at its default path, but a user might also specify the parameters on the command line. The Ceph client also provides a user name and secret key; authentication is on by default. Then, the client contacts the Ceph monitor cluster and retrieves a recent copy of the cluster map, including its monitors, OSDs and pools. Creating a pool I/O context To read and write data, the Ceph client creates an I/O context to a specific pool in the Ceph storage cluster. If the specified user has permissions for the pool, the Ceph client can read from and write to the specified pool. Ceph's architecture enables the storage cluster to provide this remarkably simple interface to Ceph clients so that clients might select one of the sophisticated storage strategies you define simply by specifying a pool name and creating an I/O context. Storage strategies are invisible to the Ceph client in all but capacity and performance. Similarly, the complexities of Ceph clients, such as mapping objects into a block device representation or providing an S3/Swift RESTful service, are invisible to the Ceph storage cluster. A pool provides you with resilience, placement groups, CRUSH rules, and quotas. Resilience : You can set how many OSDs are allowed to fail without losing data. For replicated pools, it is the desired number of copies or replicas of an object. A typical configuration stores an object and one additional copy, that is, size = 2 , but you can determine the number of copies or replicas. For erasure coded pools, it is the number of coding chunks, that is m=2 in the erasure code profile . Placement Groups : You can set the number of placement groups for the pool. A typical configuration uses approximately 50-100 placement groups per OSD to provide optimal balancing without using up too many computing resources. When setting up multiple pools, be careful to ensure you set a reasonable number of placement groups for both the pool and the cluster as a whole. CRUSH Rules : When you store data in a pool, a CRUSH rule mapped to the pool enables CRUSH to identify the rule for the placement of each object and its replicas, or chunks for erasure coded pools, in your cluster. You can create a custom CRUSH rule for your pool. Quotas : When you set quotas on a pool with the ceph osd pool set-quota command, you might limit the maximum number of objects or the maximum number of bytes stored in the specified pool. 4.1. Pools and storage strategies overview To manage pools, you can list, create, and remove pools. You can also view the utilization statistics for each pool. 4.2. Listing pool List your cluster's pools: Example 4.3.
Creating a pool Before creating pools, see the Configuration Guide for more details. It is better to adjust the default value for the number of placement groups, as the default value does not have to suit your needs: Example Create a replicated pool: Syntax Create an erasure-coded pool: Syntax Create a bulk pool: Syntax Where: POOL_NAME Description The name of the pool. It must be unique. Type String Required Yes. If not specified, it is set to the default value. Default ceph PG_NUM Description The total number of placement groups for the pool. See the Placement Groups section and the Ceph Placement Groups (PGs) per Pool Calculator for details on calculating a suitable number. The default value 8 is not suitable for most systems. Type Integer Required Yes Default 8 PGP_NUM Description The total number of placement groups for placement purposes. This value must be equal to the total number of placement groups, except for placement group splitting scenarios. Type Integer Required Yes. If not specified it is set to the default value. Default 8 replicated or erasure Description The pool type can be either replicated to recover from lost OSDs by keeping multiple copies of the objects or erasure to get a kind of generalized RAID5 capability. The replicated pools require more raw storage but implement all Ceph operations. The erasure-coded pools require less raw storage but only implement a subset of the available operations. Type String Required No Default replicated CRUSH_RULE_NAME Description The name of the CRUSH rule for the pool. The rule MUST exist. For replicated pools, the name is the rule specified by the osd_pool_default_crush_rule configuration setting. For erasure-coded pools the name is erasure-code if you specify the default erasure code profile or POOL_NAME otherwise. Ceph creates this rule with the specified name implicitly if the rule does not already exist. Type String Required No Default Uses erasure-code for an erasure-coded pool. For replicated pools, it uses the value of the osd_pool_default_crush_rule variable from the Ceph configuration. EXPECTED_NUMBER_OBJECTS Description The expected number of objects for the pool. Ceph splits the placement groups at pool creation time to avoid the latency impact to perform runtime directory splitting. Type Integer Required No Default 0 , no splitting at the pool creation time. ERASURE_CODE_PROFILE Description For erasure-coded pools only. Use the erasure code profile. It must be an existing profile as defined by the osd erasure-code-profile set variable in the Ceph configuration file. For further information, see the Erasure Code Profiles section. Type String Required No When you create a pool, set the number of placement groups to a reasonable value, for example to 100 . Consider the total number of placement groups per OSD. Placement groups are computationally expensive, so performance degrades when you have many pools with many placement groups, for example, 50 pools with 100 placement groups each. The point of diminishing returns depends upon the power of the OSD host. Additional Resources See the Placement Groups section and Ceph Placement Groups (PGs) per Pool Calculator for details on calculating an appropriate number of placement groups for your pool. 4.4. Setting pool quota You can set pool quotas for the maximum number of bytes and the maximum number of objects per pool. Syntax Example To remove a quota, set its value to 0 . 
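For example, to clear a previously configured quota on a hypothetical pool named data , set both limits back to 0 :
ceph osd pool set-quota data max_objects 0
ceph osd pool set-quota data max_bytes 0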
Note In-flight write operations might overrun pool quotas for a short time until Ceph propagates the pool usage across the cluster. This is normal behavior. Enforcing pool quotas on in-flight write operations would impose significant performance penalties. 4.5. Deleting a pool Delete a pool: Syntax Important To protect data, storage administrators cannot delete pools by default. Set the mon_allow_pool_delete configuration option before deleting pools. If a pool has its own rule, consider removing it after deleting the pool. If a pool has users strictly for its own use, consider deleting those users after deleting the pool. 4.6. Renaming a pool Rename a pool: Syntax If you rename a pool and you have per-pool capabilities for an authenticated user, you must update the user's capabilities, that is caps, with the new pool name. 4.7. Migrating a pool Sometimes it is necessary to migrate all objects from one pool to another. This is done in cases such as needing to change parameters that cannot be modified on a specific pool. For example, needing to reduce the number of placement groups of a pool. Important When a workload is using only Ceph Block Device images, follow the procedures documented for moving and migrating a pool within the Red Hat Ceph Storage Block Device Guide : Moving images between pools Migrating pools The migration methods described for Ceph Block Device are more recommended than those documented here. using the cppool does not preserve all snapshots and snapshot related metadata, resulting in an unfaithful copy of the data. For example, copying an RBD pool does not completely copy the image. In this case, snaps are not present and will not work properly. The cppool also does not preserve the user_version field that some librados users may rely on. If migrating a pool is necessary and your user workloads contain images other than Ceph Block Devices, continue with one of the procedures documented here. Prerequisites If using the rados cppool command: Read-only access to the pool is required. Only use this command if you do not have RBD images and its snaps and user_version consumed by librados. If using the local drive RADOS commands, verify that sufficient cluster space is available. Two, three, or more copies of data will be present as per pool replication factor. Procedure Method one - the recommended direct way Copy all objects with the rados cppool command. Important Read-only access to the pool is required during copy. Syntax Example Method two - using a local drive Use the rados export and rados import commands and a temporary local directory to save all exported data. Syntax Example Required. Stop all I/O to the source pool. Required. Resynchronize all modified objects. Syntax Example 4.8. Viewing pool statistics Show a pool's utilization statistics: Example 4.9. Setting pool values Set a value to a pool: Syntax The Pool Values section lists all key-values pairs that you can set. 4.10. Getting pool values Get a value from a pool: Syntax You can view the list of all key-values pairs that you might get in the Pool Values section. 4.11. Enabling a client application Red Hat Ceph Storage provides additional protection for pools to prevent unauthorized types of clients from writing data to the pool. This means that system administrators must expressly enable pools to receive I/O operations from Ceph Block Device, Ceph Object Gateway, Ceph Filesystem or for a custom application. 
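As a quick illustration before the general syntax below, enabling a hypothetical pool named vms for use by the Ceph Block Device would look like this:
ceph osd pool application enable vms rbd
The full syntax and the valid application names follow.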
Enable a client application to conduct I/O operations on a pool: Syntax Where APP is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device. rgw for the Ceph Object Gateway. Note Specify a different APP value for a custom application. Important A pool that is not enabled will generate a HEALTH_WARN status. In that scenario, the output for ceph health detail -f json-pretty gives the following output: Note Initialize pools for the Ceph Block Device with rbd pool init POOL_NAME . 4.12. Disabling a client application Disable a client application from conducting I/O operations on a pool: Syntax Where APP is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device. rgw for the Ceph Object Gateway. Note Specify a different APP value for a custom application. 4.13. Setting application metadata Provides the functionality to set key-value pairs describing attributes of the client application. Set client application metadata on a pool: Syntax Where APP is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device rgw for the Ceph Object Gateway Note Specify a different APP value for a custom application. 4.14. Removing application metadata Remove client application metadata on a pool: Syntax Where APP is: cephfs for the Ceph Filesystem. rbd for the Ceph Block Device rgw for the Ceph Object Gateway Note Specify a different APP value for a custom application. 4.15. Setting the number of object replicas Set the number of object replicas on a replicated pool: Syntax You can run this command for each pool. Important The NUMBER_OF_REPLICAS parameter includes the object itself. If you want to include the object and two copies of the object for a total of three instances of the object, specify 3 . Example Note An object might accept I/O operations in degraded mode with fewer replicas than specified by the pool size setting. To set a minimum number of required replicas for I/O, use the min_size setting. Example This ensures that no object in the data pool receives an I/O with fewer replicas than specified by the min_size setting. 4.16. Getting the number of object replicas Get the number of object replicas: Example Ceph lists the pools, with the replicated size attribute highlighted. By default, Ceph creates two replicas of an object, that is a total of three copies, or a size of 3 . 4.17. Pool values The following list contains key-values pairs that you can set or get. For further information, see the Set Pool Values and Getting Pool Values sections. Table 4.1. Available pool values Value Description Type Required Default size Specifies the number of replicas for objects in the pool. See the Setting the Number of Object Replicas section for further details. Applicable for the replicated pools only. Integer No None min_size Specifies the minimum number of replicas required for I/O. See the Setting the Number of Object Replicas section for further details. For erasure-coded pools, this should be set to a value greater than k . If I/O is allowed at the value k , then there is no redundancy and data is lost in the event of a permanent OSD failure. For more information, see Erasure code pools overview . Integer No None crash_replay_interval Specifies the number of seconds to allow clients to replay acknowledged, but uncommitted requests. Integer No None pg_num The total number of placement groups for the pool. See the Pool, placement groups, and CRUSH Configuration Reference section in the Red Hat Ceph Storage Configuration Guide for details on calculating a suitable number. 
The default value 8 is not suitable for most systems. Integer Yes 8 pgp-num The total number of placement groups for placement purposes. This should be equal to the total number of placement groups , except for placement group splitting scenarios. Valid range: Equal to or less than what specified by the pg_num variable. Integer Yes. Picks up default or Ceph configuration value if not specified. None crush_rule The rule to use for mapping object placement in the cluster. String Yes None hashpspool Enable or disable the HASHPSPOOL flag on a given pool. With this option enabled, pool hashing and placement group mapping are changed to improve the way pools and placement groups overlap. Valid settings: 1 enables the flag, 0 disables the flag. IMPORTANT: Do not enable this option on production pools of a cluster with a large amount of OSDs and data. All placement groups in the pool would have to be remapped causing too much data movement. Integer No None fast_read On a pool that uses erasure coding, if this flag is enabled, the read request issues subsequent reads to all shards, and waits until it receives enough shards to decode to serve the client. In the case of the jerasure and isa erasure plug-ins, once the first K replies return, the client's request is served immediately using the data decoded from these replies. This helps to allocate some resources for better performance. Currently this flag is only supported for erasure coding pools. Boolean No 0 allow_ec_overwrites Whether writes to an erasure coded pool can update part of an object, so the Ceph Filesystem and Ceph Block Device can use it. Boolean No None compression_algorithm Sets inline compression algorithm to use with the BlueStore storage backend. This setting overrides the bluestore_compression_algorithm configuration setting. Valid settings: lz4 , snappy , zlib , zstd String No None compression_mode Sets the policy for the inline compression algorithm for the BlueStore storage backend. This setting overrides the bluestore_compression_mode configuration setting. Valid settings: none , passive , aggressive , force String No None compression_min_blob_size BlueStore does not compress chunks smaller than this size. This setting overrides the bluestore_compression_min_blob_size configuration setting. Unsigned Integer No None compression_max_blob_size BlueStore breaks chunks larger than this size into smaller blobs of compression_max_blob_size before compressing the data. Unsigned Integer No None nodelete Set or unset the NODELETE flag on a given pool. Valid range: 1 sets flag. 0 unsets flag. Integer No None nopgchange Set or unset the NOPGCHANGE flag on a given pool. Integer No None nosizechange Set or unset the NOSIZECHANGE flag on a given pool. Valid range: 1 sets the flag. 0 unsets the flag. Integer No None write_fadvise_dontneed Set or unset the WRITE_FADVISE_DONTNEED flag on a given pool. Valid range: 1 sets the flag. 0 unsets the flag. Integer No None noscrub Set or unset the NOSCRUB flag on a given pool. Valid range: 1 sets the flag. 0 unsets the flag. Integer No None nodeep-scrub Set or unset the NODEEP_SCRUB flag on a given pool. Valid range: 1 sets the flag. 0 unsets the flag. Integer No None scrub_min_interval The minimum interval in seconds for pool scrubbing when load is low. If it is 0 , Ceph uses the osd_scrub_min_interval configuration setting. Double No 0 scrub_max_interval The maximum interval in seconds for pool scrubbing irrespective of cluster load. 
If it is 0, Ceph uses the osd_scrub_max_interval configuration setting. Double No 0 deep_scrub_interval The interval in seconds for pool 'deep' scrubbing. If it is 0 , Ceph uses the osd_deep_scrub_interval configuration setting. Double No 0 peering_crush_bucket_count The value is used along with peering_crush_bucket_barrier to determined whether the set of OSDs in the chosen acting set can peer with each other, based on the number of distinct buckets there are in the acting set. Integer No None peering crush_bucket_target This value is used along with peering_crush_bucket_barrier and size to calculate the value bucket_max which limits the number of OSDs in the same bucket from getting chose to be in the acting set of a PG. Integer No None peering crush_bucket_barrier The type of bucket a pool is stretched across. For example, rack, row, or datacenter. String No None | [
"ceph osd lspools",
"ceph config set global osd_pool_default_pg_num 250 ceph config set global osd_pool_default_pgp_num 250",
"ceph osd pool create POOL_NAME PG_NUM PGP_NUM [replicated] [ CRUSH_RULE_NAME ] [ EXPECTED_NUMBER_OBJECTS ]",
"ceph osd pool create POOL_NAME PG_NUM PGP_NUM erasure [ ERASURE_CODE_PROFILE ] [ CRUSH_RULE_NAME ] [ EXPECTED_NUMBER_OBJECTS ]",
"ceph osd pool create POOL_NAME [--bulk]",
"ceph osd pool set-quota POOL_NAME [max_objects OBJECT_COUNT ] [max_bytes BYTES ]",
"ceph osd pool set-quota data max_objects 10000",
"ceph osd pool delete POOL_NAME [ POOL_NAME --yes-i-really-really-mean-it]",
"ceph osd pool rename CURRENT_POOL_NAME NEW_POOL_NAME",
"ceph osd pool create NEW_POOL PG_NUM [ <other new pool parameters> ] rados cppool SOURCE_POOL NEW_POOL ceph osd pool rename SOURCE_POOL NEW_SOURCE_POOL_NAME ceph osd pool rename NEW_POOL SOURCE_POOL",
"ceph osd pool create pool1 250 rados cppool pool2 pool1 ceph osd pool rename pool2 pool3 ceph osd pool rename pool1 pool2",
"ceph osd pool create NEW_POOL PG_NUM [ <other new pool parameters> ] rados export --create SOURCE_POOL FILE_PATH rados import FILE_PATH NEW_POOL",
"ceph osd pool create pool1 250 rados export --create pool2 <path of export file> rados import <path of export file> pool1",
"rados export --workers 5 SOURCE_POOL FILE_PATH rados import --workers 5 FILE_PATH NEW_POOL",
"rados export --workers 5 pool2 <path of export file> rados import --workers 5 <path of export file> pool1",
"[ceph: root@host01 /] rados df",
"ceph osd pool set POOL_NAME KEY VALUE",
"ceph osd pool get POOL_NAME KEY",
"ceph osd pool application enable POOL_NAME APP {--yes-i-really-mean-it}",
"{ \"checks\": { \"POOL_APP_NOT_ENABLED\": { \"severity\": \"HEALTH_WARN\", \"summary\": { \"message\": \"application not enabled on 1 pool(s)\" }, \"detail\": [ { \"message\": \"application not enabled on pool '_POOL_NAME_'\" }, { \"message\": \"use 'ceph osd pool application enable _POOL_NAME_ _APP_', where _APP_ is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.\" } ] } }, \"status\": \"HEALTH_WARN\", \"overall_status\": \"HEALTH_WARN\", \"detail\": [ \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\" ] }",
"ceph osd pool application disable POOL_NAME APP {--yes-i-really-mean-it}",
"ceph osd pool application set POOL_NAME APP KEY",
"ceph osd pool application rm POOL_NAME APP KEY",
"ceph osd pool set POOL_NAME size NUMBER_OF_REPLICAS",
"ceph osd pool set data size 3",
"ceph osd pool set data min_size 2",
"ceph osd dump | grep 'replicated size'"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/storage_strategies_guide/pools-overview_strategy |
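As a quick, hedged illustration of applying the pool values described above, the following commands set and then verify BlueStore compression and the noscrub flag; the pool name data is only an example, so substitute your own pool name and confirm the resulting values against your cluster's defaults.

# Set inline compression on the pool (illustrative pool name: data)
ceph osd pool set data compression_algorithm snappy
ceph osd pool set data compression_mode aggressive

# Temporarily disable light scrubbing on the same pool
ceph osd pool set data noscrub 1

# Read a value back to confirm that it was applied
ceph osd pool get data compression_mode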
Chapter 4. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y | Chapter 4. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y This chapter helps you to upgrade between z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached, and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services, including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service, while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. We therefore recommend upgrading RHCS along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first and then upgrade RHCS, or vice versa. See the solution article to learn more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual , use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select the openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows requires approval , click the requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project.
When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. If the verification steps fail, contact Red Hat Support . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/updating_openshift_data_foundation/updating-zstream-odf_rhodf
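If you prefer to confirm the same state from the command line, the following oc commands are a convenience sketch to run as a cluster administrator; they complement, rather than replace, the console verification steps above.

# Check the OpenShift Data Foundation ClusterServiceVersion, its version number, and its phase
oc get csv -n openshift-storage

# Confirm that all pods in the openshift-storage namespace are in the Running or Completed state
oc get pods -n openshift-storage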
4.13. Context-Dependent Path Names | 4.13. Context-Dependent Path Names Context-Dependent Path Names (CDPNs) allow symbolic links to be created that point to variable destination files or directories. The variables are resolved to real files or directories each time an application follows the link. The resolved value of the link depends on the node or user following the link. CDPN variables can be used in any path name, not just with symbolic links. However, the CDPN variable name cannot be combined with other characters to form an actual directory or file name. The CDPN variable must be used alone as one segment of a complete path.
Usage For a normal symbolic link: ln -s Target LinkName. Target specifies an existing file or directory on a file system. LinkName specifies a name to represent the real file or directory on the other end of the link. For a variable symbolic link: ln -s Variable LinkName. Variable specifies a special reserved name from a list of values (refer to Table 4.5, "CDPN Variable Values") to represent one of multiple existing files or directories. This string is not the name of an actual file or directory itself. (The real files or directories must be created in a separate step using names that correlate with the type of variable used.) LinkName specifies a name that will be seen and used by applications and will be followed to get to one of the multiple real files or directories. When LinkName is followed, the destination depends on the type of variable and the node or user doing the following.
Table 4.5. CDPN Variable Values @hostname resolves to a real file or directory named with the hostname string produced by the output of the following command: echo `uname -n` @mach resolves to a real file or directory named with the machine-type string produced by the output of the following command: echo `uname -m` @os resolves to a real file or directory named with the operating-system name string produced by the output of the following command: echo `uname -s` @sys resolves to a real file or directory named with the combined machine-type and OS-release strings produced by the output of the following command: echo `uname -m`_`uname -s` @uid resolves to a real file or directory named with the user ID string produced by the output of the following command: echo `id -u` @gid resolves to a real file or directory named with the group ID string produced by the output of the following command: echo `id -g`
Example In this example, there are three nodes with hostnames n01 , n02 and n03 . Applications on each node use the directory /gfs/log/ , but the administrator wants these directories to be separate for each node. To do this, no actual log directory is created; instead, an @hostname CDPN link is created with the name log . Individual directories /gfs/n01/ , /gfs/n02/ , and /gfs/n03/ are created that will be the actual directories used when each node references /gfs/log/ . | [
"ln -s Target LinkName",
"ln -s Variable LinkName",
"n01# cd /gfs n01# mkdir n01 n02 n03 n01# ln -s @hostname log n01# ls -l /gfs lrwxrwxrwx 1 root root 9 Apr 25 14:04 log -> @hostname/ drwxr-xr-x 2 root root 3864 Apr 25 14:05 n01/ drwxr-xr-x 2 root root 3864 Apr 25 14:06 n02/ drwxr-xr-x 2 root root 3864 Apr 25 14:06 n03/ n01# touch /gfs/log/fileA n02# touch /gfs/log/fileB n03# touch /gfs/log/fileC n01# ls /gfs/log/ fileA n02# ls /gfs/log/ fileB n03# ls /gfs/log/ fileC"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-manage-pathnames |
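As a further sketch building on the variables above (the mount point and user IDs are hypothetical, and this only works on a GFS file system), an @uid link can give each user a private scratch directory under a shared path:

# On a GFS mount point /gfs, pre-create one directory per expected numeric user ID
n01# cd /gfs
n01# mkdir 500 501
n01# ln -s @uid scratch

# A user with UID 500 who accesses /gfs/scratch/ is transparently placed in /gfs/500/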
Chapter 13. Customizing GNOME Desktop Features | Chapter 13. Customizing GNOME Desktop Features This chapter mentions three key desktop features. After reading, you will know how to quickly terminate the X server by default for all users, how to enable the Compose key or how to disable command line access for the users. To make sure the changes you have made take effect, you need to update the dconf utility. The users will experience the difference when they log out and log in again. 13.1. Allowing and Disallowing Online Accounts The GNOME Online Accounts (GOA) are used for setting personal network accounts which are then automatically integrated with the GNOME Desktop and applications. The user can add their online accounts, such as Google, Facebook, Flickr, ownCloud, and others using the Online Accounts application. As a system administrator, you can enable all online accounts; selectively enable a few online accounts; disable all online accounts. Procedure 13.1. Configuring Online Accounts If you do not have the gnome-online-accounts package on your system, install it by running the following command as root: Create a keyfile for the local database in /etc/dconf/db/local.d/ goa , which contains the following configuration: For selectively enabling a few providers only: For disabling all providers: For allowing all available providers: Lock down the settings to prevent users from overriding them. If it does not exist, create a new directory named /etc/dconf/db/local.d/locks/ . Create a new file in /etc/dconf/db/local.d/locks/goa with the following contents: Update the system databases for the changes to take effect: Users must log out and back in again before the system-wide settings take effect. | [
"yum install gnome-online-accounts",
"[org/gnome/online-accounts] whitelisted-providers= ['google', 'facebook']",
"[org/gnome/online-accounts] whitelisted-providers= ['']",
"[org/gnome/online-accounts] whitelisted-providers= ['all']",
"Prevent users from changing values for the following key: /org/gnome/online-accounts",
"dconf update"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/customize-gnome-desktop-features |
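To spot-check the result after running dconf update, you can list the compiled database and read the key back from a user session. The org.gnome.online-accounts GSettings schema name used below is an assumption; verify it on your system with gsettings list-schemas.

# Confirm that dconf compiled the keyfiles into the binary database /etc/dconf/db/local
ls -l /etc/dconf/db/local

# From a user session, read the effective value of the key (schema name assumed)
gsettings get org.gnome.online-accounts whitelisted-providers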
Chapter 6. Deploying the Shared File Systems service with native CephFS | Chapter 6. Deploying the Shared File Systems service with native CephFS CephFS is the highly scalable, open-source, distributed file system component of Red Hat Ceph Storage, a unified distributed storage platform. Ceph Storage implements object, block, and file storage using Reliable Autonomic Distributed Object Store (RADOS). CephFS, which is POSIX compatible, provides file access to a Ceph Storage cluster. The Shared File Systems service (manila) enables users to create shares in CephFS and access them using the native CephFS protocol. The Shared File Systems service manages the life cycle of these shares from within OpenStack. With this release, director can deploy the Shared File Systems service with a native CephFS back end on the overcloud. Important This chapter pertains to the deployment and use of native CephFS to provide a self-service Shared File Systems service in your Red Hat OpenStack Platform (RHOSP) cloud through the native CephFS NAS protocol. This type of deployment requires guest VM access to the Ceph public network and infrastructure. Deploy native CephFS with trusted OpenStack Platform tenants only, because it requires a permissive trust model that is not suitable for general purpose OpenStack Platform deployments. For general purpose OpenStack Platform deployments that use a conventional tenant trust model, you can deploy CephFS through the NFS protocol. 6.1. CephFS with native driver The CephFS native driver combines the OpenStack Shared File Systems service (manila) and Red Hat Ceph Storage. When you use Red Hat OpenStack Platform (RHOSP) director, the Controller nodes host the Ceph daemons, such as the manager, metadata servers (MDS), and monitors (MON), as well as the Shared File Systems services. Compute nodes can host one or more projects. Projects, which were formerly referred to as tenants, are represented in the following graphic by the white boxes. Projects contain user-managed VMs, which are represented by gray boxes with two NICs. To access the ceph and manila daemons, projects connect to the daemons over the public Ceph storage network. On this network, you can access data on the storage nodes provided by the Ceph Object Storage Daemons (OSDs). Instances, or virtual machines (VMs), that are hosted on the project boot with two NICs: one dedicated to the storage provider network and the second to project-owned routers to the external provider network. The storage provider network connects the VMs that run on the projects to the public Ceph storage network. The Ceph public network provides back-end access to the Ceph object storage nodes, metadata servers (MDS), and Controller nodes. Using the native driver, CephFS relies on cooperation with the clients and servers to enforce quotas, guarantee project isolation, and provide security. CephFS with the native driver works well in an environment with trusted end users on a private cloud. This configuration requires software that is running under user control to cooperate and work correctly. 6.2. Native CephFS back-end security The native CephFS back end requires a permissive trust model for Red Hat OpenStack Platform (RHOSP) tenants. This trust model is not appropriate for general purpose OpenStack Platform clouds that deliberately block users from directly accessing the infrastructure behind the services that the OpenStack Platform provides. With native CephFS, user Compute instances connect directly to the Ceph public network where the Ceph service daemons are exposed.
CephFS clients that run on user VMs interact cooperatively with the Ceph service daemons, and they interact directly with RADOS to read and write file data blocks. CephFS quotas, which enforce Shared File Systems (manila) share sizes, are enforced on the client side, such as on VMs that are owned by (RHOSP) users. The client side software on user VMs might not be current, which can leave critical cloud infrastructure vulnerable to malicious or inadvertently harmful software that targets the Ceph service ports. Deploy native CephFS as a back end only in environments in which trusted users keep client-side software up to date. Ensure that no software that can impact the Red Hat Ceph Storage infrastructure runs on your VMs. For a general purpose RHOSP deployment that serves many untrusted users, deploy CephFS-NFS. For more information about using CephFS-NFS, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . Users might not keep client-side software current, and they might fail to exclude harmful software from their VMs, but using CephFS-NFS, they only have access to the public side of an NFS server, not to the Ceph infrastructure itself. NFS does not require the same kind of cooperative client and, in the worst case, an attack from a user VM can damage the NFS gateway without damaging the Ceph Storage infrastructure behind it. You can expose the native CephFS back end to all trusted users, but you must enact the following security measures: Configure the storage network as a provider network. Impose role-based access control (RBAC) policies to secure the Storage provider network. Create a private share type. 6.3. Native CephFS deployment A typical native Ceph file system (CephFS) installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following components: RHOSP Controller nodes that run containerized Ceph metadata server (MDS), Ceph monitor (MON) and Shared File Systems (manila) services. Some of these services can coexist on the same node or they can have one or more dedicated nodes. Ceph Storage cluster with containerized object storage daemons (OSDs) that run on Ceph Storage nodes. An isolated storage network that serves as the Ceph public network on which the clients can communicate with Ceph service daemons. To facilitate this, the storage network is made available as a provider network for users to connect their VMs and mount CephFS shares. Important You cannot use the Shared File Systems service (manila) with the CephFS native driver to serve shares to OpenShift Container Platform through Manila CSI, because Red Hat does not support this type of deployment. For more information, contact Red Hat Support. The Shared File Systems (manila) service provides APIs that allow the tenants to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, manila.share.drivers.cephfs.driver.CephFSDriver , allows the Shared File Systems service to use native CephFS as a back end. You can install native CephFS in an integrated deployment managed by director. When director deploys the Shared File Systems service with a CephFS back end on the overcloud, it automatically creates the required data center storage network. However, you must create the corresponding storage provider network on the overcloud. For more information about network planning, see Overcloud networks in Installing and managing Red Hat OpenStack Platform with director . 
Although you can manually configure the Shared File Systems service by editing the /var/lib/config-data/puppet-generated/manila/etc/manila/manila.conf file for the node, any settings can be overwritten by the Red Hat OpenStack Platform director in future overcloud updates. Red Hat only supports deployments of the Shared File Systems service that are managed by director. 6.4. Requirements You can deploy a native CephFS back end with new or existing Red Hat OpenStack Platform (RHOSP) environments if you meet the following requirements: Use Red Hat OpenStack Platform version 17.0 or later. Configure a new Red Hat Ceph Storage cluster at the same time as the native CephFS back end. For information about how to deploy Ceph Storage, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . Important The RHOSP Shared File Systems service (manila) with the native CephFS back end is supported for use with Red Hat Ceph Storage version 5.2 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions . Install the Shared File Systems service on a Controller node. This is the default behavior. Use only a single instance of a CephFS back end for the Shared File Systems service. 6.5. File shares The Shared File Systems service (manila), Ceph File System (CephFS), and CephFS-NFS manage shares differently. The Shared File Systems service provides shares, where a share is an individual file system namespace and a unit of storage with a defined size. Shared file system storage allows multiple clients to connect, read, and write data to any given share, but you must give each client access to the share through the Shared File Systems service access control APIs before they can connect. CephFS manages a share like a directory with a defined quota and a layout that points to a particular storage pool or namespace. CephFS quotas limit the size of a directory to the size of the share that the Shared File Systems service creates. You control access to native CephFS shares by using Metadata Service (MDS) authentication capabilities. With native CephFS, file shares are provisioned and accessed through the CephFS protocol. Access control is performed with a CephX authentication scheme that uses CephFS usernames. 6.6. Network isolation for native CephFS Native CephFS deployments use the isolated storage network deployed by director as the Red Hat Ceph Storage public network. Clients use this network to communicate with various Ceph Storage infrastructure service daemons. For more information about isolating networks, see Network isolation in Installing and managing Red Hat OpenStack Platform with director . 6.7. Deploying the native CephFS environment When you are ready to deploy the environment, use the openstack overcloud deploy command with the custom environments and roles required to configure the native CephFS back end. The openstack overcloud deploy command has the following options in addition to other required options. Action Option Additional Information Specify the network configuration with network_data.yaml [filename] -n /usr/share/openstack-tripleo-heat-templates/network_data.yaml You can use a custom environment file to override values for the default networks specified in this network data environment file. This is the default network data file that is available when you use isolated networks. 
You can omit this file from the openstack overcloud deploy command for brevity. Deploy the Ceph daemons. -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploy the Ceph metadata server with ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploy the manila service with the native CephFS back end. -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml Environment file The following example shows an openstack overcloud deploy command that includes options to deploy a Ceph cluster, Ceph MDS, the native CephFS back end, and the networks required for the Ceph cluster: For more information about the openstack overcloud deploy command, see Provisioning and deploying your overcloud in Installing and managing Red Hat OpenStack Platform with director . 6.8. Native CephFS back-end environment file The environment file for defining a native CephFS back end, manila-cephfsnative-config.yaml is located in the following path of an undercloud node: /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml . The manila-cephfsnative-config.yaml environment file contains settings relevant to the deployment of the Shared File Systems service. The back end default settings should work for most environments. The example shows the default values that director uses during deployment of the Shared File Systems service: The parameter_defaults header signifies the start of the configuration. Specifically, settings under this header let you override default values set in resource_registry . This includes values set by OS::Tripleo::Services::ManilaBackendCephFs , which sets defaults for a CephFS back end. 1 ManilaCephFSBackendName sets the name of the manila configuration of your CephFS backend. In this case, the default back end name is cephfs . 2 ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false , the driver does not handle the lifecycle. This is the only supported option for CephFS back ends. 3 ManilaCephFSCephFSAuthId defines the Ceph auth ID that the director creates for the manila service to access the Ceph cluster. 4 ManilaCephFSCephFSEnableSnapshots controls snapshot activation. Snapshots are supported With Ceph Storage 4.1 and later, but the value of this parameter defaults to false . You can set the value to true to ensure that the driver reports the snapshot_support capability to the manila scheduler. 5 ManilaCephFSCephVolumeMode controls the UNIX permissions to set against the manila share created on the native CephFS back end. The value defaults to 755 . 6 ManilaCephFSCephFSProtocolHelperType must be set to CEPHFS to use the native CephFS driver. For more information about environment files, see Environment Files in the Installing and managing Red Hat OpenStack Platform with director guide. | [
"[stack@undercloud ~]USD openstack overcloud deploy -n /usr/share/openstack-tripleo-heat-templates/network_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml",
"[stack@undercloud ~]USD cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml A Heat environment file which can be used to enable a a Manila CephFS Native driver backend. resource_registry: OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml # Only manila-share is pacemaker managed: OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml parameter_defaults: ManilaCephFSBackendName: cephfs 1 ManilaCephFSDriverHandlesShareServers: false 2 ManilaCephFSCephFSAuthId: 'manila' 3 ManilaCephFSCephFSEnableSnapshots: true 4 ManilaCephFSCephVolumeMode: '0755' 5 # manila cephfs driver supports either native cephfs backend - 'CEPHFS' # (users mount shares directly from ceph cluster), or nfs-ganesha backend - # 'NFS' (users mount shares through nfs-ganesha server) ManilaCephFSCephFSProtocolHelperType: 'CEPHFS' 6"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_deploying-the-shared-file-systems-service-with-native-cephfs_deployingcontainerizedrhcs |
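As a hedged end-to-end sketch of consuming the back end after deployment (the share type and share names, the size, and the cephx user alice are illustrative; run the commands from an authenticated OpenStack client environment):

# Create a share type matching driver_handles_share_servers=false, as required by the CephFS driver
manila type-create default false

# Create a 1 GB share that uses the native CephFS protocol
manila create --share-type default --name share-01 cephfs 1

# Grant access to a cephx identity; the client mounts the share with the keyring for that identity
manila access-allow share-01 cephx alice

# Show the export location to pass to the kernel or ceph-fuse CephFS client
manila share-export-location-list share-01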
Chapter 3. Creating an IBM Power Virtual Server workspace | Chapter 3. Creating an IBM Power Virtual Server workspace 3.1. Creating an IBM Power Virtual Server workspace Use the following procedure to create an IBM Power(R) Virtual Server workspace. Procedure To create an IBM Power(R) Virtual Server workspace, complete steps 1 through 5 from the IBM Cloud(R) documentation for Creating an IBM Power(R) Virtual Server . After it has finished provisioning, retrieve the 32-character alphanumeric Globally Unique Identifier (GUID) of your new workspace by entering the following command: $ ibmcloud resource service-instance <workspace name> 3.2. Next steps Installing a cluster on IBM Power(R) Virtual Server with customizations | [
"ibmcloud resource service-instance <workspace name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_power_virtual_server/creating-ibm-power-vs-workspace |
Chapter 2. An overview of OpenShift Data Foundation architecture | Chapter 2. An overview of OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation provides services for, and can run internally from Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on Installer Provisioned Infrastructure or User Provisioned Infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for the Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see the interoperability matrix . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/red_hat_openshift_data_foundation_architecture/an-overview-of-openshift-data-foundation-architecture_rhodf |
18.3.3. iptables Parameter Options | 18.3.3. iptables Parameter Options Once certain iptables commands are specified, including those used to add, append, delete, insert, or replace rules within a particular chain, parameters are required to construct a packet filtering rule. -c - Resets the counters for a particular rule. This parameter accepts the PKTS and BYTES options to specify what counter to reset. -d - Sets the destination hostname, IP address, or network of a packet that matches the rule. When matching a network, the following IP address/netmask formats are supported: N.N.N.N / M.M.M.M - Where N.N.N.N is the IP address range and M.M.M.M is the netmask. N.N.N.N / M - Where N.N.N.N is the IP address range and M is the bitmask. -f - Applies this rule only to fragmented packets. By using the exclamation point character ( ! ) option after this parameter, only unfragmented packets are matched. -i - Sets the incoming network interface, such as eth0 or ppp0 . With iptables , this optional parameter may only be used with the INPUT and FORWARD chains when used with the filter table and the PREROUTING chain with the nat and mangle tables. This parameter also supports the following special options: Exclamation point character ( ! ) - Reverses the directive, meaning any specified interfaces are excluded from this rule. Plus character ( + ) - A wildcard character used to match all interfaces that match the specified string. For example, the parameter -i eth+ would apply this rule to any Ethernet interfaces but exclude any other interfaces, such as ppp0 . If the -i parameter is used but no interface is specified, then every interface is affected by the rule. -j - Jumps to the specified target when a packet matches a particular rule. Valid targets to use after the -j option include standard options ( ACCEPT , DROP , QUEUE , and RETURN ) as well as extended options that are available through modules loaded by default with the Red Hat Enterprise Linux iptables RPM package, such as LOG , MARK , and REJECT , among others. Refer to the iptables man page for more information about these and other targets. It is also possible to direct a packet matching this rule to a user-defined chain outside of the current chain so that other rules can be applied to the packet. If no target is specified, the packet moves past the rule with no action taken. However, the counter for this rule increases by one. -o - Sets the outgoing network interface for a rule and may only be used with OUTPUT and FORWARD chains in the filter table, and the POSTROUTING chain in the nat and mangle tables. This parameter's options are the same as those of the incoming network interface parameter ( -i ). -p - Sets the IP protocol for the rule, which can be either icmp , tcp , udp , or all , to match every supported protocol. In addition, any protocols listed in /etc/protocols may also be used. If this option is omitted when creating a rule, the all option is the default. -s - Sets the source for a particular packet using the same syntax as the destination ( -d ) parameter. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-iptables-options-parameters |
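To tie these parameters together, the following illustrative rules (the interface, addresses, and placement in the INPUT chain are placeholders for your own values) combine several of the parameters described above with the append ( -A ) command:

# Accept TCP traffic arriving on eth0 from the 192.168.1.0/24 network
iptables -A INPUT -i eth0 -p tcp -s 192.168.1.0/24 -j ACCEPT

# Log fragmented packets addressed to 10.0.0.1 so that later rules or the chain policy can act on them
iptables -A INPUT -f -d 10.0.0.1 -j LOG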
Preface | Preface If you have data stored in an S3-compatible object store such as Ceph, MinIO, or IBM Cloud Object Storage, you can access the data from your workbench. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/pr01 |
Chapter 1. Content patching overview | Chapter 1. Content patching overview Patching leverages Red Hat software and management automation expertise to enable consistent patch workflows for Red Hat Enterprise Linux (RHEL) systems across the open hybrid cloud. It provides a single canonical view of applicable advisories across all of your deployments, whether they be Red Hat Satellite, hosted Red Hat Subscription Management (RHSM), or the public cloud. Use content patching in Insights to see all of the applicable Red Hat and Extra Packages for Enterprise Linux (EPEL) advisories for your RHEL systems checking into Insights. patch any system with one or more advisories by using remediation playbooks. see package updates available for Red Hat and non-Red Hat repositories as of the last system checkin. Your host must be running Red Hat Enterprise Linux (RHEL) 7, RHEL 8.6+ or RHEL 9 and it must maintain a fresh yum/dnf cache. Note Configure role-based access control (RBAC) in Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Users . See User Access Configuration Guide for Role-based Access Control (RBAC) for more information about this feature and example use cases. 1.1. Criteria for patch and vulnerability errata The content patching function collects a variety of data to create meaningful and actionable errata for your systems. The Insights client collects the following data on each checkin: List of installed packages, including name, epoch, version, release, and architecture (NEVRA) List of enabled modules (RHEL 8 and later) List of enabled repositories Output of yum updateinfo -C or dnf updateinfo -C Release version from systems with a version lock System architecture (eg. x86_64 ) Additionally, Insights for Red Hat Enterprise Linux collects metadata from the following data sources: Metadata from product repositories delivered by the Red Hat Content Delivery Network (CDN) Metadata from Extra Packages for Enterprise Linux (EPEL) repositories Red Hat Open Vulnerability and Assessment Language (OVAL) feed Insights for Red Hat Enterprise Linux compares the set of system data to the collected errata and vulnerability metadata in order to generate a set of available updates for each system. These updates include package updates, Red Hat errata, and Common Vulnerabilities and Exposures (CVEs). Additional resources For more information about Common Vulnerabilities and Exposures (CVEs), refer to the following resources: Assessing and Monitoring Security Vulnerabilities on RHEL Systems Security > Vulnerability > CVEs 1.2. Reviewing and filtering applicable advisories and systems in the inventory You can see all of the applicable advisories and installed packages for systems checking into Red Hat Insights for Red Hat Enterprise Linux. Procedure On Red Hat Hybrid Cloud Console , navigate to Content > Advisories . You can also search for advisories by name using the search box, and filter advisories by: Type - Security, Bugfix, Enhancement, Unknown Publish date - Last 7 days, 30 days, 90 days, Last year, or More than 1 year ago Navigate to Content > Systems to see a list of affected systems you can patch with applicable advisories. You can also search for specific systems using the search box. Navigate to Content > Packages to see a list of packages with updates available in your environment. You can also search for specific packages using the search box. 1.3. 
System patching using Insights remediation playbooks The following steps demonstrate the patching workflow from the Content > Advisories page in Red Hat Insights for Red Hat Enterprise Linux: Procedure On Red Hat Hybrid Cloud Console , navigate to Content > Advisories . Click the advisory you want to apply to affected systems. You will see a description of the advisory, a link to view packages and errata at access.redhat.com, and a list of affected systems. The total number of applicable advisories of each type (Security, Bugfix, Enhancement) against each system is also displayed. As a bulk operation, you can click the options menu located next to a system, then click Apply all applicable advisories to patch the system with all applicable advisories at once. Alternatively, select the system(s) you want to patch with this particular advisory, then click Remediate . On the Remediate with Ansible page, you can choose to modify an existing Playbook or create a new one to remediate with Ansible. Accordingly, select Existing Playbook and the playbook name from the drop-down list, then click Next . Or, select Create new Playbook and enter a name for your playbook, then click Next . You will then see a summary of the action and resolution. Your system will auto reboot by default. If you want to disable this functionality, click the blue link that states "turn off auto reboot." Click Submit . On the left navigation, click Remediations . Click the playbook name to see the playbook details, or simply select it and click Download playbook . The following steps demonstrate the patching workflow from the Content > Systems page: Click the Systems tab to see a list of affected systems. As a bulk operation, you can click the options menu located next to a system, then click Apply all applicable advisories to patch the system with all applicable advisories at once. Alternatively, click the system you want to patch. You will see the system details and a list of applicable advisories for remediation, along with additional details such as the advisory publish date, type, and synopsis. Select the advisories you want to apply to the system, then click Remediate . On the Remediate with Ansible page, you can either modify an existing Playbook or create a new one to remediate with Ansible. Accordingly, click Existing Playbook and select the playbook name from the drop-down list, then click Next . Or, click Create new Playbook , enter a name for your playbook, then click Next . You will then see a summary of the action and resolution. Your system will auto reboot by default. If you want to disable this functionality, click the blue link that states "turn off auto reboot." Click Submit . On the left navigation, click Remediations . Click the playbook name to see the playbook details, or simply select it and click Download playbook . Important Review and test any recommended actions and the playbook, and if you deem appropriate, deploy on your systems running Red Hat software. Red Hat is not responsible for any adverse outcomes related to these recommendations or Playbooks. 1.4. Updating errata for systems managed by Red Hat Satellite Insights for Red Hat Enterprise Linux calculates applicable updates based on the packages, repositories, and modules that a system reports when it checks in. Insights combines these results with a client-side evaluation, and stores the resulting superset of updates as applicable updates.
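For reference, the client-side evaluation mentioned above corresponds to querying the system's local metadata cache, which you can reproduce on a registered host (as root, without refreshing the cache):

# Summary of advisories, read from the cached metadata only (matches what the client reports)
dnf updateinfo -C

# Detailed per-advisory listing from the same cached metadata
dnf updateinfo -C list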
A system check-in to Red Hat Insights includes the following content-related data: Installed packages Enabled repositories Enabled modules List of updates, which the client determines using the dnf updateinfo -C command. This command primarily captures package updates for non-Red Hat repositories Insights uses this collection of data to calculate applicable updates for the system. Sometimes Insights calculates applicable updates for systems managed by Red Hat Satellite and reports inaccurate results. This issue can manifest in two ways: Insights shows installable updates that cannot be installed on the Satellite-managed system. Insights shows applicable updates that match what can be installed on the system immediately after patching, but shows outdated or missing updates a day or two later. This can occur when the system is subscribed to RHEL repositories that have been renamed. Insights now provides an optional check-in command to provide accurate reporting for applicable updates on Satellite-managed systems. This option rebuilds the yum/dnf package caches and creates a refreshed list of applicable updates for the system. Note Satellite-managed systems are not eligible to have Red Hat Insights content templates applied. Prerequisites Admin-level access to the system Procedure To rebuild the package caches from the command line, enter the following command: The command regenerates the dnf/yum caches and collects the relevant installable errata from Satellite. insights-client then generates a refreshed list of updates and sends it to Insights. Note The generated list of updates is equivalent to the output from the command dnf updateinfo list . 1.4.1. Configuring automatic check-in for insights-client You can edit the insights-client configuration file on your system ( /etc/insights-client/insights-client.conf ) to rebuild the package caches automatically each time the system checks in to Insights. Procedure Open the /etc/insights-client/insights-client.conf file in a text editor. Look in the file for the following comment: Add the following line after the comment: Save your edits and exit the editor. When the system checks in to Satellite, insights-client executes a yum/dnf cache refresh before collecting the output of the client-side evaluation. Insights then reports the client-side evaluation output as installable updates. The evaluation output, based on what has been published to the CDN, is reported as applicable updates. Additional resources For more information about the --build-packagecache options, see the following KCS article: https://access.redhat.com/solutions/7041171 For more information about managing errata in Red Hat Satellite, see https://access.redhat.com/documentation/en-us/red_hat_satellite/6.15/html/managing_content/managing_errata_content-management . 1.5. Enabling notifications and integrations You can enable the notifications service on Red Hat Hybrid Cloud Console to send notifications whenever the patch service detects an issue and generates an advisory. Using the notifications service frees you from having to continually check the Red Hat Insights for Red Hat Enterprise Linux dashboard for advisories. For example, you can configure the notifications service to automatically send an email message whenever the patch service generates an advisory. Enabling the notifications service requires three main steps: First, an Organization Administrator creates a User Access group with the Notifications-administrator role, and then adds account members to the group. 
Next, a Notifications administrator sets up behavior groups for events in the notifications service. Behavior groups specify the delivery method for each notification. For example, a behavior group can specify whether email notifications are sent to all users, or just to Organization Administrators. Finally, users who receive email notifications from events must set their user preferences so that they receive individual emails for each event. In addition to sending email messages, you can configure the notifications service to send event data in other ways: Using an authenticated client to query Red Hat Insights APIs for event data Using webhooks to send events to third-party applications that accept inbound requests Integrating notifications with applications such as Splunk to route patch advisories to the application dashboard Additional resources For more information about how to set up notifications for patch advisories, see Configuring notifications on the Red Hat Hybrid Cloud Console . | [
"insights-client --build-packagecache",
"#Set build_packagecache=True to refresh the yum/dnf cache during the insights-client check-in",
"build_packagecache=True"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/managing_system_content_and_patch_updates_with_red_hat_insights/patch-service-overview |
Chapter 2. OpenShift Container Platform overview | Chapter 2. OpenShift Container Platform overview OpenShift Container Platform is a cloud-based Kubernetes container platform. The foundation of OpenShift Container Platform is based on Kubernetes and therefore shares the same technology. It is designed to allow applications and the data centers that support them to expand from just a few machines and applications to thousands of machines that serve millions of clients. OpenShift Container Platform enables you to do the following: Provide developers and IT organizations with cloud application platforms that can be used for deploying applications on secure and scalable resources. Require minimal configuration and management overhead. Bring the Kubernetes platform to customer data centers and cloud. Meet security, privacy, compliance, and governance requirements. With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. Its implementation in open Red Hat technologies lets you extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments. 2.1. Glossary of common terms for OpenShift Container Platform This glossary defines common Kubernetes and OpenShift Container Platform terms. Kubernetes Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. Containers Containers are application instances and components that run in OCI-compliant containers on the worker nodes. A container is the runtime of an Open Container Initiative (OCI)-compliant image. An image is a binary application. A worker node can run many containers. A node capacity is related to memory and CPU capabilities of the underlying resources whether they are cloud, hardware, or virtualized. Pod A pod is one or more containers deployed together on one host. It consists of a colocated group of containers with shared resources such as volumes and IP addresses. A pod is also the smallest compute unit defined, deployed, and managed. In OpenShift Container Platform, pods replace individual application containers as the smallest deployable unit. Pods are the orchestrated unit in OpenShift Container Platform. OpenShift Container Platform schedules and runs all containers in a pod on the same node. Complex applications are made up of many pods, each with their own containers. They interact externally and also with another inside the OpenShift Container Platform environment. Replica set and replication controller The Kubernetes replica set and the OpenShift Container Platform replication controller are both available. The job of this component is to ensure the specified number of pod replicas are running at all times. If pods exit or are deleted, the replica set or replication controller starts more. If more pods are running than needed, the replica set deletes as many as necessary to match the specified number of replicas. Deployment and DeploymentConfig OpenShift Container Platform implements both Kubernetes Deployment objects and OpenShift Container Platform DeploymentConfigs objects. Users may select either. Deployment objects control how an application is rolled out as pods. They identify the name of the container image to be taken from the registry and deployed as a pod on a node. 
They set the number of replicas of the pod to deploy, creating a replica set to manage the process. The labels indicated instruct the scheduler onto which nodes to deploy the pod. The set of labels is included in the pod definition that the replica set instantiates. Deployment objects are able to update the pods deployed onto the worker nodes based on the version of the Deployment objects and the various rollout strategies for managing acceptable application availability. OpenShift Container Platform DeploymentConfig objects add the additional features of change triggers, which are able to automatically create new versions of the Deployment objects as new versions of the container image are available, or other changes. Service A service defines a logical set of pods and access policies. It provides permanent internal IP addresses and hostnames for other applications to use as pods are created and destroyed. Service layers connect application components together. For example, a front-end web service connects to a database instance by communicating with its service. Services allow for simple internal load balancing across application components. OpenShift Container Platform automatically injects service information into running containers for ease of discovery. Route A route is a way to expose a service by giving it an externally reachable hostname, such as www.example.com. Each route consists of a route name, a service selector, and optionally a security configuration. A router can consume a defined route and the endpoints identified by its service to provide a name that lets external clients reach your applications. While it is easy to deploy a complete multi-tier application, traffic from anywhere outside the OpenShift Container Platform environment cannot reach the application without the routing layer. Build A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. OpenShift Container Platform leverages Kubernetes by creating containers from build images and pushing them to the integrated registry. Project OpenShift Container Platform uses projects to allow groups of users or developers to work together, serving as the unit of isolation and collaboration. It defines the scope of resources, allows project administrators and collaborators to manage resources, and restricts and tracks the user's resources with quotas and limits. A project is a Kubernetes namespace with additional annotations. It is the central vehicle for managing access to resources for regular users. A project lets a community of users organize and manage their content in isolation from other communities. Users must receive access to projects from administrators. But cluster administrators can allow developers to create their own projects, in which case users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Projects are also known as namespaces. Operators An Operator is a Kubernetes-native application. The goal of an Operator is to put operational knowledge into software. Previously this knowledge only resided in the minds of administrators, various combinations or shell scripts or automation software such as Ansible. It was outside your Kubernetes cluster and hard to integrate. With Operators, all of this changes. 
Operators are purpose-built for your applications. They implement and automate common Day 1 activities such as installation and configuration as well as Day 2 activities such as scaling up and down, reconfiguration, updates, backups, fail overs, and restores in a piece of software running inside your Kubernetes cluster by integrating natively with Kubernetes concepts and APIs. This is called a Kubernetes-native application. With Operators, applications must not be treated as a collection of primitives, such as pods, deployments, services, or config maps. Instead, Operators should be treated as a single object that exposes the options that make sense for the application. 2.2. Understanding OpenShift Container Platform OpenShift Container Platform is a Kubernetes environment for managing the lifecycle of container-based applications and their dependencies on various computing platforms, such as bare metal, virtualized, on-premise, and in cloud. OpenShift Container Platform deploys, configures and manages containers. OpenShift Container Platform offers usability, stability, and customization of its components. OpenShift Container Platform utilises a number of computing resources, known as nodes. A node has a lightweight, secure operating system based on Red Hat Enterprise Linux (RHEL), known as Red Hat Enterprise Linux CoreOS (RHCOS). After a node is booted and configured, it obtains a container runtime, such as CRI-O or Docker, for managing and running the images of container workloads scheduled to it. The Kubernetes agent, or kubelet schedules container workloads on the node. The kubelet is responsible for registering the node with the cluster and receiving the details of container workloads. OpenShift Container Platform configures and manages the networking, load balancing and routing of the cluster. OpenShift Container Platform adds cluster services for monitoring the cluster health and performance, logging, and for managing upgrades. The container image registry and OperatorHub provide Red Hat certified products and community built softwares for providing various application services within the cluster. These applications and services manage the applications deployed in the cluster, databases, frontends and user interfaces, application runtimes and business automation, and developer services for development and testing of container applications. You can manage applications within the cluster either manually by configuring deployments of containers running from pre-built images or through resources known as Operators. You can build custom images from pre-build images and source code, and store these custom images locally in an internal, private or public registry. The Multicluster Management layer can manage multiple clusters including their deployment, configuration, compliance and distribution of workloads in a single console. 2.3. Installing OpenShift Container Platform The OpenShift Container Platform installation program offers you flexibility. You can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains or deploy a cluster on infrastructure that you prepare and maintain. 
For more information about the installation process, the supported platforms, and choosing a method of installing and preparing your cluster, see the following: OpenShift Container Platform installation overview Installation process Supported platforms for OpenShift Container Platform clusters Selecting a cluster installation type 2.3.1. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 2.4. Steps 2.4.1. For developers Develop and deploy containerized applications with OpenShift Container Platform. OpenShift Container Platform is a platform for developing and deploying containerized applications. OpenShift Container Platform documentation helps you: Understand OpenShift Container Platform development : Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators. Work with projects : Create projects from the OpenShift Container Platform web console or OpenShift CLI ( oc ) to organize and share the software you develop. Work with applications : Use the Developer perspective in the OpenShift Container Platform web console to create and deploy applications . Use the Topology view to see your applications, monitor status, connect and group components, and modify your code base. Use the developer CLI tool ( odo ) : The odo CLI tool lets developers create single or multi-component applications and automates deployment, build, and service route configurations. It abstracts complex Kubernetes and OpenShift Container Platform concepts, allowing you to focus on developing your applications. Create CI/CD Pipelines : Pipelines are serverless, cloud-native, continuous integration, and continuous deployment systems that run in isolated containers. They use standard Tekton custom resources to automate deployments and are designed for decentralized teams working on microservices-based architecture. Deploy Helm charts : Helm 3 is a package manager that helps developers define, install, and update application packages on Kubernetes. A Helm chart is a packaging format that describes an application that can be deployed using the Helm CLI. Understand image builds : Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials (Git repositories, local binary inputs, and external artifacts). Then, follow examples of build types from basic builds to advanced builds. Create container images : A container image is the most basic building block in OpenShift Container Platform (and Kubernetes) applications. Defining image streams lets you gather multiple versions of an image in one place as you continue its development. 
S2I containers let you insert your source code into a base container that is set up to run code of a particular type, such as Ruby, Node.js, or Python. Create deployments : Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Manage deployments using the Workloads page or OpenShift CLI ( oc ). Learn rolling, recreate, and custom deployment strategies. Create templates : Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports, and other content that defines how an application can be run or built. Understand Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.14. Learn about the Operator Framework and how to deploy applications using installed Operators into your projects. Develop Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.14. Learn the workflow for building, testing, and deploying Operators. Then, create your own Operators based on Ansible or Helm , or configure built-in Prometheus monitoring using the Operator SDK. REST API reference : Learn about OpenShift Container Platform application programming interface endpoints. 2.4.2. For administrators Understand OpenShift Container Platform management : Learn about components of the OpenShift Container Platform 4.14 control plane. See how OpenShift Container Platform control plane and worker nodes are managed and updated through the Machine API and Operators . Manage users and groups : Add users and groups with different levels of permissions to use or modify clusters. Manage authentication : Learn how user, group, and API authentication works in OpenShift Container Platform. OpenShift Container Platform supports multiple identity providers. Manage networking : The cluster network in OpenShift Container Platform is managed by the Cluster Network Operator (CNO). The CNO uses iptables rules in kube-proxy to direct traffic between nodes and pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. Using network policy features, you can isolate your pods or permit selected traffic. Manage storage : OpenShift Container Platform allows cluster administrators to configure persistent storage. Manage Operators : Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters . After you install them, you can run , upgrade , back up, or otherwise manage the Operator on your cluster. Use custom resource definitions (CRDs) to modify the cluster : Cluster features implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs . Set resource quotas : Choose from CPU, memory, and other system resources to set quotas . Prune and reclaim resources : Reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs. Scale and tune clusters : Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment. Using the OpenShift Update Service in a disconnected environment : Learn about installing and managing a local OpenShift Update Service for recommending OpenShift Container Platform updates in disconnected environments.
Monitor clusters : Learn to configure the monitoring stack . After configuring monitoring, use the web console to access monitoring dashboards . In addition to infrastructure metrics, you can also scrape and view metrics for your own services. Remote health monitoring : OpenShift Container Platform collects anonymized aggregated information about your cluster. Using Telemetry and the Insights Operator, this data is received by Red Hat and used to improve OpenShift Container Platform. You can view the data collected by remote health monitoring .
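As a concrete illustration of the monitoring configuration mentioned above, cluster monitoring is typically tuned through a config map in the openshift-monitoring namespace. The retention period and storage size shown here are illustrative assumptions only; this is a minimal sketch rather than a complete or recommended configuration. Example cluster-monitoring-config config map

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 15d                  # assumed metrics retention period
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 40Gi           # assumed persistent storage request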
7.132. nc 7.132.1. RHEA-2014:1968 - nc bug fix update Updated nc packages that add two enhancements are now available for Red Hat Enterprise Linux 6. The nc packages contain the nc (or netcat) utility for reading and writing data across network connections by using the TCP and UDP protocols. Netcat can also be used as a feature-rich network debugging and exploration tool, because it can create many different kinds of connections and has numerous built-in capabilities. Enhancements BZ# 1000773 With this update, the netcat utility can handle HTTP/1.1 proxy responses, which certain proxies send in response to HTTP/1.0 requests. BZ# 1064755 This update improves the phrasing of comments that contained profanities in certain sections of the scripts provided by the netcat utility. Users of nc are advised to upgrade to these updated packages, which add these enhancements.
Chapter 2. Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift You install Red Hat OpenStack Services on OpenShift (RHOSO) on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. To prepare for installing and deploying your RHOSO environment, you must configure the RHOCP worker nodes and the RHOCP networks on your RHOCP cluster. 2.1. Configuring Red Hat OpenShift Container Platform nodes for a Red Hat OpenStack Platform deployment Red Hat OpenStack Services on OpenShift (RHOSO) services run on Red Hat OpenShift Container Platform (RHOCP) worker nodes. By default, the OpenStack Operator deploys RHOSO services on any worker node. You can use node labels in your OpenStackControlPlane custom resource (CR) to specify which RHOCP nodes host the RHOSO services. By pinning some services to specific infrastructure nodes rather than running the services on all of your RHOCP worker nodes, you optimize the performance of your deployment. You can create labels for the RHOCP nodes, or you can use the existing labels, and then specify those labels in the OpenStackControlPlane CR by using the nodeSelector field. For example, the Block Storage service (cinder) has different requirements for each of its services: The cinder-scheduler service is a very light service with low memory, disk, network, and CPU usage. The cinder-api service has high network usage due to resource listing requests. The cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration and creating a volume from an image. The cinder-backup service has high memory, network, and CPU requirements. Therefore, you can pin the cinder-api , cinder-volume , and cinder-backup services to dedicated nodes and let the OpenStack Operator place the cinder-scheduler service on a node that has capacity. Additional resources Placing pods on specific nodes using node selectors Machine configuration overview Node Feature Discovery Operator 2.2. Creating a storage class You must create a storage class for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end to provide persistent volumes to Red Hat OpenStack Services on OpenShift (RHOSO) pods. Use the Logical Volume Manager (LVM) Storage storage class with RHOSO. You specify this storage class as the cluster storage back end for the RHOSO deployment. Use a storage back end based on SSD or NVMe drives for the storage class. If you are using LVM, you must wait until the LVM Storage Operator announces that the storage is available before creating the control plane. The LVM Storage Operator announces that the cluster and LVMS storage configuration is complete through the annotation for the volume group on the worker node object. If you deploy pods before all the control plane nodes are ready, then multiple PVCs and pods are scheduled on the same nodes. To check that the storage is ready, you can query the nodes in your lvmclusters.lvm.topolvm.io object. For example, run the following command if you have three worker nodes and your device class for the LVM Storage Operator is named "local-storage":

oc get node -l "topology.topolvm.io/node in ($(oc get nodes -l node-role.kubernetes.io/worker -o name | cut -d '/' -f 2 | tr '\n' ',' | sed 's/.\{1\}$//'))" -o=jsonpath='{.items[*].metadata.annotations.capacity\.topolvm\.io/local-storage}' | tr ' ' '\n'

The storage is ready when this command returns three non-zero values. For more information about how to configure the LVM Storage storage class, see Persistent storage using Logical Volume Manager Storage in the RHOCP Storage guide.
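As a sketch of the node pinning described in section 2.1, the following hypothetical OpenStackControlPlane fragment assigns the Block Storage services to nodes that carry a type: cinder label. The label name, the back end name, and the exact nesting of the template fields are illustrative assumptions, not an authoritative schema; check the OpenStackControlPlane CRD in your environment for the exact field paths. Example (illustrative) node selector placement

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane      # assumed name
spec:
  cinder:
    template:
      cinderAPI:
        nodeSelector:
          type: cinder               # assumed node label created by the administrator
      cinderBackup:
        nodeSelector:
          type: cinder
      cinderVolumes:
        volume1:                     # assumed back end name
          nodeSelector:
            type: cinder

With a layout like this, the cinder-api, cinder-backup, and cinder-volume pods run only on the labeled nodes, while cinder-scheduler, which has no selector, can be placed by the OpenStack Operator on any worker node with capacity.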
2.3. Creating the openstack namespace You must create a namespace within your Red Hat OpenShift Container Platform (RHOCP) environment for the service pods of your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The service pods of each RHOSO deployment exist in their own namespace within the RHOCP environment. Prerequisites You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. Procedure Create the openstack project for the deployed RHOSO environment:

oc new-project openstack

Ensure the openstack namespace is labeled to enable privileged pod creation by the OpenStack Operators:

oc get namespace openstack -ojsonpath='{.metadata.labels}' | jq

Example output

{
  "kubernetes.io/metadata.name": "openstack",
  "pod-security.kubernetes.io/enforce": "privileged",
  "security.openshift.io/scc.podSecurityLabelSync": "false"
}

If the security context constraint (SCC) is not "privileged", use the following commands to change it:

oc label ns openstack security.openshift.io/scc.podSecurityLabelSync=false --overwrite
oc label ns openstack pod-security.kubernetes.io/enforce=privileged --overwrite

Optional: To remove the need to specify the namespace when executing commands on the openstack namespace, set the default namespace to openstack :

oc project openstack

2.4. Providing secure access to the Red Hat OpenStack Services on OpenShift services You must create a Secret custom resource (CR) to provide secure access to the Red Hat OpenStack Services on OpenShift (RHOSO) service pods. Warning You cannot change a service password once the control plane is deployed. If a service password is changed in osp-secret after deploying the control plane, the service is reconfigured to use the new password but the password is not updated in the Identity service (keystone). This results in a service outage. Procedure Create a Secret CR file on your workstation, for example, openstack_service_secret.yaml . Add the following initial configuration to openstack_service_secret.yaml :

apiVersion: v1
data:
  AdminPassword: <base64_password>
  AodhPassword: <base64_password>
  AodhDatabasePassword: <base64_password>
  BarbicanDatabasePassword: <base64_password>
  BarbicanPassword: <base64_password>
  BarbicanSimpleCryptoKEK: <base64_fernet_key>
  CeilometerPassword: <base64_password>
  CinderDatabasePassword: <base64_password>
  CinderPassword: <base64_password>
  DatabasePassword: <base64_password>
  DbRootPassword: <base64_password>
  DesignateDatabasePassword: <base64_password>
  DesignatePassword: <base64_password>
  GlanceDatabasePassword: <base64_password>
  GlancePassword: <base64_password>
  HeatAuthEncryptionKey: <base64_password>
  HeatDatabasePassword: <base64_password>
  HeatPassword: <base64_password>
  IronicDatabasePassword: <base64_password>
  IronicInspectorDatabasePassword: <base64_password>
  IronicInspectorPassword: <base64_password>
  IronicPassword: <base64_password>
  KeystoneDatabasePassword: <base64_password>
  ManilaDatabasePassword: <base64_password>
  ManilaPassword: <base64_password>
  MetadataSecret: <base64_password>
  NeutronDatabasePassword: <base64_password>
  NeutronPassword: <base64_password>
  NovaAPIDatabasePassword: <base64_password>
  NovaAPIMessageBusPassword: <base64_password>
  NovaCell0DatabasePassword: <base64_password>
  NovaCell0MessageBusPassword: <base64_password>
  NovaCell1DatabasePassword: <base64_password>
  NovaCell1MessageBusPassword: <base64_password>
  NovaPassword: <base64_password>
  OctaviaDatabasePassword: <base64_password>
  OctaviaPassword: <base64_password>
  PlacementDatabasePassword: <base64_password>
  PlacementPassword: <base64_password>
  SwiftPassword: <base64_password>
kind: Secret
metadata:
  name: osp-secret
  namespace: openstack
type: Opaque

Replace <base64_password> with a 32-character key that is base64 encoded. You can use the following command to manually generate a base64 encoded password:

echo -n <password> | base64

Alternatively, if you are using a Linux workstation and you are generating the Secret CR definition file by using a Bash command such as cat , you can replace <base64_password> with the following command to auto-generate random passwords for each service:

$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 | base64)

Replace <base64_fernet_key> with a fernet key that is base64 encoded. You can use the following command to manually generate the fernet key:

python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode('UTF-8'))" | base64

Note The HeatAuthEncryptionKey password must be a 32-character key for Orchestration service (heat) encryption. If you increase the length of the passwords for all other services, ensure that the HeatAuthEncryptionKey password remains at length 32. Create the Secret CR in the cluster:

oc create -f openstack_service_secret.yaml -n openstack

Verify that the Secret CR is created:

oc describe secret osp-secret -n openstack
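Once created, this secret is typically what the control plane definition points at when you deploy RHOSO. The following fragment is a hypothetical sketch only: the spec.secret and spec.storageClass fields are assumptions based on common OpenStackControlPlane examples, the storage class name is the one assumed in section 2.2, and the control plane chapter of the deployment guide remains authoritative. Example (illustrative) control plane reference to the secret

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane      # assumed name
  namespace: openstack
spec:
  secret: osp-secret                 # the Secret CR created in this section
  storageClass: local-storage        # assumed storage class name from section 2.2

This keeps all service passwords in a single Secret CR, so the control plane services consume them by reference rather than embedding credentials in their own specifications.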
Chapter 4. OADP Application backup and restore | Chapter 4. OADP Application backup and restore 4.1. Introduction to OpenShift API for Data Protection The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs). However, OADP does not serve as a disaster recovery solution for etcd or {OCP-short} Operators. OADP support is provided to customer workload namespaces, and cluster scope resources. Full cluster backup and restore are not supported. 4.1.1. OpenShift API for Data Protection APIs OpenShift API for Data Protection (OADP) provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources. OADP provides the following APIs: Backup Restore Schedule BackupStorageLocation VolumeSnapshotLocation 4.1.1.1. Support for OpenShift API for Data Protection Table 4.1. Supported versions of OADP Version OCP version General availability Full support ends Maintenance ends Extended Update Support (EUS) Extended Update Support Term 2 (EUS Term 2) 1.3 4.12 4.13 4.14 4.15 29 Nov 2023 10 Jul 2024 Release of 1.5 31 Oct 2025 EUS must be on OCP 4.14 31 Oct 2026 EUS Term 2 must be on OCP 4.14 4.1.1.1.1. Unsupported versions of the OADP Operator Table 4.2. versions of the OADP Operator which are no longer supported Version General availability Full support ended Maintenance ended 1.2 14 Jun 2023 29 Nov 2023 10 Jul 2024 1.1 01 Sep 2022 14 Jun 2023 29 Nov 2023 1.0 09 Feb 2022 01 Sep 2022 14 Jun 2023 For more details about EUS, see Extended Update Support . For more details about EUS Term 2, see Extended Update Support Term 2 . Additional resources Backing up etcd 4.2. OADP release notes 4.2.1. OADP 1.3 release notes The release notes for OpenShift API for Data Protection (OADP) 1.3 describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues. 4.2.1.1. OADP 1.3.6 release notes OpenShift API for Data Protection (OADP) 1.3.6 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.5. 4.2.1.2. OADP 1.3.5 release notes OpenShift API for Data Protection (OADP) 1.3.5 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.4. 4.2.1.3. OADP 1.3.4 release notes The OpenShift API for Data Protection (OADP) 1.3.4 release notes list resolved issues and known issues. 4.2.1.3.1. Resolved issues The backup spec.resourcepolicy.kind parameter is now case-insensitive Previously, the backup spec.resourcepolicy.kind parameter was only supported with a lower-level string. With this fix, it is now case-insensitive. OADP-2944 Use olm.maxOpenShiftVersion to prevent cluster upgrade to OCP 4.16 version The cluster operator-lifecycle-manager operator must not be upgraded between minor OpenShift Container Platform versions. Using the olm.maxOpenShiftVersion parameter prevents upgrading to OpenShift Container Platform 4.16 version when OADP 1.3 is installed. 
To upgrade to OpenShift Container Platform 4.16 version, upgrade OADP 1.3 on OCP 4.15 version to OADP 1.4. OADP-4803 BSL and VSL are removed from the cluster Previously, when any Data Protection Application (DPA) was modified to remove the Backup Storage Locations (BSL) or Volume Snapshot Locations (VSL) from the backupLocations or snapshotLocations section, BSL or VSL were not removed from the cluster until the DPA was deleted. With this update, BSL/VSL are removed from the cluster. OADP-3050 DPA reconciles and validates the secret key Previously, the Data Protection Application (DPA) reconciled successfully on the wrong Volume Snapshot Locations (VSL) secret key name. With this update, DPA validates the secret key name before reconciling on any VSL. OADP-3052 Velero's cloud credential permissions are now restrictive Previously, Velero's cloud credential permissions were mounted with the 0644 permissions. As a consequence, any one could read the /credentials/cloud file apart from the owner and group making it easier to access sensitive information such as storage access keys. With this update, the permissions of this file are updated to 0640, and this file cannot be accessed by other users except the owner and group. Warning is displayed when ArgoCD managed namespace is included in the backup A warning is displayed during the backup operation when ArgoCD and Velero manage the same namespace. OADP-4736 The list of security fixes that are included in this release is documented in the RHSA-2024:9960 advisory. For a complete list of all issues resolved in this release, see the list of OADP 1.3.4 resolved issues in Jira. 4.2.1.3.2. Known issues Cassandra application pods enter into the CrashLoopBackoff status after restore After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 defaultVolumesToFSBackup and defaultVolumesToFsBackup flags are not identical The dpa.spec.configuration.velero.defaultVolumesToFSBackup flag is not identical to the backup.spec.defaultVolumesToFsBackup flag, which can lead to confusion. OADP-3692 PodVolumeRestore works even though the restore is marked as failed The podvolumerestore continues the data transfer even though the restore is marked as failed. OADP-3039 Velero is unable to skip restoring of initContainer spec Velero might restore the restore-wait init container even though it is not required. OADP-3759 4.2.1.4. OADP 1.3.3 release notes The OpenShift API for Data Protection (OADP) 1.3.3 release notes list resolved issues and known issues. 4.2.1.4.1. Resolved issues OADP fails when its namespace name is longer than 37 characters When installing the OADP Operator in a namespace with more than 37 characters and when creating a new DPA, labeling the cloud-credentials secret fails. With this release, the issue has been fixed. OADP-4211 OADP image PullPolicy set to Always In versions of OADP, the image PullPolicy of the adp-controller-manager and Velero pods was set to Always . This was problematic in edge scenarios where there could be limited network bandwidth to the registry, resulting in slow recovery time following a pod restart. In OADP 1.3.3, the image PullPolicy of the openshift-adp-controller-manager and Velero pods is set to IfNotPresent . 
The list of security fixes that are included in this release is documented in the RHSA-2024:4982 advisory. For a complete list of all issues resolved in this release, see the list of OADP 1.3.3 resolved issues in Jira. 4.2.1.4.2. Known issues Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter in the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 4.2.1.5. OADP 1.3.2 release notes The OpenShift API for Data Protection (OADP) 1.3.2 release notes list resolved issues and known issues. 4.2.1.5.1. Resolved issues DPA fails to reconcile if a valid custom secret is used for BSL DPA fails to reconcile if a valid custom secret is used for Backup Storage Location (BSL), but the default secret is missing. The workaround is to create the required default cloud-credentials initially. When the custom secret is re-created, it can be used and checked for its existence. OADP-3193 CVE-2023-45290: oadp-velero-container : Golang net/http : Memory exhaustion in Request.ParseMultipartForm A flaw was found in the net/http Golang standard library package, which impacts versions of OADP. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue , Request.PostFormValue , or Request.FormFile , limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2023-45290 . CVE-2023-45289: oadp-velero-container : Golang net/http/cookiejar : Incorrect forwarding of sensitive headers and cookies on HTTP redirect A flaw was found in the net/http/cookiejar Golang standard library package, which impacts versions of OADP. When following an HTTP redirect to a domain that is not a subdomain match or exact match of the initial domain, an http.Client does not forward sensitive headers such as Authorization or Cookie . A maliciously crafted HTTP redirect could cause sensitive headers to be unexpectedly forwarded. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2023-45289 . CVE-2024-24783: oadp-velero-container : Golang crypto/x509 : Verify panics on certificates with an unknown public key algorithm A flaw was found in the crypto/x509 Golang standard library package, which impacts versions of OADP. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert . The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2024-24783 . CVE-2024-24784: oadp-velero-plugin-container : Golang net/mail : Comments in display names are incorrectly handled A flaw was found in the net/mail Golang standard library package, which impacts versions of OADP. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. 
Because this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2024-24784 . CVE-2024-24785: oadp-velero-container : Golang: html/template: errors returned from MarshalJSON methods may break template escaping A flaw was found in the html/template Golang standard library package, which impacts versions of OADP. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in OADP 1.3.2. For more details, see CVE-2024-24785 . For a complete list of all issues resolved in this release, see the list of OADP 1.3.2 resolved issues in Jira. 4.2.1.5.2. Known issues Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter in the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 4.2.1.6. OADP 1.3.1 release notes The OpenShift API for Data Protection (OADP) 1.3.1 release notes lists new features and resolved issues. 4.2.1.6.1. New features OADP 1.3.0 Data Mover is now fully supported The OADP built-in Data Mover, introduced in OADP 1.3.0 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads. 4.2.1.6.2. Resolved issues IBM Cloud(R) Object Storage is now supported as a backup storage provider IBM Cloud(R) Object Storage is one of the AWS S3 compatible backup storage providers, which was unsupported previously. With this update, IBM Cloud(R) Object Storage is now supported as an AWS S3 compatible backup storage provider. OADP-3788 OADP operator now correctly reports the missing region error Previously, when you specified profile:default without specifying the region in the AWS Backup Storage Location (BSL) configuration, the OADP operator failed to report the missing region error on the Data Protection Application (DPA) custom resource (CR). This update corrects validation of DPA BSL specification for AWS. As a result, the OADP Operator reports the missing region error. OADP-3044 Custom labels are not removed from the openshift-adp namespace Previously, the openshift-adp-controller-manager pod would reset the labels attached to the openshift-adp namespace. This caused synchronization issues for applications requiring custom labels such as Argo CD, leading to improper functionality. With this update, this issue is fixed and custom labels are not removed from the openshift-adp namespace. OADP-3189 OADP must-gather image collects CRDs Previously, the OADP must-gather image did not collect the custom resource definitions (CRDs) shipped by OADP. Consequently, you could not use the omg tool to extract data in the support shell. With this fix, the must-gather image now collects CRDs shipped by OADP and can use the omg tool to extract data. OADP-3229 Garbage collection has the correct description for the default frequency value Previously, the garbage-collection-frequency field had a wrong description for the default frequency value. 
With this update, garbage-collection-frequency has a correct value of one hour for the gc-controller reconciliation default frequency. OADP-3486 FIPS Mode flag is available in OperatorHub By setting the fips-compliant flag to true , the FIPS mode flag is now added to the OADP Operator listing in OperatorHub. This feature was enabled in OADP 1.3.0 but did not show up in the Red Hat Container catalog as being FIPS enabled. OADP-3495 CSI plugin does not panic with a nil pointer when csiSnapshotTimeout is set to a short duration Previously, when the csiSnapshotTimeout parameter was set to a short duration, the CSI plugin encountered the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference . With this fix, the backup fails with the following error: Timed out awaiting reconciliation of volumesnapshot . OADP-3069 For a complete list of all issues resolved in this release, see the list of OADP 1.3.1 resolved issues in Jira. 4.2.1.6.3. Known issues Backup and storage restrictions for Single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms Review the following backup and storage related restrictions for Single-node OpenShift clusters that are deployed on IBM Power(R) and IBM Z(R) platforms: Storage Only NFS storage is currently compatible with single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms. Backup Only the backing up applications with File System Backup such as kopia and restic are supported for backup and restore operations. OADP-3787 Cassandra application pods enter in the CrashLoopBackoff status after restoring OADP After OADP restores, the Cassandra application pods might enter in the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods with any error or the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and it runs normally. OADP-3767 4.2.1.7. OADP 1.3.0 release notes The OpenShift API for Data Protection (OADP) 1.3.0 release notes lists new features, resolved issues and bugs, and known issues. 4.2.1.7.1. New features Velero built-in DataMover Velero built-in DataMover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OADP 1.3 includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and to write to the Unified Repository. Backing up applications with File System Backup: Kopia or Restic Velero's File System Backup (FSB) supports two backup libraries: the Restic path and the Kopia path. Velero allows users to select between the two paths. For backup, specify the path during the installation through the uploader-type flag. The valid value is either restic or kopia . This field defaults to kopia if the value is not specified. 
The selection cannot be changed after the installation. GCP Cloud authentication Google Cloud Platform (GCP) authentication enables you to use short-lived Google credentials. GCP with Workload Identity Federation enables you to use Identity and Access Management (IAM) to grant external identities IAM roles, including the ability to impersonate service accounts. This eliminates the maintenance and security risks associated with service account keys. AWS ROSA STS authentication You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to backup and restore application data. ROSA provides seamless integration with a wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivering of differentiating experiences to your customers. You can subscribe to the service directly from your AWS account. After the clusters are created, you can operate your clusters by using the OpenShift web console. The ROSA service also uses OpenShift APIs and command-line interface (CLI) tools. 4.2.1.7.2. Resolved issues ACM applications were removed and re-created on managed clusters after restore Applications on managed clusters were deleted and re-created upon restore activation. OpenShift API for Data Protection (OADP 1.2) backup and restore process is faster than the older versions. The OADP performance change caused this behavior when restoring ACM resources. Therefore, some resources were restored before other resources, which caused the removal of the applications from managed clusters. OADP-2686 Restic restore was partially failing due to Pod Security standard During interoperability testing, OpenShift Container Platform 4.14 had the pod Security mode set to enforce , which caused the pod to be denied. This was caused due to the restore order. The pod was getting created before the security context constraints (SCC) resource, since the pod violated the podSecurity standard, it denied the pod. When setting the restore priority field on the Velero server, restore is successful. OADP-2688 Possible pod volume backup failure if Velero is installed in several namespaces There was a regression in Pod Volume Backup (PVB) functionality when Velero was installed in several namespaces. The PVB controller was not properly limiting itself to PVBs in its own namespace. OADP-2308 OADP Velero plugins returning "received EOF, stopping recv loop" message In OADP, Velero plugins were started as separate processes. When the Velero operation completes, either successfully or not, they exit. Therefore, if you see a received EOF, stopping recv loop messages in debug logs, it does not mean an error occurred, it means that a plugin operation has completed. OADP-2176 CVE-2023-39325 Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) In releases of OADP, the HTTP/2 protocol was susceptible to a denial of service attack because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This resulted in a denial of service due to server resource consumption. For more information, see CVE-2023-39325 (Rapid Reset Attack) For a complete list of all issues resolved in this release, see the list of OADP 1.3.0 resolved issues in Jira. 4.2.1.7.3. 
Known issues CSI plugin errors on nil pointer when csiSnapshotTimeout is set to a short duration The CSI plugin errors on nil pointer when csiSnapshotTimeout is set to a short duration. Sometimes it succeeds to complete the snapshot within a short duration, but often it panics with the backup PartiallyFailed with the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference . Backup is marked as PartiallyFailed when volumeSnapshotContent CR has an error If any of the VolumeSnapshotContent CRs have an error related to removing the VolumeSnapshotBeingCreated annotation, it moves the backup to the WaitingForPluginOperationsPartiallyFailed phase. OADP-2871 Performance issues when restoring 30,000 resources for the first time When restoring 30,000 resources for the first time, without an existing-resource-policy, it takes twice as long to restore them, than it takes during the second and third try with an existing-resource-policy set to update . OADP-3071 Post restore hooks might start running before Datadownload operation has released the related PV Due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the related pods persistent volumes (PVs) are released by the Data Mover persistent volume claim (PVC). GCP-Workload Identity Federation VSL backup PartiallyFailed VSL backup PartiallyFailed when GCP workload identity is configured on GCP. For a complete list of all known issues in this release, see the list of OADP 1.3.0 known issues in Jira. 4.2.1.7.4. Upgrade notes Note Always upgrade to the minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3. 4.2.1.7.4.1. Changes from OADP 1.2 to 1.3 The Velero server has been updated from version 1.11 to 1.12. OpenShift API for Data Protection (OADP) 1.3 uses the Velero built-in Data Mover instead of the VolumeSnapshotMover (VSM) or the Volsync Data Mover. This changes the following: The spec.features.dataMover field and the VSM plugin are not compatible with OADP 1.3, and you must remove the configuration from the DataProtectionApplication (DPA) configuration. The Volsync Operator is no longer required for Data Mover functionality, and you can remove it. The custom resource definitions volumesnapshotbackups.datamover.oadp.openshift.io and volumesnapshotrestores.datamover.oadp.openshift.io are no longer required, and you can remove them. The secrets used for the OADP-1.2 Data Mover are no longer required, and you can remove them. OADP 1.3 supports Kopia, which is an alternative file system backup tool to Restic. To employ Kopia, use the new spec.configuration.nodeAgent field as shown in the following example: Example spec: configuration: nodeAgent: enable: true uploaderType: kopia # ... The spec.configuration.restic field is deprecated in OADP 1.3 and will be removed in a future version of OADP. To avoid seeing deprecation warnings, remove the restic key and its values, and use the following new syntax: Example spec: configuration: nodeAgent: enable: true uploaderType: restic # ... Note In a future OADP release, it is planned that the kopia tool will become the default uploaderType value. 4.2.1.7.4.2. Upgrading from OADP 1.2 Technology Preview Data Mover OpenShift API for Data Protection (OADP) 1.2 Data Mover backups cannot be restored with OADP 1.3. 
To prevent a gap in the data protection of your applications, complete the following steps before upgrading to OADP 1.3: Procedure If your cluster backups are sufficient and Container Storage Interface (CSI) storage is available, back up the applications with a CSI backup. If you require off cluster backups: Back up the applications with a file system backup that uses the --default-volumes-to-fs-backup=true or backup.spec.defaultVolumesToFsBackup options. Back up the applications with your object storage plugins, for example, velero-plugin-for-aws . Note The default timeout value for the Restic file system backup is one hour. In OADP 1.3.1 and later, the default timeout value for Restic and Kopia is four hours. Important To restore OADP 1.2 Data Mover backup, you must uninstall OADP, and install and configure OADP 1.2. 4.2.1.7.4.3. Backing up the DPA configuration You must back up your current DataProtectionApplication (DPA) configuration. Procedure Save your current DPA configuration by running the following command: Example USD oc get dpa -n openshift-adp -o yaml > dpa.orig.backup 4.2.1.7.4.4. Upgrading the OADP Operator Use the following sequence when upgrading the OpenShift API for Data Protection (OADP) Operator. Procedure Change your subscription channel for the OADP Operator from stable-1.2 to stable-1.3 . Allow time for the Operator and containers to update and restart. Additional resources Updating installed Operators 4.2.1.7.4.5. Converting DPA to the new version If you need to move backups off cluster with the Data Mover, reconfigure the DataProtectionApplication (DPA) manifest as follows. Procedure Click Operators Installed Operators and select the OADP Operator. In the Provided APIs section, click View more . Click Create instance in the DataProtectionApplication box. Click YAML View to display the current DPA parameters. Example current DPA spec: configuration: features: dataMover: enable: true credentialName: dm-credentials velero: defaultPlugins: - vsm - csi - openshift # ... Update the DPA parameters: Remove the features.dataMover key and values from the DPA. Remove the VolumeSnapshotMover (VSM) plugin. Add the nodeAgent key and values. Example updated DPA spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - csi - openshift # ... Wait for the DPA to reconcile successfully. 4.2.1.7.4.6. Verifying the upgrade Use the following procedure to verify the upgrade. Procedure Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true In OADP 1.3 you can start data movement off cluster per backup versus creating a DataProtectionApplication (DPA) configuration. 
Example USD velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true Example apiVersion: velero.io/v1 kind: Backup metadata: name: example-backup namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - mysql-persistent storageLocation: dpa-sample-1 ttl: 720h0m0s # ... 4.3. OADP performance 4.3.1. OADP recommended network settings For a supported experience with OpenShift API for Data Protection (OADP), you should have a stable and resilient network across {OCP-short} nodes, S3 storage, and in supported cloud environments that meet {OCP-short} network requirement recommendations. To ensure successful backup and restore operations for deployments with remote S3 buckets located off-cluster with suboptimal data paths, it is recommended that your network settings meet the following minimum requirements in such less optimal conditions: Bandwidth (network upload speed to object storage): Greater than 2 Mbps for small backups and 10-100 Mbps depending on the data volume for larger backups. Packet loss: 1% Packet corruption: 1% Latency: 100ms Ensure that your OpenShift Container Platform network performs optimally and meets OpenShift Container Platform network requirements. Important Although Red Hat provides supports for standard backup and restore failures, it does not provide support for failures caused by network settings that do not meet the recommended thresholds. 4.4. OADP features and plugins OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications. The default plugins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources. 4.4.1. OADP features OpenShift API for Data Protection (OADP) supports the following features: Backup You can use OADP to back up all applications on the OpenShift Platform, or you can filter the resources by type, namespace, or label. OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic. Note You must exclude Operators from the backup of an application for backup and restore to succeed. Restore You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the objects by namespace, PV, or label. Note You must exclude Operators from the backup of an application for backup and restore to succeed. Schedule You can schedule backups at specified intervals. Hooks You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container. 4.4.2. OADP plugins The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create custom plugins based on the Velero plugins. OADP also provides plugins for OpenShift Container Platform resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots. Table 4.3. OADP plugins OADP plugin Function Storage location aws Backs up and restores Kubernetes objects. AWS S3 Backs up and restores volumes with snapshots. 
AWS EBS azure Backs up and restores Kubernetes objects. Microsoft Azure Blob storage Backs up and restores volumes with snapshots. Microsoft Azure Managed Disks gcp Backs up and restores Kubernetes objects. Google Cloud Storage Backs up and restores volumes with snapshots. Google Compute Engine Disks openshift Backs up and restores OpenShift Container Platform resources. [1] Object store kubevirt Backs up and restores OpenShift Virtualization resources. [2] Object store csi Backs up and restores volumes with CSI snapshots. [3] Cloud storage that supports CSI snapshots vsm VolumeSnapshotMover relocates snapshots from the cluster into an object store to be used during a restore process to recover stateful applications, in situations such as cluster deletion. [4] Object store Mandatory. Virtual machine disks are backed up with CSI snapshots or Restic. The csi plugin uses the Kubernetes CSI snapshot API. OADP 1.1 or later uses snapshot.storage.k8s.io/v1 OADP 1.0 uses snapshot.storage.k8s.io/v1beta1 OADP 1.2 only. 4.4.3. About OADP Velero plugins You can configure two types of plugins when you install Velero: Default cloud provider plugins Custom plugins Both types of plugin are optional, but most users configure at least one cloud provider plugin. 4.4.3.1. Default Velero cloud provider plugins You can install any of the following default Velero cloud provider plugins when you configure the oadp_v1alpha1_dpa.yaml file during deployment: aws (Amazon Web Services) gcp (Google Cloud Platform) azure (Microsoft Azure) openshift (OpenShift Velero plugin) csi (Container Storage Interface) kubevirt (KubeVirt) You specify the desired default plugins in the oadp_v1alpha1_dpa.yaml file during deployment. Example file The following .yaml file installs the openshift , aws , azure , and gcp plugins: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp 4.4.3.2. Custom Velero plugins You can install a custom Velero plugin by specifying the plugin image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment. You specify the desired custom plugins in the oadp_v1alpha1_dpa.yaml file during deployment. Example file The following .yaml file installs the default openshift , azure , and gcp plugins and a custom plugin that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin : apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin 4.4.3.3. Velero plugins returning "received EOF, stopping recv loop" message Note Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred. 4.4.4. Supported architectures for OADP OpenShift API for Data Protection (OADP) supports the following architectures: AMD64 ARM64 PPC64le s390x Note OADP 1.2.0 and later versions support the ARM64 architecture. 4.4.5. OADP support for IBM Power and IBM Z OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to IBM Power(R) and to IBM Z(R). 
OADP 1.1.7 was tested successfully against OpenShift Container Platform 4.11 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.1.7 in terms of backup locations for these systems. OADP 1.2.3 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.2.3 in terms of backup locations for these systems. OADP 1.3.6 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power(R) and IBM Z(R). The sections that follow give testing and support information for OADP 1.3.6 in terms of backup locations for these systems. 4.4.5.1. OADP support for target backup locations using IBM Power IBM Power(R) running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.7 against all S3 backup location targets, which are not AWS, as well. IBM Power(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.12, 4.13. 4.14, and 4.15, and OADP 1.2.3 against all S3 backup location targets, which are not AWS, as well. IBM Power(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power(R) with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 against all S3 backup location targets, which are not AWS, as well. 4.4.5.2. OADP testing and support for target backup locations using IBM Z IBM Z(R) running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.7 against all S3 backup location targets, which are not AWS, as well. IBM Z(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.12, 4.13, 4.14 and 4.15, and OADP 1.2.3 against all S3 backup location targets, which are not AWS, as well. IBM Z(R) running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z(R) with OpenShift Container Platform 4.12, 4.13 4.14, and 4.15, and 1.3.6 against all S3 backup location targets, which are not AWS, as well. 4.4.5.2.1. Known issue of OADP using IBM Power(R) and IBM Z(R) platforms Currently, there are backup method restrictions for Single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms. 
Only NFS storage is currently compatible with Single-node OpenShift clusters on these platforms. In addition, only the File System Backup (FSB) methods such as Kopia and Restic are supported for backup and restore operations. There is currently no workaround for this issue. 4.4.6. OADP plugins known issues The following section describes known issues in OpenShift API for Data Protection (OADP) plugins: 4.4.6.1. Velero plugin panics during imagestream backups due to a missing secret When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret . When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error: 024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94... 4.4.6.1.1. Workaround to avoid the panic error To avoid the Velero plugin panic error, perform the following steps: Label the custom BSL with the relevant label: USD oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl After the BSL is labeled, wait until the DPA reconciles. Note You can force the reconciliation by making any minor change to the DPA itself. When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it: USD oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data' 4.4.6.2. OpenShift ADP Controller segmentation fault If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault. You can have either velero or cloudstorage defined, because they are mutually exclusive fields. If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails. If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails. For more information about this issue, see OADP-1054 . 4.4.6.2.1. OpenShift ADP Controller segmentation fault workaround You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault. 4.5. OADP use cases 4.5.1. Backup using OpenShift API for Data Protection and {odf-first} Following is a use case for using OADP and {odf-short} to back up an application. 4.5.1.1. Backing up an application using OADP and {odf-short} In this use case, you back up an application by using OADP and store the backup in an object storage provided by {odf-first}. You create a object bucket claim (OBC) to configure the backup storage location. You use {odf-short} to configure an Amazon S3-compatible object storage bucket. {odf-short} provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage service. In this use case, you use NooBaa MCG as the backup storage location. 
You use the NooBaa MCG service with OADP by using the aws provider plugin. You configure the Data Protection Application (DPA) with the backup storage location (BSL). You create a backup custom resource (CR) and specify the application namespace to back up. You create and verify the backup. Prerequisites You installed the OADP Operator. You installed the {odf-short} Operator. You have an application with a database running in a separate namespace. Procedure Create an OBC manifest file to request a NooBaa MCG bucket as shown in the following example: Example OBC apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2 1 The name of the object bucket claim. 2 The name of the bucket. Create the OBC by running the following command: USD oc create -f <obc_file_name> 1 1 Specify the file name of the object bucket claim manifest. When you create an OBC, {odf-short} creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command: USD oc extract --to=- cm/test-obc 1 1 test-obc is the name of the OBC. Example output # BUCKET_NAME backup-c20...41fd # BUCKET_PORT 443 # BUCKET_REGION # BUCKET_SUBREGION # BUCKET_HOST s3.openshift-storage.svc To get the bucket credentials from the generated secret , run the following command: USD oc extract --to=- secret/test-obc Example output # AWS_ACCESS_KEY_ID ebYR....xLNMc # AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym Get the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace by running the following command: USD oc get route s3 -n openshift-storage Create a cloud-credentials file with the object bucket credentials as shown in the following command: [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create the cloud-credentials secret with the cloud-credentials file content as shown in the following command: USD oc create secret generic \ cloud-credentials \ -n openshift-adp \ --from-file cloud=cloud-credentials Configure the Data Protection Application (DPA) as shown in the following example: Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: "true" insecureSkipTLSVerify: "true" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp 1 Set to true to use the OADP Data Mover to enable movement of Container Storage Interface (CSI) snapshots to a remote object storage. 2 This is the S3 URL of {odf-short} storage. 3 Specify the bucket name. Create the DPA by running the following command: USD oc apply -f <dpa_filename> Verify that the DPA is created successfully by running the following command. In the example output, you can see the status object has type field set to Reconciled . This means, the DPA is successfully created. 
USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure a backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. Create the backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output. USD oc describe backup test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.5.2. OpenShift API for Data Protection (OADP) restore use case Following is a use case for using OADP to restore a backup to a different namespace. 4.5.2.1. Restoring an application to a different namespace using OADP Restore a backup of an application by using OADP to a new target namespace, test-restore-application . To restore a backup, you create a restore custom resource (CR) as shown in the following example. In the restore CR, the source namespace refers to the application namespace that you included in the backup. You then verify the restore by changing your project to the new restored namespace and verifying the resources. Prerequisites You installed the OADP Operator. You have the backup of an application to be restored. Procedure Create a restore CR as shown in the following example: Example restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3 1 The name of the restore CR. 2 Specify the name of the backup. 3 namespaceMapping maps the source application namespace to the target application namespace. Specify the application namespace that you backed up. test-restore-application is the target namespace where you want to restore the backup. 
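Before you apply the restore CR, you can confirm the exact value to use for the backupName field by listing the existing Backup CRs. The following commands are a quick check and assume the default openshift-adp namespace; backups in the Completed phase are the safest candidates to restore:

# List available backups and check the phase of the one you plan to restore
$ oc get backups.velero.io -n openshift-adp
$ oc get backups.velero.io <backup_name> -n openshift-adp -o jsonpath='{.status.phase}'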
Apply the restore CR by running the following command: USD oc apply -f <restore_cr_filename> Verification Verify that the restore is in the Completed phase by running the following command: USD oc describe restores.velero.io <restore_name> -n openshift-adp Change to the restored namespace test-restore-application by running the following command: USD oc project test-restore-application Verify the restored resources such as persistent volume claim (pvc), service (svc), deployment, secret, and config map by running the following command: USD oc get pvc,svc,deployment,secret,configmap Example output NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s 4.5.3. Including a self-signed CA certificate during backup You can include a self-signed Certificate Authority (CA) certificate in the Data Protection Application (DPA) and then back up an application. You store the backup in a NooBaa bucket provided by {odf-first}. 4.5.3.1. Backing up an application and its self-signed CA certificate The s3.openshift-storage.svc service, provided by {odf-short}, uses a Transport Layer Security protocol (TLS) certificate that is signed with the self-signed service CA. To prevent a certificate signed by unknown authority error, you must include a self-signed CA certificate in the backup storage location (BSL) section of DataProtectionApplication custom resource (CR). For this situation, you must complete the following tasks: Request a NooBaa bucket by creating an object bucket claim (OBC). Extract the bucket details. Include a self-signed CA certificate in the DataProtectionApplication CR. Back up an application. Prerequisites You installed the OADP Operator. You installed the {odf-short} Operator. You have an application with a database running in a separate namespace. Procedure Create an OBC manifest to request a NooBaa bucket as shown in the following example: Example ObjectBucketClaim CR apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2 1 Specifies the name of the object bucket claim. 2 Specifies the name of the bucket. Create the OBC by running the following command: USD oc create -f <obc_file_name> When you create an OBC, {odf-short} creates a secret and a ConfigMap with the same name as the object bucket claim. The secret object contains the bucket credentials, and the ConfigMap object contains information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command: USD oc extract --to=- cm/test-obc 1 1 The name of the OBC is test-obc . 
Example output # BUCKET_NAME backup-c20...41fd # BUCKET_PORT 443 # BUCKET_REGION # BUCKET_SUBREGION # BUCKET_HOST s3.openshift-storage.svc To get the bucket credentials from the secret object, run the following command: USD oc extract --to=- secret/test-obc Example output # AWS_ACCESS_KEY_ID ebYR....xLNMc # AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym Create a cloud-credentials file with the object bucket credentials by using the following example configuration: [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create the cloud-credentials secret with the cloud-credentials file content by running the following command: USD oc create secret generic \ cloud-credentials \ -n openshift-adp \ --from-file cloud=cloud-credentials Extract the service CA certificate from the openshift-service-ca.crt config map by running the following command. Ensure that you encode the certificate in Base64 format and note the value to use in the step. USD oc get cm/openshift-service-ca.crt \ -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo Example output LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... ....gpwOHMwaG9CRmk5a3....FLS0tLS0K Configure the DataProtectionApplication CR manifest file with the bucket name and CA certificate as shown in the following example: Example DataProtectionApplication CR apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: "default" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: "true" insecureSkipTLSVerify: "false" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3 1 The insecureSkipTLSVerify flag can be set to either true or false . If set to "true", SSL/TLS security is disabled. If set to false , SSL/TLS security is enabled. 2 Specify the name of the bucket extracted in an earlier step. 3 Copy and paste the Base64 encoded certificate from the step. Create the DataProtectionApplication CR by running the following command: USD oc apply -f <dpa_filename> Verify that the DataProtectionApplication CR is created successfully by running the following command: USD oc get dpa -o yaml Example output apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: "20....9:54:02Z" message: Reconcile complete reason: Complete status: "True" type: Reconciled kind: List metadata: resourceVersion: "" Verify that the backup storage location (BSL) is available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true Configure the Backup CR by using the following example: Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 1 Specify the namespace for the application to back up. 
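This example backs up every resource in the application namespace. If you need only a subset of resources, the Velero Backup API also accepts a label selector. The following variation is a minimal sketch; the app: <application_name> label is an example and must match the labels on your application resources:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <application_namespace>
  labelSelector:
    matchLabels:
      app: <application_name>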
Create the Backup CR by running the following command: USD oc apply -f <backup_cr_filename> Verification Verify that the Backup object is in the Completed phase by running the following command: USD oc describe backup test-backup -n openshift-adp Example output Name: test-backup Namespace: openshift-adp # ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none> 4.6. Installing and configuring OADP 4.6.1. About installing OADP As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types: Amazon Web Services Microsoft Azure Google Cloud Platform Multicloud Object Gateway AWS S3 compatible object storage, such as Multicloud Object Gateway or MinIO You can configure multiple backup storage locations within the same namespace for each individual OADP deployment. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation . The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage. You can back up persistent volumes (PVs) by using snapshots or a File System Backup (FSB). To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud provider, such as OpenShift Data Foundation Note If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1. x . OADP 1.0. x does not support CSI backup on OCP 4.11 and later. OADP 1.0. x includes Velero 1.7. x and expects the API group snapshot.storage.k8s.io/v1beta1 , which is not present on OCP 4.11 and later. 
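Before you decide between snapshots and File System Backup, you can check which snapshot API versions and CSI snapshot classes your cluster exposes. The following commands are a quick, read-only check; the VolumeSnapshotClass names returned depend on your storage driver:

# List the snapshot API versions served by the cluster
$ oc api-resources --api-group=snapshot.storage.k8s.io

# List the CSI snapshot classes provided by your storage driver, if any exist
$ oc get volumesnapshotclasses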
If your cloud provider does not support snapshots or if your storage is NFS, you can back up applications with Backing up applications with File System Backup: Kopia or Restic on object storage. You create a default Secret and then you install the Data Protection Application. 4.6.1.1. AWS S3 compatible backup storage providers OADP is compatible with many object storage providers for use with different backup and snapshot operations. Several object storage providers are fully supported, several are unsupported but known to work, and some have known limitations. 4.6.1.1.1. Supported backup storage providers The following AWS S3 compatible object storage providers are fully supported by OADP through the AWS plugin for use as backup storage locations: MinIO Multicloud Object Gateway (MCG) Amazon Web Services (AWS) S3 IBM Cloud(R) Object Storage S3 Ceph RADOS Gateway (Ceph Object Gateway) Red Hat Container Storage {odf-full} Google Cloud Platform (GCP) Microsoft Azure Note Google Cloud Platform (GCP) and Microsoft Azure have their own Velero object store plugins. 4.6.1.1.2. Unsupported backup storage providers The following AWS S3 compatible object storage providers, are known to work with Velero through the AWS plugin, for use as backup storage locations, however, they are unsupported and have not been tested by Red Hat: IBM Cloud Oracle Cloud DigitalOcean NooBaa, unless installed using Multicloud Object Gateway (MCG) Tencent Cloud Ceph RADOS v12.2.7 Quobyte Cloudian HyperStore Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . 4.6.1.1.3. Backup storage providers with known limitations The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin with a limited feature set: Swift - It works for use as a backup storage location for backup storage, but is not compatible with Restic for filesystem-based volume backup and restore. 4.6.1.2. Configuring Multicloud Object Gateway (MCG) for disaster recovery on OpenShift Data Foundation If you use cluster storage for your MCG bucket backupStorageLocation on OpenShift Data Foundation, configure MCG as an external object store. Warning Failure to configure MCG as an external object store might lead to backups not being available. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Procedure Configure MCG as an external object store as described in Adding storage resources for hybrid or Multicloud . Additional resources Overview of backup and snapshot locations in the Velero documentation 4.6.1.3. About OADP update channels When you install an OADP Operator, you choose an update channel . This channel determines which upgrades to the OADP Operator and to Velero you receive. You can switch channels at any time. The following update channels are available: The stable channel is now deprecated. The stable channel contains the patches (z-stream updates) of OADP ClusterServiceVersion for OADP.v1.1.z and older versions from OADP.v1.0.z . The stable-1.0 channel is deprecated and is not supported. 
The stable-1.1 channel is deprecated and is not supported. The stable-1.2 channel is deprecated and is not supported. The stable-1.3 channel contains OADP.v1.3.z , the most recent OADP 1.3 ClusterServiceVersion . The stable-1.4 channel contains OADP.v1.4.z , the most recent OADP 1.4 ClusterServiceVersion . For more information, see OpenShift Operator Life Cycles . Which update channel is right for you? The stable channel is now deprecated. If you are already using the stable channel, you will continue to get updates from OADP.v1.1.z . Choose the stable-1.y update channel to install OADP 1.y and to continue receiving patches for it. If you choose this channel, you will receive all z-stream patches for version 1.y.z. When must you switch update channels? If you have OADP 1.y installed, and you want to receive patches only for that y-stream, you must switch from the stable update channel to the stable-1.y update channel. You will then receive all z-stream patches for version 1.y.z. If you have OADP 1.0 installed, want to upgrade to OADP 1.1, and then receive patches only for OADP 1.1, you must switch from the stable-1.0 update channel to the stable-1.1 update channel. You will then receive all z-stream patches for version 1.1.z. If you have OADP 1.y installed, with y greater than 0, and want to switch to OADP 1.0, you must uninstall your OADP Operator and then reinstall it using the stable-1.0 update channel. You will then receive all z-stream patches for version 1.0.z. Note You cannot switch from OADP 1.y to OADP 1.0 by switching update channels. You must uninstall the Operator and then reinstall it. 4.6.1.4. Installation of OADP on multiple namespaces You can install OpenShift API for Data Protection (OADP) into multiple namespaces on the same cluster so that multiple project owners can manage their own OADP instance. This use case has been validated with File System Backup (FSB) and Container Storage Interface (CSI). You install each instance of OADP as specified by the per-platform procedures contained in this document with the following additional requirements: All deployments of OADP on the same cluster must be the same version, for example, 1.1.4. Installing different versions of OADP on the same cluster is not supported. Each individual deployment of OADP must have a unique set of credentials and at least one BackupStorageLocation configuration. You can also use multiple BackupStorageLocation configurations within the same namespace. By default, each OADP deployment has cluster-level access across namespaces. OpenShift Container Platform administrators need to review security and RBAC settings carefully and make any necessary changes to them to ensure that each OADP instance has the correct permissions. Additional resources Cluster service version 4.6.1.5. Velero CPU and memory requirements based on collected data The following recommendations are based on observations of performance made in the scale and performance lab. The backup and restore resources can be impacted by the type of plugin, the amount of resources required by that backup or restore, and the respective data contained in the persistent volumes (PVs) related to those resources. 4.6.1.5.1. 
CPU and memory requirement for configurations Configuration types [1] Average usage [2] Large usage resourceTimeouts CSI Velero: CPU- Request 200m, Limits 1000m Memory - Request 256Mi, Limits 1024Mi Velero: CPU- Request 200m, Limits 2000m Memory- Request 256Mi, Limits 2048Mi N/A Restic [3] Restic: CPU- Request 1000m, Limits 2000m Memory - Request 16Gi, Limits 32Gi [4] Restic: CPU - Request 2000m, Limits 8000m Memory - Request 16Gi, Limits 40Gi 900m [5] Data Mover N/A N/A 10m - average usage 60m - large usage Average usage - use these settings for most usage situations. Large usage - use these settings for large usage situations, such as a large PV (500GB Usage), multiple namespaces (100+), or many pods within a single namespace (2000 pods+), and for optimal performance for backup and restore involving large datasets. Restic resource usage corresponds to the amount of data, and type of data. For example, many small files or large amounts of data can cause Restic to use large amounts of resources. The Velero documentation references 500m as a supplied default, for most of our testing we found a 200m request suitable with 1000m limit. As cited in the Velero documentation, exact CPU and memory usage is dependent on the scale of files and directories, in addition to environmental limitations. Increasing the CPU has a significant impact on improving backup and restore times. Data Mover - Data Mover default resourceTimeout is 10m. Our tests show that for restoring a large PV (500GB usage), it is required to increase the resourceTimeout to 60m. Note The resource requirements listed throughout the guide are for average usage only. For large usage, adjust the settings as described in the table above. 4.6.1.5.2. NodeAgent CPU for large usage Testing shows that increasing NodeAgent CPU can significantly improve backup and restore times when using OpenShift API for Data Protection (OADP). Important It is not recommended to use Kopia without limits in production environments on nodes running production workloads due to Kopia's aggressive consumption of resources. However, running Kopia with limits that are too low results in CPU limiting and slow backups and restore situations. Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications. You can set these limits in Ceph MDS pods by following the procedure in Changing the CPU and memory resources on the rook-ceph pods . You need to add the following lines to the storage cluster Custom Resource (CR) to set the limits: resources: mds: limits: cpu: "3" memory: 128Gi requests: cpu: "3" memory: 8Gi 4.6.2. Installing the OADP Operator You can install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.13 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.14 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. 4.6.2.1. 
OADP-Velero-OpenShift Container Platform version relationship OADP version Velero version OpenShift Container Platform version 1.1.0 1.9 4.9 and later 1.1.1 1.9 4.9 and later 1.1.2 1.9 4.9 and later 1.1.3 1.9 4.9 and later 1.1.4 1.9 4.9 and later 1.1.5 1.9 4.9 and later 1.1.6 1.9 4.11 and later 1.1.7 1.9 4.11 and later 1.2.0 1.11 4.11 and later 1.2.1 1.11 4.11 and later 1.2.2 1.11 4.11 and later 1.2.3 1.11 4.11 and later 1.3.0 1.12 4.10 - 4.15 1.3.1 1.12 4.10 - 4.15 1.3.2 1.12 4.10 - 4.15 1.3.3 1.12 4.10 - 4.15 1.4.0 1.14 4.14 and later 1.4.1 1.14 4.14 and later 4.6.3. Configuring the OpenShift API for Data Protection with Amazon Web Services You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure AWS for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.3.1. Configuring Amazon Web Services You configure Amazon Web Services (AWS) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the AWS CLI installed. Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. 
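If you back up several clusters to separate buckets, one convenient way to keep the user names unique is to derive them from the cluster infrastructure name. This is only a suggested convention, not a required step; if you use a different user name, substitute it for velero in the remaining commands:

# Derive a unique IAM user name from the OpenShift infrastructure name
$ CLUSTER_NAME=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
$ aws iam create-user --user-name "velero-${CLUSTER_NAME}"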
Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application. 4.6.3.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; {odf-full}; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.3.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. 
If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.3.2.2. Creating profiles for different credentials If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file. Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR). Procedure Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example: [backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> Create a Secret object with the credentials-velero file: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1 Add the profiles to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: "backupStorage" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: "volumeSnapshot" 4.6.3.2.3. Configuring the backup storage location using AWS You can configure the AWS backup storage location (BSL) as shown in the following example procedure. Prerequisites You have created an object storage bucket using AWS. You have installed the OADP Operator. Procedure Configure the BSL custom resource (CR) with values as applicable to your use case. Backup storage location apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: "true" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: "50..c-4da1-419f-a16e-ei...49f" 12 customerKeyEncryptionFile: "/credentials/customer-key" 13 signatureVersion: "1" 14 profile: "default" 15 insecureSkipTLSVerify: "true" 16 enableSharedConfig: "true" 17 tagging: "" 18 checksumAlgorithm: "CRC32" 19 1 1 The name of the object store plugin. In this example, the plugin is aws . This field is required. 2 The name of the bucket in which to store backups. This field is required. 3 The prefix within the bucket in which to store backups. This field is optional. 4 The credentials for the backup storage location. You can set custom credentials. 
If custom credentials are not set, the default credentials' secret is used. 5 The key within the secret credentials' data. 6 The name of the secret containing the credentials. 7 The AWS region where the bucket is located. Optional if s3ForcePathStyle is false. 8 A boolean flag to decide whether to use path-style addressing instead of virtual hosted bucket addressing. Set to true if using a storage service such as MinIO or NooBaa. This is an optional field. The default value is false . 9 You can specify the AWS S3 URL here for explicitness. This field is primarily for storage services such as MinIO or NooBaa. This is an optional field. 10 This field is primarily used for storage services such as MinIO or NooBaa. This is an optional field. 11 The name of the server-side encryption algorithm to use for uploading objects, for example, AES256 . This is an optional field. 12 Specify an AWS KMS key ID. You can format, as shown in the example, as an alias, such as alias/<KMS-key-alias-name> , or the full ARN to enable encryption of the backups stored in S3. Note that kmsKeyId cannot be used in with customerKeyEncryptionFile . This is an optional field. 13 Specify the file that has the SSE-C customer key to enable customer key encryption of the backups stored in S3. The file must contain a 32-byte string. The customerKeyEncryptionFile field points to a mounted secret within the velero container. Add the following key-value pair to the velero cloud-credentials secret: customer-key: <your_b64_encoded_32byte_string> . Note that the customerKeyEncryptionFile field cannot be used with the kmsKeyId field. The default value is an empty string ( "" ), which means SSE-C is disabled. This is an optional field. 14 The version of the signature algorithm used to create signed URLs. You use signed URLs to download the backups, or fetch the logs. Valid values are 1 and 4 . The default version is 4 . This is an optional field. 15 The name of the AWS profile in the credentials file. The default value is default . This is an optional field. 16 Set the insecureSkipTLSVerify field to true if you do not want to verify the TLS certificate when connecting to the object store, for example, for self-signed certificates with MinIO. Setting to true is susceptible to man-in-the-middle attacks and is not recommended for production workloads. The default value is false . This is an optional field. 17 Set the enableSharedConfig field to true if you want to load the credentials file as a shared config file. The default value is false . This is an optional field. 18 Specify the tags to annotate the AWS S3 objects. Specify the tags in key-value pairs. The default value is an empty string ( "" ). This is an optional field. 19 Specify the checksum algorithm to use for uploading objects to S3. The supported values are: CRC32 , CRC32C , SHA1 , and SHA256 . If you set the field as an empty string ( "" ), the checksum check will be skipped. The default value is CRC32 . This is an optional field. 4.6.3.2.4. Creating an OADP SSE-C encryption key for additional data security Amazon Web Services (AWS) S3 applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. OpenShift API for Data Protection (OADP) encrypts data by using SSL/TLS, HTTPS, and the velero-repo-credentials secret when transferring the data from a cluster to storage. To protect backup data in case of lost or stolen AWS credentials, apply an additional layer of encryption. 
The velero-plugin-for-aws plugin provides several additional encryption methods. You should review its configuration options and consider implementing additional encryption. You can store your own encryption keys by using server-side encryption with customer-provided keys (SSE-C). This feature provides additional security if your AWS credentials become exposed. Warning Be sure to store cryptographic keys in a secure and safe manner. Encrypted data and backups cannot be recovered if you do not have the encryption key. Prerequisites To make OADP mount a secret that contains your SSE-C key to the Velero pod at /credentials , use the following default secret name for AWS: cloud-credentials , and leave at least one of the following labels empty: dpa.spec.backupLocations[].velero.credential dpa.spec.snapshotLocations[].velero.credential This is a workaround for a known issue: https://issues.redhat.com/browse/OADP-3971 . Note The following procedure contains an example of a spec:backupLocations block that does not specify credentials. This example would trigger an OADP secret mounting. If you need the backup location to have credentials with a different name than cloud-credentials , you must add a snapshot location, such as the one in the following example, that does not contain a credential name. Because the example does not contain a credential name, the snapshot location will use cloud-credentials as its secret for taking snapshots. Example snapshot location in a DPA without credentials specified snapshotLocations: - velero: config: profile: default region: <region> provider: aws # ... Procedure Create an SSE-C encryption key: Generate a random number and save it as a file named sse.key by running the following command: USD dd if=/dev/urandom bs=1 count=32 > sse.key Encode the sse.key by using Base64 and save the result as a file named sse_encoded.key by running the following command: USD cat sse.key | base64 > sse_encoded.key Link the file named sse_encoded.key to a new file named customer-key by running the following command: USD ln -s sse_encoded.key customer-key Create an OpenShift Container Platform secret: If you are initially installing and configuring OADP, create the AWS credential and encryption key secret at the same time by running the following command: USD oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key If you are updating an existing installation, edit the values of the cloud-credential secret block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret # ... Edit the value of the customerKeyEncryptionFile attribute in the backupLocations block of the DataProtectionApplication CR manifest, as in the following example: spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default # ... Warning You must restart the Velero pod to remount the secret credentials properly on an existing installation. The installation is complete, and you can back up and restore OpenShift Container Platform resources. The data saved in AWS S3 storage is encrypted with the new key, and you cannot download it from the AWS S3 console or API without the additional encryption key. 
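For reference, the key-generation and secret-creation steps above can be combined into a short shell session. This is a minimal sketch for a new installation that uses the default cloud-credentials secret name; <path>/openshift_aws_credentials is a placeholder for your own AWS credentials file:

# Generate a 32-byte SSE-C key and Base64-encode it
$ dd if=/dev/urandom bs=1 count=32 > sse.key
$ base64 < sse.key > sse_encoded.key

# Create the secret; the customer-key entry is what customerKeyEncryptionFile expects
$ oc create secret generic cloud-credentials \
    --namespace openshift-adp \
    --from-file cloud=<path>/openshift_aws_credentials \
    --from-file customer-key=sse_encoded.key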
Verification To verify that you cannot download the encrypted files without the inclusion of an additional key, create a test file, upload it, and then try to download it. Create a test file by running the following command: USD echo "encrypt me please" > test.txt Upload the test file by running the following command: USD aws s3api put-object \ --bucket <bucket> \ --key test.txt \ --body test.txt \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 Try to download the file. In either the Amazon web console or the terminal, run the following command: USD s3cmd get s3://<bucket>/test.txt test.txt The download fails because the file is encrypted with an additional key. Download the file with the additional encryption key by running the following command: USD aws s3api get-object \ --bucket <bucket> \ --key test.txt \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 \ downloaded.txt Read the file contents by running the following command: USD cat downloaded.txt Example output encrypt me please Additional resources You can also use the additional encryption key to download a file that was backed up by Velero by running a different command. See Downloading a file with an SSE-C encryption key for files backed up by Velero . 4.6.3.2.4.1. Downloading a file with an SSE-C encryption key for files backed up by Velero When you are verifying an SSE-C encryption key, you can also use the additional encryption key to download files that were backed up by Velero. Procedure Download the file with the additional encryption key for files backed up by Velero by running the following command: USD aws s3api get-object \ --bucket <bucket> \ --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz \ --sse-customer-key fileb://sse.key \ --sse-customer-algorithm AES256 \ --debug \ velero_download.tar.gz 4.6.3.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.3.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.3.3.2.
Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.3.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.3.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. 
If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials , which contains separate profiles for the backup and snapshot location credentials. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: "default" s3ForcePathStyle: "true" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: "default" credential: key: cloud name: cloud-credentials 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 9 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 10 Specify whether to force path style URLs for S3 objects (Boolean). Not Required for AWS S3. Required only for S3 compatible storage. 11 Specify the URL of the object store that you are using to store backups. Not required for AWS S3. Required only for S3 compatible storage. 12 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 13 Specify a snapshot location, unless you use CSI snapshots or a File System Backup (FSB) to back up PVs. 14 The snapshot location must be in the same region as the PVs. 15 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. 
If you specify a custom name, the custom name is used for the snapshot location. If your backup and snapshot locations use different credentials, create separate profiles in the credentials-velero file. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.3.4.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.3.5. Configuring the backup storage location with a MD5 checksum algorithm You can configure the Backup Storage Location (BSL) in the Data Protection Application (DPA) to use a MD5 checksum algorithm for both Amazon Simple Storage Service (Amazon S3) and S3-compatible storage providers. The checksum algorithm calculates the checksum for uploading and downloading objects to Amazon S3. You can use one of the following options to set the checksumAlgorithm field in the spec.backupLocations.velero.config.checksumAlgorithm section of the DPA. CRC32 CRC32C SHA1 SHA256 Note You can also set the checksumAlgorithm field to an empty value to skip the MD5 checksum check. If you do not set a value for the checksumAlgorithm field, then the default value is set to CRC32 . Prerequisites You have installed the OADP Operator. You have configured Amazon S3, or S3-compatible object storage as a backup location. 
Procedure Configure the BSL in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: "" 1 insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi 1 Specify the checksumAlgorithm . In this example, the checksumAlgorithm field is set to an empty value. You can select an option from the following list: CRC32 , CRC32C , SHA1 , SHA256 . Important If you are using Noobaa as the object storage provider, and you do not set the spec.backupLocations.velero.config.checksumAlgorithm field in the DPA, an empty value of checksumAlgorithm is added to the BSL configuration. The empty value is only added for BSLs that are created using the DPA. This value is not added if you create the BSL by using any other method. 4.6.3.6. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.3.7. Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... 
backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.3.7.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.3.7.2. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . 4.6.4. Configuring the OpenShift API for Data Protection with IBM Cloud You install the OpenShift API for Data Protection (OADP) Operator on an IBM Cloud cluster to back up and restore applications on the cluster. You configure IBM Cloud Object Storage (COS) to store the backups. 4.6.4.1. Configuring the COS instance You create an IBM Cloud Object Storage (COS) instance to store the OADP backup data. After you create the COS instance, configure the HMAC service credentials. 
Prerequisites You have an IBM Cloud Platform account. You installed the IBM Cloud CLI . You are logged in to IBM Cloud. Procedure Install the IBM Cloud Object Storage (COS) plugin by running the following command: USD ibmcloud plugin install cos -f Set a bucket name by running the following command: USD BUCKET=<bucket_name> Set a bucket region by running the following command: USD REGION=<bucket_region> 1 1 Specify the bucket region, for example, eu-gb . Create a resource group by running the following command: USD ibmcloud resource group-create <resource_group_name> Set the target resource group by running the following command: USD ibmcloud target -g <resource_group_name> Verify that the target resource group is correctly set by running the following command: USD ibmcloud target Example output API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default In the example output, the resource group is set to Default . Set a resource group name by running the following command: USD RESOURCE_GROUP=<resource_group> 1 1 Specify the resource group name, for example, "default" . Create an IBM Cloud service-instance resource by running the following command: USD ibmcloud resource service-instance-create \ <service_instance_name> \ 1 <service_name> \ 2 <service_plan> \ 3 <region_name> 4 1 Specify a name for the service-instance resource. 2 Specify the service name. Alternatively, you can specify a service ID. 3 Specify the service plan for your IBM Cloud account. 4 Specify the region name. Example command USD ibmcloud resource service-instance-create test-service-instance cloud-object-storage \ 1 standard \ global \ -d premium-global-deployment 2 1 The service name is cloud-object-storage . 2 The -d flag specifies the deployment name. Extract the service instance ID by running the following command: USD SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id') Create a COS bucket by running the following command: USD ibmcloud cos bucket-create \ --bucket USDBUCKET \ --ibm-service-instance-id USDSERVICE_INSTANCE_ID \ --region USDREGION Variables such as USDBUCKET , USDSERVICE_INSTANCE_ID , and USDREGION are replaced by the values you set previously. Create HMAC credentials by running the following command: USD ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\"HMAC\":true} Extract the access key ID and the secret access key from the HMAC credentials and save them in the credentials-velero file. You can use the credentials-velero file to create a secret for the backup storage location. Run the following command: USD cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__ 4.6.4.2. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.4.3. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.6.4.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. 
Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5 1 The provider is aws when you use IBM Cloud as a backup storage location. 2 Specify the IBM Cloud Object Storage (COS) bucket name. 3 Specify the COS region name, for example, eu-gb . 4 Specify the S3 URL of the COS bucket. For example, http://s3.eu-gb.cloud-object-storage.appdomain.cloud . Here, eu-gb is the region name. Replace the region name according to your bucket region. 5 Defines the name of the secret you created by using the access key and the secret access key from the HMAC credentials. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.4.5. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. 4.6.4.6. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. 
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.4.7. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.4.8. Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 
5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.4.9. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". 4.6.5. Configuring the OpenShift API for Data Protection with Microsoft Azure You install the OpenShift API for Data Protection (OADP) with Microsoft Azure by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure Azure for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.5.1. Configuring Microsoft Azure You configure Microsoft Azure for OpenShift API for Data Protection (OADP). Prerequisites You must have the Azure CLI installed. Tools that use Azure services should always have restricted permissions to make sure that Azure resources are safe. Therefore, instead of having applications sign in as a fully privileged user, Azure offers service principals. An Azure service principal is a name that can be used with applications, hosted services, or automated tools. This identity is used for access to resources. Create a service principal Sign in using a service principal and password Sign in using a service principal and certificate Manage service principal roles Create an Azure resource using a service principal Reset service principal credentials For more details, see Create an Azure service principal with Azure CLI . 4.6.5.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). 
Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.5.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-azure . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.5.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-azure . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider, as shown in the following example.
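For reference, the following is a minimal sketch of a credentials-velero file in the format used by the Velero plugin for Microsoft Azure. Every value is a placeholder that you must replace with your own Azure identifiers, and you should verify the variable names against the plugin version shipped with your OADP release:
USD cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
AZURE_TENANT_ID=<azure_tenant_id>
AZURE_CLIENT_ID=<azure_client_id>
AZURE_CLIENT_SECRET=<azure_client_secret>
AZURE_RESOURCE_GROUP=<azure_resource_group>
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
In this sketch, AZURE_CLIENT_ID and AZURE_CLIENT_SECRET are the credentials of the service principal you created for Velero, and AZURE_RESOURCE_GROUP is the resource group that contains your cluster disks.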
Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" provider: azure 1 Backup location Secret with custom name. 4.6.5.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.5.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.5.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... 
backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.5.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.5.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 
If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: "true" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Specify the Azure resource group. 9 Specify the Azure storage account ID. 10 Specify the Azure subscription ID. 11 If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. 12 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 13 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 14 You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs. 15 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-azure , is used. If you specify a custom name, the custom name is used for the backup location. Click Create . 
Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.5.5. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.5.5.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.5.5.2. 
Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.5.5.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.6. Configuring the OpenShift API for Data Protection with Google Cloud Platform You install the OpenShift API for Data Protection (OADP) with Google Cloud Platform (GCP) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure GCP for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.6.1. Configuring Google Cloud Platform You configure Google Cloud Platform (GCP) for the OpenShift API for Data Protection (OADP). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to create a Secret object for GCP before you install the Data Protection Application. 4.6.6.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.6.2.1.
Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials-gcp . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.6.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials-gcp . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 1 Backup location Secret with custom name. 4.6.6.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.6.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... 
configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.6.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.6.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. 
You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.6.4. Google workload identity federation cloud authentication Applications running outside Google Cloud typically use long-lived credentials, such as service account keys, to gain access to Google Cloud resources. These service account keys might become a security risk if they are not properly managed. With Google's workload identity federation you can use Identity and Access Management (IAM) to grant IAM roles to external identities, including the ability to impersonate service accounts. This eliminates the maintenance and security risks associated with service account keys. Workload identity federation handles certificate encryption and decryption, user attribute extraction, and validation. Identity federation externalizes authentication, passing it over to Security Token Services (STS), and reduces the demands on individual developers. Authorization and controlling access to resources remain the responsibility of the application. When backing up volumes, OADP on GCP with Google workload identity federation authentication supports only CSI snapshots; it does not support Volume Snapshot Location (VSL) backups. For more details, see Google workload identity federation known issues . If you do not use Google workload identity federation cloud authentication, continue to Installing the Data Protection Application . Prerequisites You have installed a cluster in manual mode with GCP Workload Identity configured . You have access to the Cloud Credential Operator utility ( ccoctl ) and to the associated workload identity pool. Procedure Create an oadp-credrequest directory by running the following command: USD mkdir -p oadp-credrequest Create a CredentialsRequest.yaml file as follows: echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml Use the ccoctl utility to process the CredentialsRequest objects in the oadp-credrequest directory by running the following command: USD ccoctl gcp create-service-accounts \ --name=<name> \ --project=<gcp_project_id> \ --credentials-requests-dir=oadp-credrequest \ --workload-identity-pool=<pool_id> \ --workload-identity-provider=<provider_id> The manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml file is now available to use in the following steps.
Create a namespace by running the following command: USD oc create namespace <OPERATOR_INSTALL_NS> Apply the credentials to the namespace by running the following command: USD oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml 4.6.6.4.1. Google workload identity federation known issues Volume Snapshot Location (VSL) backups finish with a PartiallyFailed phase when GCP workload identity federation is configured. Google workload identity federation authentication does not support VSL backups. 4.6.6.5. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-gcp . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The openshift plugin is mandatory. 3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 4 The administrative agent that routes the administrative requests to servers. 5 Set this value to true if you want to enable nodeAgent and perform File System Backup. 6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 8 Secret key that contains credentials. 
For Google workload identity federation cloud authentication use service_account.json . 9 Secret name that contains credentials. If you do not specify this value, the default name, cloud-credentials-gcp , is used. 10 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 11 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. 12 Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs. 13 The snapshot location must be in the same region as the PVs. 14 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-gcp , is used. If you specify a custom name, the custom name is used for the backup location. 15 Google workload identity federation supports internal image backup. Set this field to false if you do not want to use image backup. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.6.6. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.6.6.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. 
The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.6.6.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.6.6.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.7. Configuring the OpenShift API for Data Protection with Multicloud Object Gateway You install the OpenShift API for Data Protection (OADP) with Multicloud Object Gateway (MCG) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure Multicloud Object Gateway as a backup location. MCG is a component of OpenShift Data Foundation. You configure MCG as a backup location in the DataProtectionApplication custom resource (CR). Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.6.7.1. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials, which you need to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. Create a credentials-velero file: USD cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF You use the credentials-velero file to create a Secret object when you install the Data Protection Application. 4.6.7.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two Secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 4.6.7.2.1.
Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials . Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.7.2.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: config: profile: "default" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Specify the region, following the naming convention of the documentation of your object storage server. 2 Backup location Secret with custom name. 4.6.7.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.7.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... 
configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.7.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.7.3.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step. 
You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.7.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: "default" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws . For Azure and GCP object stores, the azure or gcp plugin is required. 3 The openshift plugin is mandatory. 4 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 5 The administrative agent that routes the administrative requests to servers. 6 Set this value to true if you want to enable nodeAgent and perform File System Backup. 7 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 8 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 
9 Specify the region, following the naming convention of the documentation of your object storage server. 10 Specify the URL of the S3 endpoint. 11 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials , is used. If you specify a custom name, the custom name is used for the backup location. 12 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 13 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.7.5. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.7.5.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. 
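For example, before you set the nodeSelector , you might list the labels that are already applied to a node to confirm that the label you plan to match is present; the node name in this command is a placeholder: USD oc get node <node_name> --show-labels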
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.7.5.2. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.7.5.3. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Performance tuning guide for Multicloud Object Gateway . Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.8. Configuring the OpenShift API for Data Protection with OpenShift Data Foundation You install the OpenShift API for Data Protection (OADP) with OpenShift Data Foundation by installing the OADP Operator and configuring a backup location and a snapshot location. Then, you install the Data Protection Application. Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You can configure Multicloud Object Gateway or any AWS S3-compatible object storage as a backup location. Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You create a Secret for the backup location and then you install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks . 4.6.8.1. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two Secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. Additional resources Creating an Object Bucket Claim using the OpenShift Web Console . 4.6.8.1.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. The default name of the Secret is cloud-credentials , unless your backup storage provider has a default plugin, such as aws , azure , or gcp . In that case, the default name is specified in the provider-specific OADP installation procedure. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format.
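For example, for an S3-compatible object store that uses the aws default plugin, the credentials-velero file typically uses the AWS shared-credentials format shown here; the key values are placeholders, not real credentials: [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>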
Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 4.6.8.1.2. Creating secrets for different credentials If your backup and snapshot locations use different credentials, you must create two Secret objects: Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR). Snapshot location Secret with the default name, cloud-credentials . This Secret is not specified in the DataProtectionApplication CR. Procedure Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider. Create a Secret for the snapshot location with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero Create a credentials-velero file for the backup location in the appropriate format for your object storage. Create a Secret for the backup location with a custom name: USD oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: ... backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> 1 Backup location Secret with custom name. 4.6.8.2. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 4.6.8.2.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. For more details, see Configuring node agents and node labels . 4.6.8.2.1.1. Adjusting Ceph CPU and memory requirements based on collected data The following recommendations are based on observations of performance made in the scale and performance lab. 
The changes are specifically related to Red Hat OpenShift Data Foundation (ODF). If working with ODF, consult the appropriate tuning guides for official recommendations. 4.6.8.2.1.1.1. CPU and memory requirement for configurations Backup and restore operations require large amounts of CephFS PersistentVolumes (PVs). To avoid Ceph MDS pods restarting with an out-of-memory (OOM) error, the following configuration is suggested: for CPU, the request is changed to 3 and the maximum limit is 3; for memory, the request is changed to 8 Gi and the maximum limit is 128 Gi. 4.6.8.2.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 4.6.8.2.2.1. Using CA certificates with the velero command aliased for Velero deployment You might want to use the Velero CLI without installing it locally on your system by creating an alias for it. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. To use an aliased Velero command, run the following command: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' Check that the alias is working by running the following command: Example USD velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands: USD CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') USD [[ -n USDCA_CERT ]] && echo "USDCA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert" USD velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt To fetch the backup logs, run the following command: USD velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt> You can use these logs to view failures and warnings for the resources that you cannot back up. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the step.
You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command: USD oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt" /tmp/your-cacert.txt In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required. 4.6.8.3. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws . For Azure and GCP object stores, the azure or gcp plugin is required. 3 Optional: The kubevirt plugin is used with OpenShift Virtualization. 4 Specify the csi default plugin if you use CSI snapshots to back up PVs. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 
10 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 4.6.8.4. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.8.4.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. 
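As an optional check after the DPA is reconciled, you can verify which nodes the node agent pods were scheduled on and compare them with your selector; this sketch assumes the default openshift-adp namespace and that the node agent pod names contain node-agent : USD oc get pods -n openshift-adp -o wide | grep node-agent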
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.8.4.2. Creating an Object Bucket Claim for disaster recovery on OpenShift Data Foundation If you use cluster storage for your Multicloud Object Gateway (MCG) bucket backupStorageLocation on OpenShift Data Foundation, create an Object Bucket Claim (OBC) using the OpenShift Web Console. Warning Failure to configure an Object Bucket Claim (OBC) might lead to backups not being available. Note Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications . Procedure Create an Object Bucket Claim (OBC) using the OpenShift web console as described in Creating an Object Bucket Claim using the OpenShift Web Console . 4.6.8.4.3. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 4.6.8.4.4. Disabling the node agent in DataProtectionApplication If you are not using Restic , Kopia , or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent , ensure the OADP Operator is idle and not running any backups. Procedure To disable the nodeAgent , set the enable flag to false . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: false 1 uploaderType: kopia # ... 1 Disables the node agent. To enable the nodeAgent , set the enable flag to true . See the following example: Example DataProtectionApplication CR # ... configuration: nodeAgent: enable: true 1 uploaderType: kopia # ... 1 Enables the node agent. You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs". Additional resources Installing the Data Protection Application with the kubevirt and openshift plugins Running tasks in pods using jobs . Configuring the OpenShift API for Data Protection (OADP) with multiple backup storage locations 4.6.9. Configuring the OpenShift API for Data Protection with OpenShift Virtualization You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. 
Then, you can install the Data Protection Application. Back up and restore virtual machines by using the OpenShift API for Data Protection . Note OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options: Container Storage Interface (CSI) backups Container Storage Interface (CSI) backups with DataMover The following storage options are excluded: File system backup and restore Volume snapshot backups and restores For more information, see Backing up applications with File System Backup: Kopia or Restic . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 4.6.9.1. Installing and configuring OADP with OpenShift Virtualization As a cluster administrator, you install OADP by installing the OADP Operator. The Operator installs Velero 1.14 . Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Install the OADP Operator according to the instructions for your storage provider. Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins. Back up virtual machines by creating a Backup custom resource (CR). Warning Red Hat support is limited to only the following options: CSI backups CSI backups with DataMover. You restore the Backup CR by creating a Restore CR. Additional resources OADP plugins Backup custom resource (CR) Restore CR Using Operator Lifecycle Manager on restricted networks 4.6.9.2. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The kubevirt plugin is mandatory for OpenShift Virtualization. 3 Specify the plugin for the backup provider, for example, gcp , if it exists. 4 The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 
5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 10 Specify the nodes on which Kopia are available. By default, Kopia runs on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true Warning If you run a backup of a Microsoft Windows virtual machine (VM) immediately after the VM reboots, the backup might fail with a PartiallyFailed error. This is because, immediately after a VM boots, the Microsoft Windows Volume Shadow Copy Service (VSS) and Guest Agent (GA) service are not ready. The VSS and GA service being unready causes the backup to fail. In such a case, retry the backup a few minutes after the VM boots. 4.6.9.3. Backing up a single VM If you have a namespace with multiple virtual machines (VMs), and want to back up only one of them, you can use the label selector to filter the VM that needs to be included in the backup. You can filter the VM by using the app: vmname label. Prerequisites You have installed the OADP Operator. You have multiple VMs running in a namespace. You have added the kubevirt plugin in the DataProtectionApplication (DPA) custom resource (CR). You have configured the BackupStorageLocation CR in the DataProtectionApplication CR and BackupStorageLocation is available. 
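Optionally, before you create the Backup CR, you can confirm that the VM carries the expected app label and that the backup storage location reports an Available phase; the VM namespace in the first command is a placeholder: USD oc get vm -n <vm_namespace> --show-labels USD oc get backupstoragelocations.velero.io -n openshift-adp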
Procedure Configure the Backup CR as shown in the following example: Example Backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3 1 Specify the name of the namespace where you have created the VMs. 2 Specify the VM name that needs to be backed up. 3 Specify the name of the BackupStorageLocation CR. To create a Backup CR, run the following command: USD oc apply -f <backup_cr_file_name> 1 1 Specify the name of the Backup CR file. 4.6.9.4. Restoring a single VM After you have backed up a single virtual machine (VM) by using the label selector in the Backup custom resource (CR), you can create a Restore CR and point it to the backup. This restore operation restores a single VM. Prerequisites You have installed the OADP Operator. You have backed up a single VM by using the label selector. Procedure Configure the Restore CR as shown in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true 1 Specifies the name of the backup of a single VM. To restore the single VM, run the following command: USD oc apply -f <restore_cr_file_name> 1 1 Specify the name of the Restore CR file. 4.6.9.5. Restoring a single VM from a backup of multiple VMs If you have a backup containing multiple virtual machines (VMs), and you want to restore only one VM, you can use the LabelSelectors section in the Restore CR to select the VM to restore. To ensure that the persistent volume claim (PVC) attached to the VM is correctly restored, and the restored VM is not stuck in a Provisioning status, use both the app: <vm_name> and the kubevirt.io/created-by labels. To match the kubevirt.io/created-by label, use the UID of DataVolume of the VM. Prerequisites You have installed the OADP Operator. You have labeled the VMs that need to be backed up. You have a backup of multiple VMs. Procedure Before you take a backup of many VMs, ensure that the VMs are labeled by running the following command: USD oc label vm <vm_name> app=<vm_name> -n openshift-adp Configure the label selectors in the Restore CR as shown in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2 1 Specify the UID of DataVolume of the VM that you want to restore. For example, b6... 53a-ddd7-4d9d-9407-a0c... e5 . 2 Specify the name of the VM that you want to restore. For example, test-vm . To restore a VM, run the following command: USD oc apply -f <restore_cr_file_name> 1 1 Specify the name of the Restore CR file. 4.6.9.6. Configuring the DPA with client burst and QPS settings The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second. You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. 
You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values. Prerequisites You have installed the OADP Operator. Procedure Configure the client-burst and the client-qps fields in the DPA as shown in the following example: Example Data Protection Application apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: "true" profile: "default" region: <bucket_region> s3ForcePathStyle: "true" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt 1 Specify the client-burst value. In this example, the client-burst field is set to 500. 2 Specify the client-qps value. In this example, the client-qps field is set to 300. 4.6.9.6.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" 4.6.9.7. About incremental backup support OADP supports incremental backups of block and Filesystem persistent volumes for both containerized and OpenShift Virtualization workloads. The following tables summarize the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover: Table 4.4. OADP backup support matrix for containerized workloads: For the Filesystem volume mode, FSB - Restic: S [1], I [2]; FSB - Kopia: S [1], I [2]; CSI: S [1]; CSI Data Mover: S [1], I [2]. For the Block volume mode, FSB - Restic: N [3]; FSB - Kopia: N [3]; CSI: S [1]; CSI Data Mover: S [1], I [2]. Table 4.5. OADP backup support matrix for OpenShift Virtualization workloads: For the Filesystem volume mode, FSB - Restic: N [3]; FSB - Kopia: N [3]; CSI: S [1]; CSI Data Mover: S [1], I [2]. For the Block volume mode, FSB - Restic: N [3]; FSB - Kopia: N [3]; CSI: S [1]; CSI Data Mover: S [1], I [2]. [1] Backup supported. [2] Incremental backup supported. [3] Not supported. Note The CSI Data Mover backups use Kopia regardless of uploaderType . Important Red Hat only supports the combination of OADP versions 1.3.0 and later, and OpenShift Virtualization versions 4.14 and later. OADP versions before 1.3.0 are not supported for back up and restore of OpenShift Virtualization. 4.6.10. Configuring the OpenShift API for Data Protection (OADP) with more than one Backup Storage Location You can configure one or more backup storage locations (BSLs) in the Data Protection Application (DPA). You can also select the location to store the backup in when you create the backup.
With this configuration, you can store your backups in the following ways: To different regions To a different storage provider OADP supports multiple credentials for configuring more than one BSL, so that you can specify the credentials to use with any BSL. 4.6.10.1. Configuring the DPA with more than one BSL You can configure the DPA with more than one BSL and specify the credentials provided by the cloud provider. Prerequisites You must install the OADP Operator. You must create the secrets by using the credentials provided by the cloud provider. Procedure Configure the DPA with more than one BSL. See the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: "default" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: "default" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: "true" s3ForcePathStyle: "true" credential: key: cloud name: <custom_secret_name_odf> 9 #... 1 Specify a name for the first BSL. 2 This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR , the default BSL is used. You can set only one BSL as the default. 3 Specify the bucket name. 4 Specify a prefix for Velero backups; for example, velero . 5 Specify the AWS region for the bucket. 6 Specify the name of the default Secret object that you created. 7 Specify a name for the second BSL. 8 Specify the URL of the S3 endpoint. 9 Specify the correct name for the Secret ; for example, custom_secret_name_odf . If you do not specify a Secret name, the default name is used. Specify the BSL to be used in the backup CR. See the following example. Example backup CR apiVersion: velero.io/v1 kind: Backup # ... spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true 1 Specify the namespace to back up. 2 Specify the storage location. 4.6.10.2. OADP use case for two BSLs In this use case, you configure the DPA with two storage locations by using two cloud credentials. You back up an application with a database by using the default BSL. OADP stores the backup resources in the default BSL. You then backup the application again by using the second BSL. Prerequisites You must install the OADP Operator. You must configure two backup storage locations: AWS S3 and Multicloud Object Gateway (MCG). You must have an application with a database deployed on a Red Hat OpenShift cluster. Procedure Create the first Secret for the AWS S3 storage provider with the default name by running the following command: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1 1 Specify the name of the cloud credentials file for AWS S3. Create the second Secret for MCG with a custom name by running the following command: USD oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1 1 Specify the name of the cloud credentials file for MCG. Note the name of the mcg-secret custom secret. Configure the DPA with the two BSLs as shown in the following example. 
Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: "true" profile: noobaa region: <region_name> 3 s3ForcePathStyle: "true" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws 1 Specify the AWS region for the bucket. 2 Specify the AWS S3 bucket name. 3 Specify the region, following the naming convention of the documentation of MCG. 4 Specify the URL of the S3 endpoint for MCG. 5 Specify the name of the custom secret for MCG storage. 6 Specify the MCG bucket name. Create the DPA by running the following command: USD oc create -f <dpa_file_name> 1 1 Specify the file name of the DPA you configured. Verify that the DPA has reconciled by running the following command: USD oc get dpa -o yaml Verify that the BSLs are available by running the following command: USD oc get bsl Example output NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s Create a backup CR with the default BSL. Note In the following example, the storageLocation field is not specified in the backup CR. Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. Create a backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. Verify that the backup completed with the default BSL by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Create a backup CR by using MCG as the BSL. In the following example, note that the second storageLocation value is specified at the time of backup CR creation. Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. 2 Specify the second storage location. Create a second backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. Verify that the backup completed with the storage location as MCG by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Additional resources Creating profiles for different credentials 4.6.11. Configuring the OpenShift API for Data Protection (OADP) with more than one Volume Snapshot Location You can configure one or more Volume Snapshot Locations (VSLs) to store the snapshots in different cloud provider regions. 4.6.11.1. Configuring the DPA with more than one VSL You configure the DPA with more than one VSL and specify the credentials provided by the cloud provider. Make sure that you configure the snapshot location in the same region as the persistent volumes. See the following example. 
Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #... 1 Specify the region. The snapshot location must be in the same region as the persistent volumes. 2 Specify the custom credential name. 4.7. Uninstalling OADP 4.7.1. Uninstalling the OpenShift API for Data Protection You uninstall the OpenShift API for Data Protection (OADP) by deleting the OADP Operator. See Deleting Operators from a cluster for details. 4.8. OADP backing up 4.8.1. Backing up applications Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if using non-local backups, for example, S3 buckets. Because all taken backup remains until expired, also check the time to live (TTL) setting of the schedule. You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR . The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage. If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots . For more information about CSI volume snapshots, see CSI volume snapshots . Important The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation . The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage. If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Kopia or Restic. See Backing up applications with File System Backup: Kopia or Restic . PodVolumeRestore fails with a ... /.snapshot: read-only file system error The ... /.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory. Do not give Velero write access to the .snapshot directory, and disable client access to this directory. 
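As noted at the start of this section, backups are retained until their TTL expires, and frequent backups consume storage in the backup location. When you tune backup frequency and retention, it can help to review the expiration times of the backups that already exist. The following is a minimal sketch, assuming the default openshift-adp namespace; the EXPIRATION column reads the same status field that appears in the example outputs later in this chapter:

oc get backups.velero.io -n openshift-adp \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,EXPIRATION:.status.expiration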
Additional resources Enable or disable client access to Snapshot copy directory by editing a share Prerequisites for backup and restore with FlashBlade Important The OpenShift API for Data Protection (OADP) does not support backing up volume snapshots that were created by other software. 4.8.1.1. Previewing resources before running backup and restore OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations. Prerequisites You have installed the OADP Operator. Procedure To preview the resources included in the backup before running the actual backup, run the following command: USD velero backup create <backup-name> --snapshot-volumes false 1 1 Specify the value of --snapshot-volumes parameter as false . To know more details about the backup resources, run the following command: USD velero describe backup <backup_name> --details 1 1 Specify the name of the backup. To preview the resources included in the restore before running the actual restore, run the following command: USD velero restore create --from-backup <backup-name> 1 1 Specify the name of the backup created to review the backup resources. Important The velero restore create command creates restore resources in the cluster. You must delete the resources created as part of the restore, after you review the resources. To know more details about the restore resources, run the following command: USD velero describe restore <restore_name> --details 1 1 Specify the name of the restore. You can create backup hooks to run commands before or after the backup operation. See Creating backup hooks . You can schedule backups by creating a Schedule CR instead of a Backup CR. See Scheduling backups using Schedule CR . 4.8.1.2. Known issues OpenShift Container Platform 4.14 enforces a pod security admission (PSA) policy that can hinder the readiness of pods during a Restic restore process. This issue has been resolved in the OADP 1.1.6 and OADP 1.2.2 releases, therefore it is recommended that users upgrade to these releases. Additional resources Installing Operators on clusters for administrators Installing Operators in namespaces for non-administrators 4.8.2. Creating a Backup CR You back up Kubernetes images, internal images, and persistent volumes (PVs) by creating a Backup custom resource (CR). Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Backup location prerequisites: You must have S3 object storage configured for Velero. You must have a backup location configured in the DataProtectionApplication CR. Snapshot location prerequisites: Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots. For CSI snapshots, you must create a VolumeSnapshotClass CR to register the CSI driver. You must have a volume location configured in the DataProtectionApplication CR. 
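Before you start the procedure below, you can optionally confirm that the DataProtectionApplication CR is in a Ready state, which is one of the prerequisites listed above. The following sketch assumes OADP is installed in the default openshift-adp namespace and that the DPA reports a Reconciled condition; condition names can vary between OADP releases, so treat the JSONPath as illustrative:

# Prints "True" when the DPA has reconciled successfully
oc get dpa -n openshift-adp \
  -o jsonpath='{.items[0].status.conditions[?(@.type=="Reconciled")].status}{"\n"}'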
Procedure Retrieve the backupStorageLocations CRs by entering the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m Create a Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s 5 labelSelector: 6 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 7 - matchLabels: app: <label_1> app: <label_2> app: <label_3> 1 Specify an array of namespaces to back up. 2 Optional: Specify an array of resources to include in the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. If unspecified, all resources are included. 3 Optional: Specify an array of resources to exclude from the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. 4 Specify the name of the backupStorageLocations CR. 5 The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. But storing this data consumes more space in the target backup locations. An additional storage is consumed with frequent backups, which are created even before other unexpired completed backups might have timed out. 6 Map of {key,value} pairs of backup resources that have all the specified labels. 7 Map of {key,value} pairs of backup resources that have one or more of the specified labels. Verify that the status of the Backup CR is Completed : USD oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}' 4.8.3. Backing up persistent volumes with CSI snapshots You back up persistent volumes with Container Storage Interface (CSI) snapshots by editing the VolumeSnapshotClass custom resource (CR) of the cloud storage before you create the Backup CR, see CSI volume snapshots . For more information, see Creating a Backup CR . Prerequisites The cloud provider must support CSI snapshots. You must enable CSI in the DataProtectionApplication CR. Procedure Add the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR: Example configuration file apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: "true" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3 1 Must be set to true . 2 Must be set to true . 3 OADP supports the Retain and Delete deletion policy types for CSI and Data Mover backup and restore. For the OADP 1.2 Data Mover, set the deletion policy type to Retain . steps You can now create a Backup CR. 4.8.4. Backing up applications with File System Backup: Kopia or Restic You can use OADP to back up and restore Kubernetes volumes attached to pods from the file system of the volumes. This process is called File System Backup (FSB) or Pod Volume Backup (PVB). It is accomplished by using modules from the open source backup tools Restic or Kopia. 
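Which of the two tools FSB uses is controlled by the uploaderType field of the DataProtectionApplication CR, as described in the prerequisites below. If you are not sure how an existing DPA is configured, a quick check such as the following can help; this sketch assumes the DPA is in the openshift-adp namespace:

# Prints "kopia" or "restic" depending on the configured uploader
oc get dpa -n openshift-adp \
  -o jsonpath='{.items[0].spec.configuration.nodeAgent.uploaderType}{"\n"}'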
If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using FSB. Note Restic is installed by the OADP Operator by default. If you prefer, you can install Kopia instead. FSB integration with OADP provides a solution for backing up and restoring almost any type of Kubernetes volumes. This integration is an additional capability of OADP and is not a replacement for existing functionality. You back up Kubernetes resources, internal images, and persistent volumes with Kopia or Restic by editing the Backup custom resource (CR). You do not need to specify a snapshot location in the DataProtectionApplication CR. Note In OADP version 1.3 and later, you can use either Kopia or Restic for backing up applications. For the Built-in DataMover, you must use Kopia. In OADP version 1.2 and earlier, you can only use Restic for backing up applications. Important FSB does not support backing up hostPath volumes. For more information, see FSB limitations . PodVolumeRestore fails with a ... /.snapshot: read-only file system error The ... /.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory. Do not give Velero write access to the .snapshot directory, and disable client access to this directory. Additional resources Enable or disable client access to Snapshot copy directory by editing a share Prerequisites for backup and restore with FlashBlade Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. You must not disable the default nodeAgent installation by setting spec.configuration.nodeAgent.enable to false in the DataProtectionApplication CR. You must select Kopia or Restic as the uploader by setting spec.configuration.nodeAgent.uploaderType to kopia or restic in the DataProtectionApplication CR. The DataProtectionApplication CR must be in a Ready state. Procedure Create the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1 ... 1 In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true setting within the spec block. In OADP version 1.1, add defaultVolumesToRestic: true . 4.8.5. Creating backup hooks When performing a backup, it is possible to specify one or more commands to execute in a container within a pod, based on the pod being backed up. The commands can be configured to performed before any custom action processing ( Pre hooks), or after all custom actions have been completed and any additional items specified by the custom action have been backed up ( Post hooks). You create backup hooks to run commands in a container in a pod by editing the Backup custom resource (CR). Procedure Add a hook to the spec.hooks block of the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11 ... 1 Optional: You can specify namespaces to which the hook applies. 
If this value is not specified, the hook applies to all namespaces. 2 Optional: You can specify namespaces to which the hook does not apply. 3 Currently, pods are the only supported resource that hooks can apply to. 4 Optional: You can specify resources to which the hook does not apply. 5 Optional: This hook only applies to objects matching the label. If this value is not specified, the hook applies to all objects. 6 Array of hooks to run before the backup. 7 Optional: If the container is not specified, the command runs in the first container in the pod. 8 This is the entry point for the init container being added. 9 Allowed values for error handling are Fail and Continue . The default is Fail . 10 Optional: How long to wait for the commands to run. The default is 30s . 11 This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks. 4.8.6. Scheduling backups using Schedule CR The schedule operation allows you to create a backup of your data at a particular time, specified by a Cron expression. You schedule backups by creating a Schedule custom resource (CR) instead of a Backup CR. Warning Leave enough time in your backup schedule for a backup to finish before another backup is created. For example, if a backup of a namespace typically takes 10 minutes, do not schedule backups more frequently than every 15 minutes. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Procedure Retrieve the backupStorageLocations CRs: USD oc get backupStorageLocations -n openshift-adp Example output NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m Create a Schedule CR, as in the following example: USD cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s 5 EOF Note To schedule a backup at specific intervals, enter the <duration_in_minutes> in the following format: schedule: "*/10 * * * *" Enter the minutes value between quotation marks ( " " ). 1 cron expression to schedule the backup, for example, 0 7 * * * to perform a backup every day at 7:00. 2 Array of namespaces to back up. 3 Name of the backupStorageLocations CR. 4 Optional: In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true key-value pair to your configuration when performing backups of volumes with Restic. In OADP version 1.1, add the defaultVolumesToRestic: true key-value pair when you back up volumes with Restic. 5 The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. But storing this data consumes more space in the target backup locations. An additional storage is consumed with frequent backups, which are created even before other unexpired completed backups might have timed out. Verify that the status of the Schedule CR is Completed after the scheduled backup runs: USD oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}' 4.8.7. 
Deleting backups You can delete a backup by creating the DeleteBackupRequest custom resource (CR) or by running the velero backup delete command as explained in the following procedures. The volume backup artifacts are deleted at different times depending on the backup method: Restic: The artifacts are deleted in the full maintenance cycle, after the backup is deleted. Container Storage Interface (CSI): The artifacts are deleted immediately when the backup is deleted. Kopia: The artifacts are deleted after three full maintenance cycles of the Kopia repository, after the backup is deleted. 4.8.7.1. Deleting a backup by creating a DeleteBackupRequest CR You can delete a backup by creating a DeleteBackupRequest custom resource (CR). Prerequisites You have run a backup of your application. Procedure Create a DeleteBackupRequest CR manifest file: apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1 1 Specify the name of the backup. Apply the DeleteBackupRequest CR to delete the backup: USD oc apply -f <deletebackuprequest_cr_filename> 4.8.7.2. Deleting a backup by using the Velero CLI You can delete a backup by using the Velero CLI. Prerequisites You have run a backup of your application. You downloaded the Velero CLI and can access the Velero binary in your cluster. Procedure To delete the backup, run the following Velero command: USD velero backup delete <backup_name> -n openshift-adp 1 1 Specify the name of the backup. 4.8.7.3. About Kopia repository maintenance There are two types of Kopia repository maintenance: Quick maintenance Runs every hour to keep the number of index blobs (n) low. A high number of indexes negatively affects the performance of Kopia operations. Does not delete any metadata from the repository without ensuring that another copy of the same metadata exists. Full maintenance Runs every 24 hours to perform garbage collection of repository contents that are no longer needed. snapshot-gc , a full maintenance task, finds all files and directory listings that are no longer accessible from snapshot manifests and marks them as deleted. A full maintenance is a resource-costly operation, as it requires scanning all directories in all snapshots that are active in the cluster. 4.8.7.3.1. Kopia maintenance in OADP The repo-maintain-job jobs are executed in the namespace where OADP is installed, as shown in the following example: pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m You can check the logs of the repo-maintain-job for more details about the cleanup and the removal of artifacts in the backup object storage. You can find a note, as shown in the following example, in the repo-maintain-job when the full cycle maintenance is due: not due for full maintenance cycle until 2024-00-00 18:29:4 Important Three successful executions of a full maintenance cycle are required for the objects to be deleted from the backup object storage. This means you can expect up to 72 hours for all the artifacts in the backup object storage to be deleted. 4.8.7.4. Deleting a backup repository After you delete the backup, and after the Kopia repository maintenance cycles to delete the related artifacts are complete, the backup is no longer referenced by any metadata or manifest objects. 
You can then delete the backuprepository custom resource (CR) to complete the backup deletion process. Prerequisites You have deleted the backup of your application. You have waited up to 72 hours after the backup is deleted. This time frame allows Kopia to run the repository maintenance cycles. Procedure To get the name of the backup repository CR for a backup, run the following command: USD oc get backuprepositories.velero.io -n openshift-adp To delete the backup repository CR, run the following command: USD oc delete backuprepository <backup_repository_name> -n openshift-adp 1 1 Specify the name of the backup repository from the earlier step. 4.8.8. About Kopia Kopia is a fast and secure open-source backup and restore tool that allows you to create encrypted snapshots of your data and save the snapshots to remote or cloud storage of your choice. Kopia supports network and local storage locations, and many cloud or remote storage locations, including: Amazon S3 and any cloud storage that is compatible with S3 Azure Blob Storage Google Cloud Storage platform Kopia uses content-addressable storage for snapshots: Snapshots are always incremental; data that is already included in snapshots is not re-uploaded to the repository. A file is only uploaded to the repository again if it is modified. Stored data is deduplicated; if multiple copies of the same file exist, only one of them is stored. If files are moved or renamed, Kopia can recognize that they have the same content and does not upload them again. 4.8.8.1. OADP integration with Kopia OADP 1.3 supports Kopia as the backup mechanism for pod volume backup in addition to Restic. You must choose one or the other at installation by setting the uploaderType field in the DataProtectionApplication custom resource (CR). The possible values are restic or kopia . If you do not specify an uploaderType , OADP 1.3 defaults to using Kopia as the backup mechanism. The data is written to and read from a unified repository. The following example shows a DataProtectionApplication CR configured for using Kopia: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia # ... 4.9. OADP restoring 4.9.1. Restoring applications You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR . You can create restore hooks to run commands in a container in a pod by editing the Restore CR. See Creating restore hooks . 4.9.1.1. Previewing resources before running backup and restore OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations. Prerequisites You have installed the OADP Operator. Procedure To preview the resources included in the backup before running the actual backup, run the following command: USD velero backup create <backup-name> --snapshot-volumes false 1 1 Specify the value of --snapshot-volumes parameter as false . To know more details about the backup resources, run the following command: USD velero describe backup <backup_name> --details 1 1 Specify the name of the backup. 
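The preview backup created with the --snapshot-volumes false flag is a regular Backup object in the cluster and in the backup storage location. After you have reviewed it, you can remove it with the Velero CLI, as described in the backup deletion section, for example:

velero backup delete <backup-name> -n openshift-adp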
To preview the resources included in the restore before running the actual restore, run the following command: USD velero restore create --from-backup <backup-name> 1 1 Specify the name of the backup created to review the backup resources. Important The velero restore create command creates restore resources in the cluster. You must delete the resources created as part of the restore, after you review the resources. To know more details about the restore resources, run the following command: USD velero describe restore <restore_name> --details 1 1 Specify the name of the restore. 4.9.1.2. Creating a Restore CR You restore a Backup custom resource (CR) by creating a Restore CR. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. You must have a Velero Backup CR. The persistent volume (PV) capacity must match the requested size at backup time. Adjust the requested size if needed. Procedure Create a Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3 1 Name of the Backup CR. 2 Optional: Specify an array of resources to include in the restore process. Resources might be shortcuts (for example, po for pods ) or fully-qualified. If unspecified, all resources are included. 3 Optional: The restorePVs parameter can be set to false to turn off restore of PersistentVolumes from VolumeSnapshot of Container Storage Interface (CSI) snapshots or from native snapshots when VolumeSnapshotLocation is configured. Verify that the status of the Restore CR is Completed by entering the following command: USD oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}' Verify that the backup resources have been restored by entering the following command: USD oc get all -n <namespace> 1 1 Namespace that you backed up. If you restore DeploymentConfig with volumes or if you use post-restore hooks, run the dc-post-restore.sh cleanup script by entering the following command: USD bash dc-post-restore.sh <restore_name> Note The cleanup script was named dc-restic-post-restore.sh in earlier OADP releases. During the restore process, the OADP Velero plug-ins scale down the DeploymentConfig objects and restore the pods as standalone pods. This is done to prevent the cluster from deleting the restored DeploymentConfig pods immediately on restore and to allow the restore and post-restore hooks to complete their actions on the restored pods. The cleanup script shown below removes these disconnected pods and scales any DeploymentConfig objects back up to the appropriate number of replicas. Example 4.1.
dc-restic-post-restore.sh dc-post-restore.sh cleanup script #!/bin/bash set -e # if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD="sha256sum" else CHECKSUM_CMD="shasum -a 256" fi label_name () { if [ "USD{#1}" -le "63" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo "USD{1:0:57}USD{sha:0:6}" } if [[ USD# -ne 1 ]]; then echo "usage: USD{BASH_SOURCE} restore-name" exit 1 fi echo "restore: USD1" label=USD(label_name USD1) echo "label: USDlabel" echo Deleting disconnected restore pods oc delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.metadata.annotations.oadp\.openshift\.io/original-replicas}{","}{.metadata.annotations.oadp\.openshift\.io/original-paused}{"\n"}') do IFS=',' read -ra dc_arr <<< "USDdc" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done 4.9.1.3. Creating restore hooks You create restore hooks to run commands in a container in a pod by editing the Restore custom resource (CR). You can create two types of restore hooks: An init hook adds an init container to a pod to perform setup tasks before the application container starts. If you restore a Restic backup, the restic-wait init container is added before the restore hook init container. An exec hook runs commands or scripts in a container of a restored pod. Procedure Add a hook to the spec.hooks block of the Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - "psql < /backup/backup.sql" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9 1 Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Currently, pods are the only supported resource that hooks can apply to. 3 Optional: This hook only applies to objects matching the label selector. 4 Optional: Timeout specifies the maximum length of time Velero waits for initContainers to complete. 5 Optional: If the container is not specified, the command runs in the first container in the pod. 6 This is the entrypoint for the init container being added. 7 Optional: How long to wait for a container to become ready. This should be long enough for the container to start and for any preceding hooks in the same container to complete. If not set, the restore process waits indefinitely. 8 Optional: How long to wait for the commands to run. The default is 30s . 9 Allowed values for error handling are Fail and Continue : Continue : Only command failures are logged. Fail : No more restore hooks run in any container in any pod. 
The status of the Restore CR will be PartiallyFailed . Important During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB, and the postHook is terminated prematurely. This happens because, during the restore operation, OpenShift controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which velero runs the FSB and the post restore hook. For more information about image stream trigger, see "Triggering updates on image stream changes". The workaround for this behavior is a two-step restore process: First, perform a restore excluding the Deployment resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --exclude-resources=deployment.apps After the first restore is successful, perform a second restore by including these resources, for example: USD velero restore create <RESTORE_NAME> \ --from-backup <BACKUP_NAME> \ --include-resources=deployment.apps Additional resources Triggering updates on image stream changes 4.10. OADP and ROSA 4.10.1. Backing up applications on ROSA clusters using OADP You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to back up and restore application data. ROSA is a fully-managed, turnkey application platform that allows you to deliver value to your customers by building and deploying applications. ROSA provides seamless integration with a wide range of Amazon Web Services (AWS) compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivery of differentiating experiences to your customers. You can subscribe to the service directly from your AWS account. After you create your clusters, you can operate your clusters with the OpenShift Container Platform web console or through Red Hat OpenShift Cluster Manager . You can also use ROSA with OpenShift APIs and command-line interface (CLI) tools. For additional information about ROSA installation, see Installing Red Hat OpenShift Service on AWS (ROSA) interactive walkthrough . Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API. This process is performed in the following two stages: Prepare AWS credentials Install the OADP Operator and give it an IAM role 4.10.1.1. Preparing AWS credentials for OADP An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation. Procedure Create the following environment variables by running the following commands: Important Change the cluster name to match your ROSA cluster, and ensure you are logged into the cluster as an administrator. Ensure that all fields are outputted correctly before continuing. 
USD export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME="USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials" export SCRATCH="/tmp/USD{CLUSTER_NAME}/oadp" mkdir -p USD{SCRATCH} echo "Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}" 1 Replace my-cluster with your ROSA cluster name. On the AWS account, create an IAM policy to allow access to AWS S3: Check to see if the policy exists by running the following command: USD POLICY_ARN=USD(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text) 1 1 Replace RosaOadpVer1 with your policy name. Enter the following command to create the policy JSON file and then create the policy in ROSA: Note If the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the if statement intentionally skips the policy creation. USD if [[ -z "USD{POLICY_ARN}" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts", "ec2:DescribeSnapshots", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name "RosaOadpVer1" \ --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn \ --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp \ --output text) fi 1 SCRATCH is a name for a temporary directory created for the environment variables.
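In addition to echoing the ARN in the next step, you can optionally confirm that it resolves to the expected policy before you continue. This is a sketch using a standard AWS CLI call; the POLICY_ARN variable is the one set earlier in this procedure:

aws iam get-policy --policy-arn ${POLICY_ARN} \
  --query 'Policy.{Name:PolicyName,Arn:Arn}' --output table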
View the policy ARN by running the following command: USD echo USD{POLICY_ARN} Create an IAM role trust policy for the cluster: Create the trust policy file by running the following command: USD cat <<EOF > USD{SCRATCH}/trust-policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_ENDPOINT}:sub": [ "system:serviceaccount:openshift-adp:openshift-adp-controller-manager", "system:serviceaccount:openshift-adp:velero"] } } }] } EOF Create the role by running the following command: USD ROLE_ARN=USD(aws iam create-role --role-name \ "USD{ROLE_NAME}" \ --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json \ --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp \ --query Role.Arn --output text) View the role ARN by running the following command: USD echo USD{ROLE_ARN} Attach the IAM policy to the IAM role by running the following command: USD aws iam attach-role-policy --role-name "USD{ROLE_NAME}" \ --policy-arn USD{POLICY_ARN} 4.10.1.2. Installing the OADP Operator and providing the IAM role AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. Red Hat OpenShift Service on AWS (ROSA) with STS is the recommended credential mode for ROSA clusters. This document describes how to install OpenShift API for Data Protection (OADP) on ROSA with AWS STS. Important Restic is unsupported. Kopia file system backup (FSB) is supported when backing up file systems that do not have Container Storage Interface (CSI) snapshotting support. Example file systems include the following: Amazon Elastic File System (EFS) Network File System (NFS) emptyDir volumes Local volumes For backing up volumes, OADP on ROSA with AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots. In an Amazon ROSA cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported. The Data Mover feature is not currently supported in ROSA clusters. You can use native AWS S3 tools for moving data. Prerequisites An OpenShift Container Platform ROSA cluster with the required access and tokens. For instructions, see the procedure Preparing AWS credentials for OADP . If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN , for each cluster. Procedure Create an OpenShift Container Platform secret from your AWS token file by entering the following commands: Create the credentials file: USD cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF 1 Replace <aws_region> with the AWS region to use for the STS endpoint. Create a namespace for OADP: USD oc create namespace openshift-adp Create the OpenShift Container Platform secret: USD oc -n openshift-adp create secret generic cloud-credentials \ --from-file=USD{SCRATCH}/credentials Note In OpenShift Container Platform versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO).
In this workflow, you do not need to create the above secret, you only need to supply the role ARN during the installation of OLM-managed operators using the OpenShift Container Platform web console, for more information see Installing from OperatorHub using the web console . The preceding secret is created automatically by CCO. Install the OADP Operator: In the OpenShift Container Platform web console, browse to Operators OperatorHub . Search for the OADP Operator . In the role_ARN field, paste the role_arn that you created previously and click Install . Create AWS cloud storage using your AWS credentials by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF Check your application's storage default storage class by entering the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h Get the storage class by running the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h Note The following storage classes will work: gp3-csi gp2-csi gp3 gp2 If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration. Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored: If you are using only CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF 1 ROSA supports internal image backup. Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The type of uploader. The possible values are restic or kopia . The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field. 
If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: "true" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF 1 ROSA supports internal image backup. Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The credentialsFile field is the mounted location of the bucket credential on the pod. 4 The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket. 5 Use the profile name set in the AWS credentials file. 6 Specify region as your AWS region. This must be the same as the cluster region. You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications . Important The enable parameter of restic is set to false in this configuration, because OADP does not support Restic in ROSA environments. If you use OADP 1.2, replace this configuration: nodeAgent: enable: false uploaderType: restic with the following configuration: restic: enable: false If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration. Additional resources Installing from OperatorHub using the web console . Backing up applications 4.10.1.3. Example: Backing up workload on OADP ROSA STS, with an optional cleanup 4.10.1.3.1. Performing a backup with OADP and ROSA STS The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) STS. Either Data Protection Application (DPA) configuration will work. Create a workload to back up by running the following commands: USD oc create namespace hello-world USD oc new-app -n hello-world --image=docker.io/openshift/hello-openshift Expose the route by running the following command: USD oc expose service/hello-openshift -n hello-world Check that the application is working by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! 
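The Backup CR in the next step references a storageLocation that OADP generated from the DPA name, in this example the name with the -dpa-1 suffix. If you are not sure of the generated name, you can list the available locations first; this assumes the default openshift-adp namespace:

oc get backupstoragelocations.velero.io -n openshift-adp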
Back up the workload by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF Wait until the backup is completed and then run the following command: USD watch "oc -n openshift-adp get backup hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:20:44Z", "expiration": "2022-10-07T22:20:22Z", "formatVersion": "1.1.0", "phase": "Completed", "progress": { "itemsBackedUp": 58, "totalItems": 58 }, "startTimestamp": "2022-09-07T22:20:22Z", "version": 1 } Delete the demo workload by running the following command: USD oc delete ns hello-world Restore the workload from the backup by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF Wait for the Restore to finish by running the following command: USD watch "oc -n openshift-adp get restore hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:25:47Z", "phase": "Completed", "progress": { "itemsRestored": 38, "totalItems": 38 }, "startTimestamp": "2022-09-07T22:25:28Z", "warnings": 9 } Check that the workload is restored by running the following command: USD oc -n hello-world get pods Example output NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s Check the JSONPath by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Note For troubleshooting tips, see the OADP team's troubleshooting documentation . 4.10.1.3.2. Cleaning up a cluster after a backup with OADP and ROSA STS If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions. 
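Before you run the cleanup, you might want to list the OADP resources that the following procedure removes, so that you can confirm nothing unexpected is deleted. A minimal sketch, assuming the resources were created in the openshift-adp namespace as in this example:

oc -n openshift-adp get dpa,cloudstorage,backups.velero.io,restores.velero.io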
Procedure Delete the workload by running the following command: USD oc delete ns hello-world Delete the Data Protection Application (DPA) by running the following command: USD oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa Delete the cloud storage by running the following command: USD oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp Warning If this command hangs, you might need to delete the finalizer by running the following command: USD oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge If the Operator is no longer required, remove it by running the following command: USD oc -n openshift-adp delete subscription oadp-operator Remove the namespace from the Operator: USD oc delete ns openshift-adp If the backup and restore resources are no longer required, remove them from the cluster by running the following command: USD oc delete backups.velero.io hello-world To delete backup, restore and remote objects in AWS S3 run the following command: USD velero backup delete hello-world If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command: USD for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done Delete the AWS S3 bucket by running the following commands: USD aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive USD aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp Detach the policy from the role by running the following command: USD aws iam detach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn "USD{POLICY_ARN}" Delete the role by running the following command: USD aws iam delete-role --role-name "USD{ROLE_NAME}" 4.11. OADP and AWS STS 4.11.1. Backing up applications on AWS STS using OADP You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) by installing the OADP Operator. The Operator installs Velero 1.14 . Note Starting from OADP 1.0.4, all OADP 1.0. z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator. You configure AWS for Velero, create a default Secret , and then install the Data Protection Application. For more details, see Installing the OADP Operator . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. You can install OADP on an AWS Security Token Service (STS) (AWS STS) cluster manually. Amazon AWS provides AWS STS as a web service that enables you to request temporary, limited-privilege credentials for users. You use STS to provide trusted users with temporary access to resources via API calls, your AWS console, or the AWS command line interface (CLI). Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API. This process is performed in the following two stages: Prepare AWS credentials. Install the OADP Operator and give it an IAM role. 4.11.1.1. Preparing AWS STS credentials for OADP An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation. Prepare the AWS credentials by using the following procedure. 
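The procedure below runs both oc and aws commands against the target cluster and AWS account. Before you begin, you can confirm that both CLIs are authenticated; a quick check could look like the following:

# Confirm that you are logged in to the cluster with sufficient privileges
oc whoami

# Confirm that the AWS CLI is using the intended AWS account
aws sts get-caller-identity --query Account --output text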
Procedure Define the cluster_name environment variable by running the following command: USD export CLUSTER_NAME= <AWS_cluster_name> 1 1 The variable can be set to any value. Retrieve all of the details of the cluster such as the AWS_ACCOUNT_ID, OIDC_ENDPOINT by running the following command: USD export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME="USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials" Create a temporary directory to store all of the files by running the following command: USD export SCRATCH="/tmp/USD{CLUSTER_NAME}/oadp" mkdir -p USD{SCRATCH} Display all of the gathered details by running the following command: USD echo "Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}" On the AWS account, create an IAM policy to allow access to AWS S3: Check to see if the policy exists by running the following commands: USD export POLICY_NAME="OadpVer1" 1 1 The variable can be set to any value. USD POLICY_ARN=USD(aws iam list-policies --query "Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}" --output text) Enter the following command to create the policy JSON file and then create the policy: Note If the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the if statement intentionally skips the policy creation. USD if [[ -z "USD{POLICY_ARN}" ]]; then cat << EOF > USD{SCRATCH}/policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts", "ec2:DescribeSnapshots", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME \ --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn \ --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp \ --output text) 1 fi 1 SCRATCH is a name for a temporary directory created for storing the files. 
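If the policy already existed and the creation step was skipped, you can optionally confirm that the POLICY_ARN variable resolves to a policy with the expected contents. This check is only a sketch and is not required by the procedure: USD aws iam get-policy --policy-arn "USD{POLICY_ARN}" --query 'Policy.{Name:PolicyName,Version:DefaultVersionId}' --output table To inspect the attached statements, pass the reported DefaultVersionId to the aws iam get-policy-version command.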
View the policy ARN by running the following command: USD echo USD{POLICY_ARN} Create an IAM role trust policy for the cluster: Create the trust policy file by running the following command: USD cat <<EOF > USD{SCRATCH}/trust-policy.json { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_ENDPOINT}:sub": [ "system:serviceaccount:openshift-adp:openshift-adp-controller-manager", "system:serviceaccount:openshift-adp:velero"] } } }] } EOF Create an IAM role trust policy for the cluster by running the following command: USD ROLE_ARN=USD(aws iam create-role --role-name \ "USD{ROLE_NAME}" \ --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json \ --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text) View the role ARN by running the following command: USD echo USD{ROLE_ARN} Attach the IAM policy to the IAM role by running the following command: USD aws iam attach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn USD{POLICY_ARN} 4.11.1.1.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. Note Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly. 4.11.1.2. Installing the OADP Operator and providing the IAM role AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. This document describes how to install OpenShift API for Data Protection (OADP) on an AWS STS cluster manually. Important Restic and Kopia are not supported in the OADP AWS STS environment. Verify that the Restic and Kopia node agent is disabled. For backing up volumes, OADP on AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots. In an AWS cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported. The Data Mover feature is not currently supported in AWS STS clusters. You can use native AWS S3 tools for moving data. Prerequisites An OpenShift Container Platform AWS STS cluster with the required access and tokens. For instructions, see the procedure Preparing AWS credentials for OADP . If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN , for each cluster. 
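The installation procedure that follows reuses the CLUSTER_NAME, SCRATCH, ROLE_ARN, and REGION shell variables that were set during the credential preparation. If you are continuing in a new terminal session, set them again first; a minimal sanity check that prints an error if any of them is unset: USD echo "USD{CLUSTER_NAME:?}" "USD{SCRATCH:?}" "USD{ROLE_ARN:?}" "USD{REGION:?}"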
Procedure Create an OpenShift Container Platform secret from your AWS token file by entering the following commands: Create the credentials file: USD cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF Create a namespace for OADP: USD oc create namespace openshift-adp Create the OpenShift Container Platform secret: USD oc -n openshift-adp create secret generic cloud-credentials \ --from-file=USD{SCRATCH}/credentials Note In OpenShift Container Platform versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO). In this workflow, you do not need to create the above secret, you only need to supply the role ARN during the installation of OLM-managed operators using the OpenShift Container Platform web console, for more information see Installing from OperatorHub using the web console . The preceding secret is created automatically by CCO. Install the OADP Operator: In the OpenShift Container Platform web console, browse to Operators OperatorHub . Search for the OADP Operator . In the role_ARN field, paste the role_arn that you created previously and click Install . Create AWS cloud storage using your AWS credentials by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF Check your application's storage default storage class by entering the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h Get the storage class by running the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h Note The following storage classes will work: gp3-csi gp2-csi gp3 gp2 If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration. Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored: If you are using only CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF 1 Set this field to false if you do not want to use image backup. 
If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command: USD cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: "true" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF 1 Set this field to false if you do not want to use image backup. 2 See the important note regarding the nodeAgent attribute. 3 The credentialsFile field is the mounted location of the bucket credential on the pod. 4 The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket. 5 Use the profile name set in the AWS credentials file. 6 Specify region as your AWS region. This must be the same as the cluster region. You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications . Important If you use OADP 1.2, replace this configuration: nodeAgent: enable: false uploaderType: restic with the following configuration: restic: enable: false If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration. Additional resources Installing from OperatorHub using the web console Backing up applications 4.11.1.3. Backing up workload on OADP AWS STS, with an optional cleanup 4.11.1.3.1. Performing a backup with OADP and AWS STS The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) (AWS STS). Either Data Protection Application (DPA) configuration will work. Create a workload to back up by running the following commands: USD oc create namespace hello-world USD oc new-app -n hello-world --image=docker.io/openshift/hello-openshift Expose the route by running the following command: USD oc expose service/hello-openshift -n hello-world Check that the application is working by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! 
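Before you create the Backup resource, you can optionally confirm that the Data Protection Application has reconciled and that the backup storage location referenced in the next step is available: USD oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io USD oc -n openshift-adp get backupstoragelocations.velero.io The backup storage location named USD{CLUSTER_NAME}-dpa-1 , which the Backup in the next step references, should report the Available phase before you continue.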
Back up the workload by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF Wait until the backup has completed and then run the following command: USD watch "oc -n openshift-adp get backup hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:20:44Z", "expiration": "2022-10-07T22:20:22Z", "formatVersion": "1.1.0", "phase": "Completed", "progress": { "itemsBackedUp": 58, "totalItems": 58 }, "startTimestamp": "2022-09-07T22:20:22Z", "version": 1 } Delete the demo workload by running the following command: USD oc delete ns hello-world Restore the workload from the backup by running the following command: USD cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF Wait for the Restore to finish by running the following command: USD watch "oc -n openshift-adp get restore hello-world -o json | jq .status" Example output { "completionTimestamp": "2022-09-07T22:25:47Z", "phase": "Completed", "progress": { "itemsRestored": 38, "totalItems": 38 }, "startTimestamp": "2022-09-07T22:25:28Z", "warnings": 9 } Check that the workload is restored by running the following command: USD oc -n hello-world get pods Example output NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s Check the JSONPath by running the following command: USD curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'` Example output Hello OpenShift! Note For troubleshooting tips, see the OADP team's troubleshooting documentation . 4.11.1.3.2. Cleaning up a cluster after a backup with OADP and AWS STS If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions. 
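If you want to see what the cleanup is going to remove, you can first list the contents of the bucket and the backup resources that still exist on the cluster. An optional check, assuming the bucket name used in this example: USD aws s3 ls s3://USD{CLUSTER_NAME}-oadp --recursive | head -n 20 USD oc -n openshift-adp get backups.velero.io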
Procedure Delete the workload by running the following command: USD oc delete ns hello-world Delete the Data Protection Application (DPA) by running the following command: USD oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa Delete the cloud storage by running the following command: USD oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp Important If this command hangs, you might need to delete the finalizer by running the following command: USD oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge If the Operator is no longer required, remove it by running the following command: USD oc -n openshift-adp delete subscription oadp-operator Remove the namespace from the Operator by running the following command: USD oc delete ns openshift-adp If the backup and restore resources are no longer required, remove them from the cluster by running the following command: USD oc delete backups.velero.io hello-world To delete backup, restore and remote objects in AWS S3, run the following command: USD velero backup delete hello-world If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command: USD for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done Delete the AWS S3 bucket by running the following commands: USD aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive USD aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp Detach the policy from the role by running the following command: USD aws iam detach-role-policy --role-name "USD{ROLE_NAME}" --policy-arn "USD{POLICY_ARN}" Delete the role by running the following command: USD aws iam delete-role --role-name "USD{ROLE_NAME}" 4.12. OADP Data Mover 4.12.1. About the OADP Data Mover OpenShift API for Data Protection (OADP) includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and write to the unified repository. OADP supports CSI snapshots on the following: Red Hat OpenShift Data Foundation Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API 4.12.1.1. Data Mover support The OADP built-in Data Mover, which was introduced in OADP 1.3 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads. Supported The Data Mover backups taken with OADP 1.3 can be restored using OADP 1.3, 1.4, and later. This is supported. Not supported Backups taken with OADP 1.1 or OADP 1.2 using the Data Mover feature cannot be restored using OADP 1.3 and later. Therefore, it is not supported. OADP 1.1 and OADP 1.2 are no longer supported. The DataMover feature in OADP 1.1 or OADP 1.2 was a Technology Preview and was never supported. DataMover backups taken with OADP 1.1 or OADP 1.2 cannot be restored on later versions of OADP. 4.12.1.2. Enabling the built-in Data Mover To enable the built-in Data Mover, you must include the CSI plugin and enable the node agent in the DataProtectionApplication custom resource (CR). The node agent is a Kubernetes daemonset that hosts data movement modules. These include the Data Mover controller, uploader, and the repository.
Example DataProtectionApplication manifest apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI # ... 1 The flag to enable the node agent. 2 The type of uploader. The possible values are restic or kopia . The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field. 3 The CSI plugin included in the list of default plugins. 4 In OADP 1.3.1 and later, set to true if you use Data Mover only for volumes that opt out of fs-backup . Set to false if you use Data Mover by default for volumes. 4.12.1.3. Built-in Data Mover controller and custom resource definitions (CRDs) The built-in Data Mover feature introduces three new API objects defined as CRDs for managing backup and restore: DataDownload : Represents a data download of a volume snapshot. The CSI plugin creates one DataDownload object per volume to be restored. The DataDownload CR includes information about the target volume, the specified Data Mover, the progress of the current data download, the specified backup repository, and the result of the current data download after the process is complete. DataUpload : Represents a data upload of a volume snapshot. The CSI plugin creates one DataUpload object per CSI snapshot. The DataUpload CR includes information about the specified snapshot, the specified Data Mover, the specified backup repository, the progress of the current data upload, and the result of the current data upload after the process is complete. BackupRepository : Represents and manages the lifecycle of the backup repositories. OADP creates a backup repository per namespace when the first CSI snapshot backup or restore for a namespace is requested. 4.12.1.4. About incremental back up support OADP supports incremental backups of block and Filesystem persistent volumes for both containerized, and OpenShift Virtualization workloads. The following table summarizes the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover: Table 4.6. OADP backup support matrix for containerized workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem S [1] , I [2] S [1] , I [2] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Table 4.7. OADP backup support matrix for OpenShift Virtualization workloads Volume mode FSB - Restic FSB - Kopia CSI CSI Data Mover Filesystem N [3] N [3] S [1] S [1] , I [2] Block N [3] N [3] S [1] S [1] , I [2] Backup supported Incremental backup supported Not supported Note The CSI Data Mover backups use Kopia regardless of uploaderType . 4.12.2. Backing up and restoring CSI snapshots data movement You can back up and restore persistent volumes by using the OADP 1.3 Data Mover. 4.12.2.1. Backing up persistent volumes with CSI snapshots You can use the OADP Data Mover to back up Container Storage Interface (CSI) volume snapshots to a remote object store. Prerequisites You have access to the cluster with the cluster-admin role. You have installed the OADP Operator. You have included the CSI plugin and enabled the node agent in the DataProtectionApplication custom resource (CR). You have an application with persistent volumes running in a separate namespace. 
You have added the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR. Procedure Create a YAML file for the Backup object, as in the following example: Example Backup CR kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s 3 volumeSnapshotLocations: - dpa-sample-1 # ... 1 Set to true if you use Data Mover only for volumes that opt out of fs-backup . Set to false if you use Data Mover by default for volumes. 2 Set to true to enable movement of CSI snapshots to remote object storage. 3 The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. But storing this data consumes more space in the target backup locations. An additional storage is consumed with frequent backups, which are created even before other unexpired completed backups might have timed out. Note If you format the volume by using XFS filesystem and the volume is at 100% capacity, the backup fails with a no space left on device error. For example: Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ \ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ \ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: \ no space left on device In this scenario, consider resizing the volume or using a different filesystem type, for example, ext4 , so that the backup completes successfully. Apply the manifest: USD oc create -f backup.yaml A DataUpload CR is created after the snapshot creation is complete. Verification Verify that the snapshot data is successfully transferred to the remote object store by monitoring the status.phase field of the DataUpload CR. Possible values are In Progress , Completed , Failed , or Canceled . The object store is configured in the backupLocations stanza of the DataProtectionApplication CR. 
Run the following command to get a list of all DataUpload objects: USD oc get datauploads -A Example output NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal Check the value of the status.phase field of the specific DataUpload object by running the following command: USD oc get datauploads <dataupload_name> -o yaml Example output apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: "" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: "2023-11-02T16:57:02Z" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: "2023-11-02T16:56:22Z" 1 Indicates that snapshot data is successfully transferred to the remote object store. 4.12.2.2. Restoring CSI volume snapshots You can restore a volume snapshot by creating a Restore CR. Note You cannot restore Volsync backups from OADP 1.2 with the OAPD 1.3 built-in Data Mover. It is recommended to do a file system backup of all of your workloads with Restic prior to upgrading to OADP 1.3. Prerequisites You have access to the cluster with the cluster-admin role. You have an OADP Backup CR from which to restore the data. Procedure Create a YAML file for the Restore CR, as in the following example: Example Restore CR apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup> # ... Apply the manifest: USD oc create -f restore.yaml A DataDownload CR is created when the restore starts. Verification You can monitor the status of the restore process by checking the status.phase field of the DataDownload CR. Possible values are In Progress , Completed , Failed , or Canceled . To get a list of all DataDownload objects, run the following command: USD oc get datadownloads -A Example output NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal Enter the following command to check the value of the status.phase field of the specific DataDownload object: USD oc get datadownloads <datadownload_name> -o yaml Example output apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: "" pvc: mysql status: completionTimestamp: "2023-11-02T17:01:24Z" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: "2023-11-02T17:00:52Z" 1 Indicates that the CSI snapshot data is successfully restored. 4.12.2.3. 
Deletion policy for OADP 1.3 The deletion policy determines rules for removing data from a system, specifying when and how deletion occurs based on factors such as retention periods, data sensitivity, and compliance requirements. It manages data removal effectively while meeting regulations and preserving valuable information. 4.12.2.3.1. Deletion policy guidelines for OADP 1.3 Review the following deletion policy guidelines for the OADP 1.3: In OADP 1.3.x, when using any type of backup and restore methods, you can set the deletionPolicy field to Retain or Delete in the VolumeSnapshotClass custom resource (CR). 4.12.3. Overriding Kopia hashing, encryption, and splitter algorithms You can override the default values of Kopia hashing, encryption, and splitter algorithms by using specific environment variables in the Data Protection Application (DPA). 4.12.3.1. Configuring the DPA to override Kopia hashing, encryption, and splitter algorithms You can use an OpenShift API for Data Protection (OADP) option to override the default Kopia algorithms for hashing, encryption, and splitter to improve Kopia performance or to compare performance metrics. You can set the following environment variables in the spec.configuration.velero.podConfig.env section of the DPA: KOPIA_HASHING_ALGORITHM KOPIA_ENCRYPTION_ALGORITHM KOPIA_SPLITTER_ALGORITHM Prerequisites You have installed the OADP Operator. You have created the secret by using the credentials provided by the cloud provider. Note The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) apply only during the initial Kopia repository creation, and cannot be changed later. To use different Kopia algorithms, ensure that the object storage does not contain any Kopia repositories of backups. Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration. Procedure Configure the DPA with the environment variables for hashing, encryption, and splitter as shown in the following example. Example DPA apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication #... configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6 1 Enable the nodeAgent . 2 Specify the uploaderType as kopia . 3 Include the csi plugin. 4 Specify a hashing algorithm. For example, BLAKE3-256 . 5 Specify an encryption algorithm. For example, CHACHA20-POLY1305-HMAC-SHA256 . 6 Specify a splitter algorithm. For example, DYNAMIC-8M-RABINKARP . 4.12.3.2. Use case for overriding Kopia hashing, encryption, and splitter algorithms The use case example demonstrates taking a backup of an application by using Kopia environment variables for hashing, encryption, and splitter. You store the backup in an AWS S3 bucket. You then verify the environment variables by connecting to the Kopia repository. Prerequisites You have installed the OADP Operator. You have an AWS S3 bucket configured as the backup storage location. You have created the secret by using the credentials provided by the cloud provider. You have installed the Kopia client. You have an application with persistent volumes running in a separate namespace. 
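The verification at the end of this use case runs the kopia CLI from your workstation, and the algorithms configured in the DPA take effect only when a new Kopia repository is created. Two optional checks before you start, assuming the bucket name and velero prefix used in this example: USD kopia --version USD aws s3 ls s3://<bucket_name>/velero/kopia/ The first command confirms that the Kopia client is installed; the second should return no objects, which indicates that no existing repository will override the configured algorithms.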
Procedure Configure the Data Protection Application (DPA) as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8 1 Specify a name for the DPA. 2 Specify the region for the backup storage location. 3 Specify the name of the default Secret object. 4 Specify the AWS S3 bucket name. 5 Include the csi plugin. 6 Specify the hashing algorithm as BLAKE3-256 . 7 Specify the encryption algorithm as CHACHA20-POLY1305-HMAC-SHA256 . 8 Specify the splitter algorithm as DYNAMIC-8M-RABINKARP . Create the DPA by running the following command: USD oc create -f <dpa_file_name> 1 1 Specify the file name of the DPA you configured. Verify that the DPA has reconciled by running the following command: USD oc get dpa -o yaml Create a backup CR as shown in the following example: Example backup CR apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true 1 Specify the namespace for the application installed in the cluster. Create a backup by running the following command: USD oc apply -f <backup_file_name> 1 1 Specify the name of the backup CR file. Verify that the backup completed by running the following command: USD oc get backups.velero.io <backup_name> -o yaml 1 1 Specify the name of the backup. Verification Connect to the Kopia repository by running the following command: USD kopia repository connect s3 \ --bucket=<bucket_name> \ 1 --prefix=velero/kopia/<application_namespace> \ 2 --password=static-passw0rd \ 3 --access-key="<aws_s3_access_key>" \ 4 --secret-access-key="<aws_s3_secret_access_key>" \ 5 1 Specify the AWS S3 bucket name. 2 Specify the namespace for the application. 3 This is the Kopia password to connect to the repository. 4 Specify the AWS S3 access key. 5 Specify the AWS S3 storage provider secret access key. Note If you are using a storage provider other than AWS S3, you will need to add --endpoint , the bucket endpoint URL parameter, to the command. Verify that Kopia uses the environment variables that are configured in the DPA for the backup by running the following command: USD kopia repository status Example output Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> # ... Storage type: s3 Storage capacity: unbounded Storage config: { "bucket": <bucket_name>, "prefix": "velero/kopia/<application_namespace>/", "endpoint": "s3.amazonaws.com", "accessKeyID": <access_key>, "secretAccessKey": "****************************************", "sessionToken": "" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3 # ... 4.12.3.3. Benchmarking Kopia hashing, encryption, and splitter algorithms You can run Kopia commands to benchmark the hashing, encryption, and splitter algorithms. 
Based on the benchmarking results, you can select the most suitable algorithm for your workload. In this procedure, you run the Kopia benchmarking commands from a pod on the cluster. The benchmarking results can vary depending on CPU speed, available RAM, disk speed, current I/O load, and so on. Prerequisites You have installed the OADP Operator. You have an application with persistent volumes running in a separate namespace. You have run a backup of the application with Container Storage Interface (CSI) snapshots. Note The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) apply only during the initial Kopia repository creation, and cannot be changed later. To use different Kopia algorithms, ensure that the object storage does not contain any Kopia repositories of backups. Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration. Procedure Configure the must-gather pod as shown in the following example. Make sure you are using the oadp-mustgather image for OADP version 1.3 and later. Example pod configuration apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: ["sleep"] args: ["infinity"] Note The Kopia client is available in the oadp-mustgather image. Create the pod by running the following command: USD oc apply -f <pod_config_file_name> 1 1 Specify the name of the YAML file for the pod configuration. Verify that the Security Context Constraints (SCC) value on the pod is anyuid , so that Kopia can connect to the repository. USD oc describe pod/oadp-mustgather-pod | grep scc Example output openshift.io/scc: anyuid Open a remote shell in the pod by running the following command: USD oc -n openshift-adp rsh pod/oadp-mustgather-pod Connect to the Kopia repository by running the following command: sh-5.1# kopia repository connect s3 \ --bucket=<bucket_name> \ 1 --prefix=velero/kopia/<application_namespace> \ 2 --password=static-passw0rd \ 3 --access-key="<access_key>" \ 4 --secret-access-key="<secret_access_key>" \ 5 --endpoint=<bucket_endpoint> \ 6 1 Specify the object storage provider bucket name. 2 Specify the namespace for the application. 3 This is the Kopia password to connect to the repository. 4 Specify the object storage provider access key. 5 Specify the object storage provider secret access key. 6 Specify the bucket endpoint. You do not need to specify the bucket endpoint if you are using AWS S3 as the storage provider. Note This is an example command. The command can vary based on the object storage provider.
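Before running the benchmarks, you can confirm that the repository connection from the previous step succeeded: sh-5.1# kopia repository status The output lists the storage configuration and the algorithms currently in use. The benchmark commands in the following steps measure the CPU of the pod they run in rather than the object storage connection.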
To benchmark the hashing algorithm, run the following command: sh-5.1# kopia benchmark hashing Example output Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256 To benchmark the encryption algorithm, run the following command: sh-5.1# kopia benchmark encryption Example output Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256 To benchmark the splitter algorithm, run the following command: sh-5.1# kopia benchmark splitter Example output splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 # ... FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # ... 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 4.13. Troubleshooting You can debug Velero custom resources (CRs) by using the OpenShift CLI tool or the Velero CLI tool . The Velero CLI tool provides more detailed logs and information. You can check installation issues , backup and restore CR issues , and Restic issues . 
You can collect logs and CR information by using the must-gather tool . You can obtain the Velero CLI tool by: Downloading the Velero CLI tool Accessing the Velero binary in the Velero deployment in the cluster 4.13.1. Downloading the Velero CLI tool You can download and install the Velero CLI tool by following the instructions on the Velero documentation page . The page includes instructions for: macOS by using Homebrew GitHub Windows by using Chocolatey Prerequisites You have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. You have installed kubectl locally. Procedure Open a browser and navigate to "Install the CLI" on the Velero website . Follow the appropriate procedure for macOS, GitHub, or Windows. Download the Velero version appropriate for your version of OADP and OpenShift Container Platform. 4.13.1.1. OADP-Velero-OpenShift Container Platform version relationship OADP version Velero version OpenShift Container Platform version 1.1.0 1.9 4.9 and later 1.1.1 1.9 4.9 and later 1.1.2 1.9 4.9 and later 1.1.3 1.9 4.9 and later 1.1.4 1.9 4.9 and later 1.1.5 1.9 4.9 and later 1.1.6 1.9 4.11 and later 1.1.7 1.9 4.11 and later 1.2.0 1.11 4.11 and later 1.2.1 1.11 4.11 and later 1.2.2 1.11 4.11 and later 1.2.3 1.11 4.11 and later 1.3.0 1.12 4.10 - 4.15 1.3.1 1.12 4.10 - 4.15 1.3.2 1.12 4.10 - 4.15 1.3.3 1.12 4.10 - 4.15 1.4.0 1.14 4.14 and later 1.4.1 1.14 4.14 and later 4.13.2. Accessing the Velero binary in the Velero deployment in the cluster You can use a shell command to access the Velero binary in the Velero deployment in the cluster. Prerequisites Your DataProtectionApplication custom resource has a status of Reconcile complete . Procedure Enter the following command to set the needed alias: USD alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero' 4.13.3. Debugging Velero resources with the OpenShift CLI tool You can debug a failed backup or restore by checking Velero custom resources (CRs) and the Velero pod log with the OpenShift CLI tool. Velero CRs Use the oc describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc describe <velero_cr> <cr_name> Velero pod logs Use the oc logs command to retrieve the Velero pod logs: USD oc logs pod/<velero> Velero pod debug logs You can specify the Velero log level in the DataProtectionApplication resource as shown in the following example. Note This option is available starting from OADP 1.0.3. apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning The following logLevel values are available: trace debug info warning error fatal panic It is recommended to use debug for most logs. 4.13.4. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. 
Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql The following types of restore errors and warnings are shown in the output of a velero describe request: Velero : A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on Cluster : A list of messages related to backing up or restoring cluster-scoped resources Namespaces : A list of messages related to backing up or restoring resources stored in namespaces One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed . Warnings do not lead to a change in the completion status. Important For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster. If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads . If none of these are failed or still running, then volume data might have been fully restored. Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-adp exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 4.13.5. Pods crash or restart due to lack of memory or CPU If a Velero or Restic pod crashes due to a lack of memory or CPU, you can set specific resource requests for either of those resources. Additional resources CPU and memory requirements 4.13.5.1. Setting resource requests for a Velero pod You can use the configuration.velero.podConfig.resourceAllocations specification field in the oadp_v1alpha1_dpa.yaml file to set specific resource requests for a Velero pod. Procedure Set the cpu and memory resource requests in the YAML file: Example Velero file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi 1 The resourceAllocations listed are for average usage. 4.13.5.2.
Setting resource requests for a Restic pod You can use the configuration.restic.podConfig.resourceAllocations specification field to set specific resource requests for a Restic pod. Procedure Set the cpu and memory resource requests in the YAML file: Example Restic file apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi 1 The resourceAllocations listed are for average usage. Important The values for the resource request fields must follow the same format as Kubernetes resource requirements. Also, if you do not specify configuration.velero.podConfig.resourceAllocations or configuration.restic.podConfig.resourceAllocations , the default resources specification for a Velero pod or a Restic pod is as follows: requests: cpu: 500m memory: 128Mi 4.13.6. PodVolumeRestore fails to complete when StorageClass is NFS The restore operation fails when there is more than one volume during a NFS restore by using Restic or Kopia . PodVolumeRestore either fails with the following error or keeps trying to restore before finally failing. Error message Velero: pod volume restore failed: data path restore failed: \ Failed to run kopia restore: Failed to copy snapshot data to the target: \ restore error: copy file: error creating file: \ open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: \ no such file or directory Cause The NFS mount path is not unique for the two volumes to restore. As a result, the velero lock files use the same file on the NFS server during the restore, causing the PodVolumeRestore to fail. Solution You can resolve this issue by setting up a unique pathPattern for each volume, while defining the StorageClass for nfs-subdir-external-provisioner in the deploy/class.yaml file. Use the following nfs-subdir-external-provisioner StorageClass example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: "USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}" 1 onDelete: delete 1 Specifies a template for creating a directory path by using PVC metadata such as labels, annotations, name, or namespace. To specify metadata, use USD{.PVC.<metadata>} . For example, to name a folder: <pvc-namespace>-<pvc-name> , use USD{.PVC.namespace}-USD{.PVC.name} as pathPattern . 4.13.7. Issues with Velero and admission webhooks Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plugin or make changes to how you restore the workload. Typically, workloads with admission webhooks require you to create a resource of a specific kind first. This is especially true if your workload has child resources because admission webhooks typically block child resources. For example, creating or restoring a top-level object such as service.serving.knative.dev typically creates child resources automatically. If you do this first, you will not need to use Velero to create and restore these resources. This avoids the problem of child resources being blocked by an admission webhook that Velero might use. 4.13.7.1. Restoring workarounds for Velero backups that use admission webhooks This section describes the additional steps required to restore resources for several types of Velero backups that use admission webhooks. 4.13.7.1.1. 
Restoring Knative resources You might encounter problems using Velero to back up Knative resources that use admission webhooks. You can avoid such problems by restoring the top level Service resource first whenever you back up and restore Knative resources that use admission webhooks. Procedure Restore the top level service.serving.knative.dev Service resource: USD velero restore <restore_name> \ --from-backup=<backup_name> --include-resources \ service.serving.knative.dev 4.13.7.1.2. Restoring IBM AppConnect resources If you experience issues when you use Velero to restore an IBM AppConnect resource that has an admission webhook, you can run the checks in this procedure. Procedure Check if you have any mutating admission plugins of kind: MutatingWebhookConfiguration in the cluster: USD oc get mutatingwebhookconfigurations Examine the YAML file of each kind: MutatingWebhookConfiguration to ensure that none of its rules block creation of the objects that are experiencing issues. For more information, see the official Kubernetes documentation . Check that any spec.version in type: Configuration.appconnect.ibm.com/v1beta1 used at backup time is supported by the installed Operator. 4.13.7.2. OADP plugins known issues The following section describes known issues in OpenShift API for Data Protection (OADP) plugins: 4.13.7.2.1. Velero plugin panics during imagestream backups due to a missing secret When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, that is, the DPA reconciliation, does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret . When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error: 2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item" backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94... 4.13.7.2.1.1. Workaround to avoid the panic error To avoid the Velero plugin panic error, perform the following steps: Label the custom BSL with the relevant label: USD oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl After the BSL is labeled, wait until the DPA reconciles. Note You can force the reconciliation by making any minor change to the DPA itself. When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it: USD oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data' 4.13.7.2.2. OpenShift ADP Controller segmentation fault If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault. You can have either velero or cloudstorage defined, because they are mutually exclusive fields. If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails. If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails. For more information about this issue, see OADP-1054 . 4.13.7.2.2.1.
OpenShift ADP Controller segmentation fault workaround You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault. 4.13.7.3. Velero plugins returning "received EOF, stopping recv loop" message Note Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred. Additional resources Admission plugins Webhook admission plugins Types of webhook admission plugins 4.13.8. Installation issues You might encounter issues caused by using invalid directories or incorrect credentials when you install the Data Protection Application. 4.13.8.1. Backup storage contains invalid directories The Velero pod log displays the error message, Backup storage contains invalid top-level directories . Cause The object storage contains top-level directories that are not Velero directories. Solution If the object storage is not dedicated to Velero, you must specify a prefix for the bucket by setting the spec.backupLocations.velero.objectStorage.prefix parameter in the DataProtectionApplication manifest. 4.13.8.2. Incorrect AWS credentials The oadp-aws-registry pod log displays the error message, InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records. The Velero pod log displays the error message, NoCredentialProviders: no valid providers in chain . Cause The credentials-velero file used to create the Secret object is incorrectly formatted. Solution Ensure that the credentials-velero file is correctly formatted, as in the following example: Example credentials-velero file [default] 1 aws_access_key_id=<AWS_ACCESS_KEY_ID> 2 aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> 1 AWS default profile. 2 Do not enclose the values with quotation marks ( " , ' ). 4.13.9. OADP Operator issues The OpenShift API for Data Protection (OADP) Operator might encounter issues caused by problems it is not able to resolve. 4.13.9.1. OADP Operator fails silently The S3 buckets of an OADP Operator might be empty, but when you run the command oc get po -n <OADP_Operator_namespace> , you see that the Operator has a status of Running . In such a case, the Operator is said to have failed silently because it incorrectly reports that it is running. Cause The problem is caused when cloud credentials provide insufficient permissions. Solution Retrieve a list of backup storage locations (BSLs) and check the manifest of each BSL for credential issues. Procedure Run one of the following commands to retrieve a list of BSLs: Using the OpenShift CLI: USD oc get backupstoragelocations.velero.io -A Using the Velero CLI: USD velero backup-location get -n <OADP_Operator_namespace> Using the list of BSLs, run the following command to display the manifest of each BSL, and examine each manifest for an error.
USD oc get backupstoragelocations.velero.io -n <namespace> -o yaml Example result apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: "2023-11-03T19:49:04Z" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: "24273698" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: "true" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: "2023-11-10T22:06:46Z" message: "BackupStorageLocation \"example-dpa-1\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54" phase: Unavailable kind: List metadata: resourceVersion: "" 4.13.10. OADP timeouts Extending a timeout allows complex or resource-intensive processes to complete successfully without premature termination. This configuration can reduce the likelihood of errors, retries, or failures. Ensure that you balance timeout extensions in a logical manner so that you do not configure excessively long timeouts that might hide underlying issues in the process. Carefully consider and monitor an appropriate timeout value that meets the needs of the process and the overall system performance. The following are various OADP timeouts, with instructions of how and when to implement these parameters: 4.13.10.1. Restic timeout The spec.configuration.nodeAgent.timeout parameter defines the Restic timeout. The default value is 1h . Use the Restic timeout parameter in the nodeAgent section for the following scenarios: For Restic backups with total PV data usage that is greater than 500GB. If backups are timing out with the following error: level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" Procedure Edit the values in the spec.configuration.nodeAgent.timeout block of the DataProtectionApplication custom resource (CR) manifest, as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h # ... 4.13.10.2. Velero resource timeout resourceTimeout defines how long to wait for several Velero resources before timeout occurs, such as Velero custom resource definition (CRD) availability, volumeSnapshot deletion, and repository availability. The default is 10m . Use the resourceTimeout for the following scenarios: For backups with total PV data usage that is greater than 1TB. This parameter is used as a timeout value when Velero tries to clean up or delete the Container Storage Interface (CSI) snapshots, before marking the backup as complete. A sub-task of this cleanup tries to patch VSC and this timeout can be used for that task. To create or ensure a backup repository is ready for filesystem based backups for Restic or Kopia. To check if the Velero CRD is available in the cluster before restoring the custom resource (CR) or resource from the backup. 
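For example, if restores are waiting on Velero custom resource definitions, you can confirm that the Velero CRDs are registered on the cluster before you decide to extend this timeout. The following check is a minimal sketch and is not part of the documented procedure; it only lists the CRDs in the velero.io API group: USD oc get crds | grep velero.io If CRDs such as backups.velero.io and restores.velero.io are listed, a missing CRD is unlikely to be the cause of the delay, and a longer resourceTimeout is probably not required.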
Procedure Edit the values in the spec.configuration.velero.resourceTimeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m # ... 4.13.10.3. Data Mover timeout timeout is a user-supplied timeout to complete VolumeSnapshotBackup and VolumeSnapshotRestore . The default value is 10m . Use the Data Mover timeout for the following scenarios: If creation of VolumeSnapshotBackups (VSBs) and VolumeSnapshotRestores (VSRs), times out after 10 minutes. For large scale environments with total PV data usage that is greater than 500GB. Set the timeout for 1h . With the VolumeSnapshotMover (VSM) plugin. Only with OADP 1.1.x. Procedure Edit the values in the spec.features.dataMover.timeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m # ... 4.13.10.4. CSI snapshot timeout CSISnapshotTimeout specifies the time during creation to wait until the CSI VolumeSnapshot status becomes ReadyToUse , before returning error as timeout. The default value is 10m . Use the CSISnapshotTimeout for the following scenarios: With the CSI plugin. For very large storage volumes that may take longer than 10 minutes to snapshot. Adjust this timeout if timeouts are found in the logs. Note Typically, the default value for CSISnapshotTimeout does not require adjustment, because the default setting can accommodate large storage volumes. Procedure Edit the values in the spec.csiSnapshotTimeout block of the Backup CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m # ... 4.13.10.5. Velero default item operation timeout defaultItemOperationTimeout defines how long to wait on asynchronous BackupItemActions and RestoreItemActions to complete before timing out. The default value is 1h . Use the defaultItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. To specify the amount of time a particular backup or restore should wait for the Asynchronous actions to complete. In the context of OADP features, this value is used for the Asynchronous actions involved in the Container Storage Interface (CSI) Data Mover feature. When defaultItemOperationTimeout is defined in the Data Protection Application (DPA) using the defaultItemOperationTimeout , it applies to both backup and restore operations. You can use itemOperationTimeout to define only the backup or only the restore of those CRs, as described in the following "Item operation timeout - restore", and "Item operation timeout - backup" sections. Procedure Edit the values in the spec.configuration.velero.defaultItemOperationTimeout block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h # ... 4.13.10.6. Item operation timeout - restore ItemOperationTimeout specifies the time that is used to wait for RestoreItemAction operations. The default value is 1h . Use the restore ItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. For Data Mover uploads and downloads to or from the BackupStorageLocation . 
If the restore action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Restore.spec.itemOperationTimeout block of the Restore CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h # ... 4.13.10.7. Item operation timeout - backup ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations. The default value is 1h . Use the backup ItemOperationTimeout for the following scenarios: Only with Data Mover 1.2.x. For Data Mover uploads and downloads to or from the BackupStorageLocation . If the backup action is not completed when the timeout is reached, it will be marked as failed. If Data Mover operations are failing due to timeout issues, because of large storage volume sizes, then this timeout setting may need to be increased. Procedure Edit the values in the Backup.spec.itemOperationTimeout block of the Backup CR manifest, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h # ... 4.13.11. Backup and Restore CR issues You might encounter these common issues with Backup and Restore custom resources (CRs). 4.13.11.1. Backup CR cannot retrieve volume The Backup CR displays the error message, InvalidVolume.NotFound: The volume 'vol-xxxx' does not exist . Cause The persistent volume (PV) and the snapshot locations are in different regions. Solution Edit the value of the spec.snapshotLocations.velero.config.region key in the DataProtectionApplication manifest so that the snapshot location is in the same region as the PV. Create a new Backup CR. 4.13.11.2. Backup CR status remains in progress The status of a Backup CR remains in the InProgress phase and does not complete. Cause If a backup is interrupted, it cannot be resumed. Solution Retrieve the details of the Backup CR: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ backup describe <backup> Delete the Backup CR: USD oc delete backups.velero.io <backup> -n openshift-adp You do not need to clean up the backup location because a Backup CR in progress has not uploaded files to object storage. Create a new Backup CR. View the Velero backup details USD velero backup describe <backup-name> --details 4.13.11.3. Backup CR status remains in PartiallyFailed The status of a Backup CR without Restic in use remains in the PartiallyFailed phase and does not complete. A snapshot of the affiliated PVC is not created. Cause If the backup is created based on the CSI snapshot class, but the label is missing, CSI snapshot plugin fails to create a snapshot. 
As a result, the Velero pod logs an error similar to the following: + time="2023-02-17T16:33:13Z" level=error msg="Error backing up item" backup=openshift-adp/user1-backup-check5 error="error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label" logSource="/remote-source/velero/app/pkg/backup/backup.go:417" name=busybox-79799557b5-vprq Solution Delete the Backup CR: USD oc delete backups.velero.io <backup> -n openshift-adp If required, clean up the stored data on the BackupStorageLocation to free up space. Apply label velero.io/csi-volumesnapshot-class=true to the VolumeSnapshotClass object: USD oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true Create a new Backup CR. 4.13.12. Restic issues You might encounter these issues when you back up applications with Restic. 4.13.12.1. Restic permission error for NFS data volumes with root_squash enabled The Restic pod log displays the error message: controller=pod-volume-backup error="fork/exec/usr/bin/restic: permission denied" . Cause If your NFS data volumes have root_squash enabled, Restic maps to nfsnobody and does not have permission to create backups. Solution You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the DataProtectionApplication manifest: Create a supplemental group for Restic on the NFS data volume. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the spec.configuration.nodeAgent.supplementalGroups parameter and the group ID to the DataProtectionApplication manifest, as shown in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # ... spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1 # ... 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 4.13.12.2. Restic Backup CR cannot be recreated after bucket is emptied If you create a Restic Backup CR for a namespace, empty the object storage bucket, and then recreate the Backup CR for the same namespace, the recreated Backup CR fails. The velero pod log displays the following error message: stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location? . Cause Velero does not recreate or update the Restic repository from the ResticRepository manifest if the Restic directories are deleted from object storage. See Velero issue 4421 for more information. Solution Remove the related Restic repository from the namespace by running the following command: USD oc delete resticrepository openshift-adp <name_of_the_restic_repository> In the following error log, mysql-persistent is the problematic Restic repository. The name of the repository appears in italics for clarity. 
time="2021-12-29T18:29:14Z" level=info msg="1 errors encountered backup up item" backup=velero/backup65 logSource="pkg/backup/backup.go:431" name=mysql-7d99fc949-qbkds time="2021-12-29T18:29:14Z" level=error msg="Error backing up item" backup=velero/backup65 error="pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location?\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \n: exit status 1" error.file="/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184" error.function="github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes" logSource="pkg/backup/backup.go:435" name=mysql-7d99fc949-qbkds 4.13.13. Using the must-gather tool You can collect logs, metrics, and information about OADP custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can run the must-gather tool with the following data collection options: Full must-gather data collection collects Prometheus metrics, pod logs, and Velero CR information for all namespaces where the OADP Operator is installed. Essential must-gather data collection collects pod logs and Velero CR information for a specific duration of time, for example, one hour or 24 hours. Prometheus metrics and duplicate logs are not included. must-gather data collection with timeout. Data collection can take a long time if there are many failed Backup CRs. You can improve performance by setting a timeout value. Prometheus metrics data dump downloads an archive file containing the metrics data collected by Prometheus. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. You must use Red Hat Enterprise Linux (RHEL) 9.x with OADP 1.3. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: Full must-gather data collection, including Prometheus metrics: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 The data is saved as must-gather/must-gather.tar.gz . You can upload this file to a support case on the Red Hat Customer Portal . Essential must-gather data collection, without Prometheus metrics, for a specific time duration: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 \ -- /usr/bin/gather_<time>_essential 1 1 Specify the time in hours. Allowed values are 1h , 6h , 24h , 72h , or all , for example, gather_1h_essential or gather_all_essential . must-gather data collection with timeout: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 \ -- /usr/bin/gather_with_timeout <timeout> 1 1 Specify a timeout value in seconds. Prometheus metrics data dump: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_metrics_dump This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz . Additional resources Gathering cluster data 4.13.13.1. Using must-gather with insecure TLS connections If a custom CA certificate is used, the must-gather pod fails to grab the output for velero logs/describe . 
To use the must-gather tool with insecure TLS connections, you can pass the gather_without_tls flag to the must-gather command. Procedure Pass the gather_without_tls flag, with value set to true , to the must-gather tool by using the following command: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls <true/false> By default, the flag value is set to false . Set the value to true to allow insecure TLS connections. 4.13.13.2. Combining options when using the must-gather tool Currently, it is not possible to combine must-gather scripts, for example specifying a timeout threshold while permitting insecure TLS connections. In some situations, you can get around this limitation by setting up internal variables on the must-gather command line, such as the following example: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds> In this example, set the skip_tls variable before running the gather_with_timeout script. The result is a combination of gather_with_timeout and gather_without_tls . The only other variables that you can specify this way are the following: logs_since , with a default value of 72h request_timeout , with a default value of 0s If DataProtectionApplication custom resource (CR) is configured with s3Url and insecureSkipTLS: true , the CR does not collect the necessary logs because of a missing CA certificate. To collect those logs, run the must-gather command with the following option: USD oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls true 4.13.14. OADP Monitoring The OpenShift Container Platform provides a monitoring stack that allows users and administrators to effectively monitor and manage their clusters, as well as monitor and analyze the workload performance of user applications and services running on the clusters, including receiving alerts if an event occurs. Additional resources Monitoring stack 4.13.14.1. OADP monitoring setup The OADP Operator leverages an OpenShift User Workload Monitoring provided by the OpenShift Monitoring Stack for retrieving metrics from the Velero service endpoint. The monitoring stack allows creating user-defined Alerting Rules or querying metrics by using the OpenShift Metrics query front end. With enabled User Workload Monitoring, it is possible to configure and use any Prometheus-compatible third-party UI, such as Grafana, to visualize Velero metrics. Monitoring metrics requires enabling monitoring for the user-defined projects and creating a ServiceMonitor resource to scrape those metrics from the already enabled OADP service endpoint that resides in the openshift-adp namespace. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have created a cluster monitoring config map. Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring Add or enable the enableUserWorkload option in the data section's config.yaml field: apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata: # ... 
1 Add this option or set it to true . Wait a short period of time to verify the User Workload Monitoring Setup by checking if the following components are up and running in the openshift-user-workload-monitoring namespace: USD oc get pods -n openshift-user-workload-monitoring Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s Verify the existence of the user-workload-monitoring-config ConfigMap in the openshift-user-workload-monitoring namespace. If it exists, skip the remaining steps in this procedure. USD oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring Example output Error from server (NotFound): configmaps "user-workload-monitoring-config" not found Create a user-workload-monitoring-config ConfigMap object for the User Workload Monitoring, and save it under the 2_configure_user_workload_monitoring.yaml file name: Example ConfigMap object apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | Apply the 2_configure_user_workload_monitoring.yaml file: USD oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created 4.13.14.2. Creating OADP service monitor OADP provides an openshift-adp-velero-metrics-svc service which is created when the DPA is configured. The service monitor used by the user workload monitoring must point to the defined service. Get details about the service by running the following commands: Procedure Ensure the openshift-adp-velero-metrics-svc service exists. It should contain the app.kubernetes.io/name=velero label, which is used as the selector for the ServiceMonitor object. USD oc get svc -n openshift-adp -l app.kubernetes.io/name=velero Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h Create a ServiceMonitor YAML file that matches the existing service label, and save the file as 3_create_oadp_service_monitor.yaml . The service monitor is created in the openshift-adp namespace where the openshift-adp-velero-metrics-svc service resides. Example ServiceMonitor object apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: "velero" Apply the 3_create_oadp_service_monitor.yaml file: USD oc apply -f 3_create_oadp_service_monitor.yaml Example output servicemonitor.monitoring.coreos.com/oadp-service-monitor created Verification Confirm that the new service monitor is in an Up state by using the Administrator perspective of the OpenShift Container Platform web console: Navigate to the Observe Targets page. Ensure the Filter is unselected or that the User source is selected and type openshift-adp in the Text search field. Verify that the Status for the service monitor is Up . Figure 4.1. OADP metrics targets 4.13.14.3. Creating an alerting rule The OpenShift Container Platform monitoring stack allows you to receive Alerts configured by using Alerting Rules. To create an Alerting rule for the OADP project, use one of the Metrics that are scraped with the user workload monitoring.
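Before you define a rule, you can confirm that the Velero metrics are being scraped by running a query in the Observe Metrics page. The following query is a minimal example; it assumes that the openshift-adp-velero-metrics-svc service monitor from the previous section is already in an Up state: velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"} If the query returns a time series, the metric is available and can be used in an Alerting Rule expression such as the one shown in the following procedure.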
Procedure Create a PrometheusRule YAML file with the sample OADPBackupFailing alert and save it as 4_create_oadp_alert_rule.yaml . Sample OADPBackupFailing alert apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"}[2h]) > 0 for: 5m labels: severity: warning In this sample, the Alert displays under the following conditions: There is an increase of new failing backups during the 2 last hours that is greater than 0 and the state persists for at least 5 minutes. If the time of the first increase is less than 5 minutes, the Alert will be in a Pending state, after which it will turn into a Firing state. Apply the 4_create_oadp_alert_rule.yaml file, which creates the PrometheusRule object in the openshift-adp namespace: USD oc apply -f 4_create_oadp_alert_rule.yaml Example output prometheusrule.monitoring.coreos.com/sample-oadp-alert created Verification After the Alert is triggered, you can view it in the following ways: In the Developer perspective, select the Observe menu. In the Administrator perspective under the Observe Alerting menu, select User in the Filter box. Otherwise, by default only the Platform Alerts are displayed. Figure 4.2. OADP backup failing alert Additional resources Managing alerts 4.13.14.4. List of available metrics These are the list of metrics provided by the OADP together with their Types . Metric name Description Type kopia_content_cache_hit_bytes Number of bytes retrieved from the cache Counter kopia_content_cache_hit_count Number of times content was retrieved from the cache Counter kopia_content_cache_malformed Number of times malformed content was read from the cache Counter kopia_content_cache_miss_count Number of times content was not found in the cache and fetched Counter kopia_content_cache_missed_bytes Number of bytes retrieved from the underlying storage Counter kopia_content_cache_miss_error_count Number of times content could not be found in the underlying storage Counter kopia_content_cache_store_error_count Number of times content could not be saved in the cache Counter kopia_content_get_bytes Number of bytes retrieved using GetContent() Counter kopia_content_get_count Number of times GetContent() was called Counter kopia_content_get_error_count Number of times GetContent() was called and the result was an error Counter kopia_content_get_not_found_count Number of times GetContent() was called and the result was not found Counter kopia_content_write_bytes Number of bytes passed to WriteContent() Counter kopia_content_write_count Number of times WriteContent() was called Counter velero_backup_attempt_total Total number of attempted backups Counter velero_backup_deletion_attempt_total Total number of attempted backup deletions Counter velero_backup_deletion_failure_total Total number of failed backup deletions Counter velero_backup_deletion_success_total Total number of successful backup deletions Counter velero_backup_duration_seconds Time taken to complete backup, in seconds Histogram velero_backup_failure_total Total number of failed backups Counter velero_backup_items_errors Total number of errors encountered during backup Gauge velero_backup_items_total Total number of items backed up 
Gauge velero_backup_last_status Last status of the backup. A value of 1 is success, 0. Gauge velero_backup_last_successful_timestamp Last time a backup ran successfully, Unix timestamp in seconds Gauge velero_backup_partial_failure_total Total number of partially failed backups Counter velero_backup_success_total Total number of successful backups Counter velero_backup_tarball_size_bytes Size, in bytes, of a backup Gauge velero_backup_total Current number of existent backups Gauge velero_backup_validation_failure_total Total number of validation failed backups Counter velero_backup_warning_total Total number of warned backups Counter velero_csi_snapshot_attempt_total Total number of CSI attempted volume snapshots Counter velero_csi_snapshot_failure_total Total number of CSI failed volume snapshots Counter velero_csi_snapshot_success_total Total number of CSI successful volume snapshots Counter velero_restore_attempt_total Total number of attempted restores Counter velero_restore_failed_total Total number of failed restores Counter velero_restore_partial_failure_total Total number of partially failed restores Counter velero_restore_success_total Total number of successful restores Counter velero_restore_total Current number of existent restores Gauge velero_restore_validation_failed_total Total number of failed restores failing validations Counter velero_volume_snapshot_attempt_total Total number of attempted volume snapshots Counter velero_volume_snapshot_failure_total Total number of failed volume snapshots Counter velero_volume_snapshot_success_total Total number of successful volume snapshots Counter 4.13.14.5. Viewing metrics using the Observe UI You can view metrics in the OpenShift Container Platform web console from the Administrator or Developer perspective, which must have access to the openshift-adp project. Procedure Navigate to the Observe Metrics page: If you are using the Developer perspective, follow these steps: Select Custom query , or click on the Show PromQL link. Type the query and click Enter . If you are using the Administrator perspective, type the expression in the text field and select Run Queries . Figure 4.3. OADP metrics query 4.14. APIs used with OADP The document provides information about the following APIs that you can use with OADP: Velero API OADP API 4.14.1. Velero API Velero API documentation is maintained by Velero, not by Red Hat. It can be found at Velero API types . 4.14.2. OADP API The following tables provide the structure of the OADP API: Table 4.8. DataProtectionApplicationSpec Property Type Description backupLocations [] BackupLocation Defines the list of configurations to use for BackupStorageLocations . snapshotLocations [] SnapshotLocation Defines the list of configurations to use for VolumeSnapshotLocations . unsupportedOverrides map [ UnsupportedImageKey ] string Can be used to override the deployed dependent images for development. Options are veleroImageFqin , awsPluginImageFqin , openshiftPluginImageFqin , azurePluginImageFqin , gcpPluginImageFqin , csiPluginImageFqin , dataMoverImageFqin , resticRestoreImageFqin , kubevirtPluginImageFqin , and operator-type . podAnnotations map [ string ] string Used to add annotations to pods deployed by Operators. podDnsPolicy DNSPolicy Defines the configuration of the DNS of a pod. podDnsConfig PodDNSConfig Defines the DNS parameters of a pod in addition to those generated from DNSPolicy . 
backupImages * bool Used to specify whether or not you want to deploy a registry for enabling backup and restore of images. configuration * ApplicationConfig Used to define the data protection application's server configuration. features * Features Defines the configuration for the DPA to enable the Technology Preview features. Complete schema definitions for the OADP API . Table 4.9. BackupLocation Property Type Description velero * velero.BackupStorageLocationSpec Location to store volume snapshots, as described in Backup Storage Location . bucket * CloudStorageLocation [Technology Preview] Automates creation of a bucket at some cloud storage providers for use as a backup storage location. Important The bucket parameter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Complete schema definitions for the type BackupLocation . Table 4.10. SnapshotLocation Property Type Description velero * VolumeSnapshotLocationSpec Location to store volume snapshots, as described in Volume Snapshot Location . Complete schema definitions for the type SnapshotLocation . Table 4.11. ApplicationConfig Property Type Description velero * VeleroConfig Defines the configuration for the Velero server. restic * ResticConfig Defines the configuration for the Restic server. Complete schema definitions for the type ApplicationConfig . Table 4.12. VeleroConfig Property Type Description featureFlags [] string Defines the list of features to enable for the Velero instance. defaultPlugins [] string The following types of default Velero plugins can be installed: aws , azure , csi , gcp , kubevirt , and openshift . customPlugins [] CustomPlugin Used for installation of custom Velero plugins. Default and custom plugins are described in OADP plugins restoreResourcesVersionPriority string Represents a config map that is created if defined for use in conjunction with the EnableAPIGroupVersions feature flag. Defining this field automatically adds EnableAPIGroupVersions to the Velero server feature flag. noDefaultBackupLocation bool To install Velero without a default backup storage location, you must set the noDefaultBackupLocation flag in order to confirm installation. podConfig * PodConfig Defines the configuration of the Velero pod. logLevel string Velero server's log level (use debug for the most granular logging, leave unset for Velero default). Valid options are trace , debug , info , warning , error , fatal , and panic . Complete schema definitions for the type VeleroConfig . Table 4.13. CustomPlugin Property Type Description name string Name of custom plugin. image string Image of custom plugin. Complete schema definitions for the type CustomPlugin . Table 4.14. ResticConfig Property Type Description enable * bool If set to true , enables backup and restore using Restic. If set to false , snapshots are needed. supplementalGroups [] int64 Defines the Linux groups to be applied to the Restic pod. timeout string A user-supplied duration string that defines the Restic timeout. Default value is 1hr (1 hour). 
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h or 2h45m . Valid time units are ns , us (or µs ), ms , s , m , and h . podConfig * PodConfig Defines the configuration of the Restic pod. Complete schema definitions for the type ResticConfig . Table 4.15. PodConfig Property Type Description nodeSelector map [ string ] string Defines the nodeSelector to be supplied to a Velero podSpec or a Restic podSpec . For more details, see Configuring node agents and node labels . tolerations [] Toleration Defines the list of tolerations to be applied to a Velero deployment or a Restic daemonset . resourceAllocations ResourceRequirements Set specific resource limits and requests for a Velero pod or a Restic pod as described in Setting Velero CPU and memory resource allocations . labels map [ string ] string Labels to add to pods. 4.14.2.1. Configuring node agents and node labels The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node. The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label: USD oc label node/<node_name> node-role.kubernetes.io/nodeAgent="" Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector , which you used for labeling nodes. For example: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: "" The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""' , are on the node: configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: "" node-role.kubernetes.io/worker: "" Complete schema definitions for the type PodConfig . Table 4.16. Features Property Type Description dataMover * DataMover Defines the configuration of the Data Mover. Complete schema definitions for the type Features . Table 4.17. DataMover Property Type Description enable bool If set to true , deploys the volume snapshot mover controller and a modified CSI Data Mover plugin. If set to false , these are not deployed. credentialName string User-supplied Restic Secret name for Data Mover. timeout string A user-supplied duration string for VolumeSnapshotBackup and VolumeSnapshotRestore to complete. Default is 10m (10 minutes). A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as 300ms , -1.5h or 2h45m . Valid time units are ns , us (or µs ), ms , s , m , and h . The OADP API is more fully detailed in OADP Operator . 4.15. Advanced OADP features and functionalities This document provides information about advanced features and functionalities of OpenShift API for Data Protection (OADP). 4.15.1. Working with different Kubernetes API versions on the same cluster 4.15.1.1. Listing the Kubernetes API group versions on a cluster A source cluster might offer multiple versions of an API, where one of these versions is the preferred API version. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups.
If you use Velero to back up and restore such a source cluster, Velero backs up only the version of that resource that uses the preferred version of its Kubernetes API. To return to the above example, if example.com/v1 is the preferred API, then Velero only backs up the version of a resource that uses example.com/v1 . Moreover, the target cluster needs to have example.com/v1 registered in its set of available API resources in order for Velero to restore the resource on the target cluster. Therefore, you need to generate a list of the Kubernetes API group versions on your target cluster to be sure the preferred API version is registered in its set of available API resources. Procedure Enter the following command: USD oc api-resources 4.15.1.2. About Enable API Group Versions By default, Velero only backs up resources that use the preferred version of the Kubernetes API. However, Velero also includes a feature, Enable API Group Versions , that overcomes this limitation. When enabled on the source cluster, this feature causes Velero to back up all Kubernetes API group versions that are supported on the cluster, not only the preferred one. After the versions are stored in the backup .tar file, they are available to be restored on the destination cluster. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups, with example.com/v1 being the preferred API. Without the Enable API Group Versions feature enabled, Velero backs up only the preferred API group version for Example , which is example.com/v1 . With the feature enabled, Velero also backs up example.com/v1beta2 . When the Enable API Group Versions feature is enabled on the destination cluster, Velero selects the version to restore on the basis of the order of priority of API group versions. Note Enable API Group Versions is still in beta. Velero uses the following algorithm to assign priorities to API versions, with 1 as the top priority: Preferred version of the destination cluster Preferred version of the source cluster Common non-preferred supported version with the highest Kubernetes version priority Additional resources Enable API Group Versions Feature 4.15.1.3. Using Enable API Group Versions You can use Velero's Enable API Group Versions feature to back up all Kubernetes API group versions that are supported on a cluster, not only the preferred one. Note Enable API Group Versions is still in beta. Procedure Configure the EnableAPIGroupVersions feature flag: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: featureFlags: - EnableAPIGroupVersions Additional resources Enable API Group Versions Feature 4.15.2. Backing up data from one cluster and restoring it to another cluster 4.15.2.1. About backing up data from one cluster and restoring it on another cluster OpenShift API for Data Protection (OADP) is designed to back up and restore application data in the same OpenShift Container Platform cluster. Migration Toolkit for Containers (MTC) is designed to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster. You can use OADP to back up application data from one OpenShift Container Platform cluster and restore it on another cluster. However, doing so is more complicated than using MTC or using OADP to back up and restore on the same cluster.
To successfully use OADP to back up data from one cluster and restore it to another cluster, you must take into account the following factors, in addition to the prerequisites and procedures that apply to using OADP to back up and restore data on the same cluster: Operators Use of Velero UID and GID ranges 4.15.2.1.1. Operators You must exclude Operators from the backup of an application for backup and restore to succeed. 4.15.2.1.2. Use of Velero Velero, which OADP is built upon, does not natively support migrating persistent volume snapshots across cloud providers. To migrate volume snapshot data between cloud platforms, you must either enable the Velero Restic file system backup option, which backs up volume contents at the file system level, or use the OADP Data Mover for CSI snapshots. Note In OADP 1.1 and earlier, the Velero Restic file system backup option is called restic . In OADP 1.2 and later, the Velero Restic file system backup option is called file-system-backup . You must also use Velero's File System Backup to migrate data between AWS regions or between Microsoft Azure regions. Velero does not support restoring data to a cluster with an earlier Kubernetes version than the source cluster. It is theoretically possible to migrate workloads to a destination with a later Kubernetes version than the source, but you must consider the compatibility of API groups between clusters for each custom resource. If a Kubernetes version upgrade breaks the compatibility of core or native API groups, you must first update the impacted custom resources. 4.15.2.2. About determining which pod volumes to back up Before you start a backup operation by using File System Backup (FSB), you must specify which pods contain a volume that you want to back up. Velero refers to this process as "discovering" the appropriate pod volumes. Velero supports two approaches for determining pod volumes. Use the opt-in or the opt-out approach to allow Velero to decide between an FSB, a volume snapshot, or a Data Mover backup. Opt-in approach : With the opt-in approach, volumes are backed up using snapshot or Data Mover by default. FSB is used on specific volumes that are opted-in by annotations. Opt-out approach : With the opt-out approach, volumes are backed up using FSB by default. Snapshots or Data Mover is used on specific volumes that are opted-out by annotations. 4.15.2.2.1. Limitations FSB does not support backing up and restoring hostpath volumes. However, FSB does support backing up and restoring local volumes. Velero uses a static, common encryption key for all backup repositories it creates. This static key means that anyone who can access your backup storage can also decrypt your backup data . It is essential that you limit access to backup storage. For PVCs, every incremental backup chain is maintained across pod reschedules. For pod volumes that are not PVCs, such as emptyDir volumes, if a pod is deleted or recreated, for example, by a ReplicaSet or a deployment, the backup of those volumes will be a full backup and not an incremental backup. It is assumed that the lifecycle of a pod volume is defined by its pod. Even though backup data can be kept incrementally, backing up large files, such as a database, can take a long time. This is because FSB uses deduplication to find the difference that needs to be backed up. FSB reads and writes data from volumes by accessing the file system of the node on which the pod is running. 
For this reason, FSB can only back up volumes that are mounted from a pod and not directly from a PVC. Some Velero users have overcome this limitation by running a staging pod, such as a BusyBox or Alpine container with an infinite sleep, to mount these PVC and PV pairs before performing a Velero backup. FSB expects volumes to be mounted under <hostPath>/<pod UID> , with <hostPath> being configurable. Some Kubernetes systems, for example, vCluster, do not mount volumes under the <pod UID> subdirectory, and FSB does not work with them as expected. 4.15.2.2.2. Backing up pod volumes by using the opt-in method You can use the opt-in method to specify which volumes need to be backed up by File System Backup (FSB). You can do this by using the backup.velero.io/backup-volumes annotation. Procedure On each pod that contains one or more volumes that you want to back up, enter the following command: USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n> where: <your_volume_name_x> specifies the name of the xth volume in the pod specification. 4.15.2.2.3. Backing up pod volumes by using the opt-out method When using the opt-out approach, all pod volumes are backed up by using File System Backup (FSB), although there are some exceptions: Volumes that mount the default service account token, secrets, and configuration maps. hostPath volumes You can use the opt-out method to specify which volumes not to back up. You can do this by using the backup.velero.io/backup-volumes-excludes annotation. Procedure On each pod that contains one or more volumes that you do not want to back up, run the following command: USD oc -n <your_pod_namespace> annotate pod/<your_pod_name> \ backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \ <your_volume_name_2>,...,<your_volume_name_n> where: <your_volume_name_x> specifies the name of the xth volume in the pod specification. Note You can enable this behavior for all Velero backups by running the velero install command with the --default-volumes-to-fs-backup flag. 4.15.2.3. UID and GID ranges If you back up data from one cluster and restore it to another cluster, problems might occur with UID (User ID) and GID (Group ID) ranges. The following section explains these potential issues and mitigations: Summary of the issues The namespace UID and GID ranges might change depending on the destination cluster. OADP does not back up and restore OpenShift UID range metadata. If the backed up application requires a specific UID, ensure the range is available upon restore. For more information about OpenShift's UID and GID ranges, see A Guide to OpenShift and UIDs . Detailed description of the issues When you create a namespace in OpenShift Container Platform by using the shell command oc create namespace , OpenShift Container Platform assigns the namespace a unique User ID (UID) range from its available pool of UIDs, a Supplemental Group (GID) range, and unique SELinux MCS labels. This information is stored in the metadata.annotations field of the cluster. This information is part of the Security Context Constraints (SCC) annotations, which comprise the following components: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range When you use OADP to restore the namespace, it automatically uses the information in metadata.annotations without resetting it for the destination cluster.
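You can inspect these annotations on any namespace to see the values that a backup captures. This is an optional check and not part of the documented procedure; replace <namespace_name> with the namespace of your workload: USD oc get namespace <namespace_name> -o jsonpath='{.metadata.annotations}' The openshift.io/sa.scc.uid-range value in the output shows the range from which pod UIDs are assigned in that namespace.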
As a result, the workload might not have access to the backed up data if any of the following is true: There is an existing namespace with other SCC annotations, for example, on another cluster. In this case, OADP uses the existing namespace during the backup instead of the namespace you want to restore. A label selector was used during the backup, but the namespace in which the workloads are executed does not have the label. In this case, OADP does not back up the namespace, but creates a new namespace during the restore that does not contain the annotations of the backed up namespace. This results in a new UID range being assigned to the namespace. This can be an issue for customer workloads if OpenShift Container Platform assigns a securityContext UID to a pod based on namespace annotations that have changed since the persistent volume data was backed up. The UID of the container no longer matches the UID of the file owner. An error occurs because OpenShift Container Platform has not changed the UID range of the destination cluster to match the backup cluster data. As a result, the backup cluster has a different UID than the destination cluster, which means that the application cannot read or write data on the destination cluster. Mitigations You can use one or more of the following mitigations to resolve the UID and GID range issues: Simple mitigations: If you use a label selector in the Backup CR to filter the objects to include in the backup, be sure to add this label selector to the namespace that contains the workload. Remove any pre-existing version of a namespace on the destination cluster before attempting to restore a namespace with the same name. Advanced mitigations: Fix UID ranges after migration by Resolving overlapping UID ranges in OpenShift namespaces after migration . Step 1 is optional. For an in-depth discussion of UID and GID ranges in OpenShift Container Platform with an emphasis on overcoming issues in backing up data on one cluster and restoring it on another, see A Guide to OpenShift and UIDs . 4.15.2.4. Backing up data from one cluster and restoring it to another cluster In general, you back up data from one OpenShift Container Platform cluster and restore it on another OpenShift Container Platform cluster in the same way that you back up and restore data to the same cluster. However, there are some additional prerequisites and differences in the procedure when backing up data from one OpenShift Container Platform cluster and restoring it on another. Prerequisites All relevant prerequisites for backing up and restoring on your platform (for example, AWS, Microsoft Azure, GCP, and so on), especially the prerequisites for the Data Protection Application (DPA), are described in the relevant sections of this guide. Procedure Make the following additions to the procedures given for your platform: Ensure that the backup storage location (BSL) and volume snapshot location have the same names and paths to restore resources to another cluster. Share the same object storage location credentials across the clusters. For best results, use OADP to create the namespace on the destination cluster. If you use the Velero file-system-backup option, enable the --default-volumes-to-fs-backup flag for use during backup by running the following command: USD velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options> Note In OADP 1.2 and later, the Velero Restic option is called file-system-backup . 4.15.3. OADP storage class mapping 4.15.3.1.
Storage class mapping Storage class mapping allows you to define rules or policies specifying which storage class should be applied to different types of data. This feature automates the process of determining storage classes based on access frequency, data importance, and cost considerations. It optimizes storage efficiency and cost-effectiveness by ensuring that data is stored in the most suitable storage class for its characteristics and usage patterns. You can use the change-storage-class-config field to change the storage class of your data objects, which lets you optimize costs and performance by moving data between different storage tiers, such as from standard to archival storage, based on your needs and access patterns. 4.15.3.1.1. Storage class mapping with Migration Toolkit for Containers You can use the Migration Toolkit for Containers (MTC) to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster and for storage class mapping and conversion. You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the MTC web console. 4.15.3.1.2. Mapping storage classes with OADP You can use OpenShift API for Data Protection (OADP) with the Velero plugin v1.1.0 and later to change the storage class of a persistent volume (PV) during restores, by configuring a storage class mapping in the config map in the Velero namespace. To deploy ConfigMap with OADP, use the change-storage-class-config field. You must change the storage class mapping based on your cloud provider. Procedure Change the storage class mapping by running the following command: USD cat change-storageclass.yaml Create a config map in the Velero namespace as shown in the following example: Example apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: "" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi Save your storage class mapping preferences by running the following command: USD oc create -f change-storage-class-config 4.15.4. Additional resources Working with different Kubernetes API versions on the same cluster . Using Data Mover for CSI snapshots . Backing up applications with File System Backup: Kopia or Restic . Migration converting storage classes . | [
"spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"spec: configuration: nodeAgent: enable: true uploaderType: restic",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"spec: configuration: features: dataMover: enable: true credentialName: dm-credentials velero: defaultPlugins: - vsm - csi - openshift",
"spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - csi - openshift",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: example-backup namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - mysql-persistent storageLocation: dpa-sample-1 ttl: 720h0m0s",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name> 1",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"oc get route s3 -n openshift-storage",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3",
"oc apply -f <restore_cr_filename>",
"oc describe restores.velero.io <restore_name> -n openshift-adp",
"oc project test-restore-application",
"oc get pvc,svc,deployment,secret,configmap",
"NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name>",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\\.crt}' | base64 -w0; echo",
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0 ....gpwOHMwaG9CRmk5a3....FLS0tLS0K",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"false\" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"resources: mds: limits: cpu: \"3\" memory: 128Gi requests: cpu: \"3\" memory: 8Gi",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: \"true\" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: \"50..c-4da1-419f-a16e-ei...49f\" 12 customerKeyEncryptionFile: \"/credentials/customer-key\" 13 signatureVersion: \"1\" 14 profile: \"default\" 15 insecureSkipTLSVerify: \"true\" 16 enableSharedConfig: \"true\" 17 tagging: \"\" 18 checksumAlgorithm: \"CRC32\" 19",
"snapshotLocations: - velero: config: profile: default region: <region> provider: aws",
"dd if=/dev/urandom bs=1 count=32 > sse.key",
"cat sse.key | base64 > sse_encoded.key",
"ln -s sse_encoded.key customer-key",
"oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key",
"apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret",
"spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default",
"echo \"encrypt me please\" > test.txt",
"aws s3api put-object --bucket <bucket> --key test.txt --body test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256",
"s3cmd get s3://<bucket>/test.txt test.txt",
"aws s3api get-object --bucket <bucket> --key test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 downloaded.txt",
"cat downloaded.txt",
"encrypt me please",
"aws s3api get-object --bucket <bucket> --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 --debug velero_download.tar.gz",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: \"default\" s3ForcePathStyle: \"true\" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: \"default\" credential: key: cloud name: cloud-credentials 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: \"\" 1 insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"ibmcloud plugin install cos -f",
"BUCKET=<bucket_name>",
"REGION=<bucket_region> 1",
"ibmcloud resource group-create <resource_group_name>",
"ibmcloud target -g <resource_group_name>",
"ibmcloud target",
"API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default",
"RESOURCE_GROUP=<resource_group> 1",
"ibmcloud resource service-instance-create <service_instance_name> \\ 1 <service_name> \\ 2 <service_plan> \\ 3 <region_name> 4",
"ibmcloud resource service-instance-create test-service-instance cloud-object-storage \\ 1 standard global -d premium-global-deployment 2",
"SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')",
"ibmcloud cos bucket-create \\// --bucket USDBUCKET \\// --ibm-service-instance-id USDSERVICE_INSTANCE_ID \\// --region USDREGION",
"ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\\\"HMAC\\\":true}",
"cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" provider: azure",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"mkdir -p oadp-credrequest",
"echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml",
"ccoctl gcp create-service-accounts --name=<name> --project=<gcp_project_id> --credentials-requests-dir=oadp-credrequest --workload-identity-pool=<pool_id> --workload-identity-provider=<provider_id>",
"oc create namespace <OPERATOR_INSTALL_NS>",
"oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: profile: \"default\" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: \"default\" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3",
"oc apply -f <backup_cr_file_name> 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true",
"oc apply -f <restore_cr_file_name> 1",
"oc label vm <vm_name> app=<vm_name> -n openshift-adp",
"apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2",
"oc apply -f <restore_cr_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1",
"oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: \"true\" profile: noobaa region: <region_name> 3 s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"oc get bsl",
"NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s 5 labelSelector: 6 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 7 - matchLabels: app: <label_1> app: <label_2> app: <label_3>",
"oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11",
"oc get backupStorageLocations -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s 5 EOF",
"schedule: \"*/10 * * * *\"",
"oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'",
"apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1",
"oc apply -f <deletebackuprequest_cr_filename>",
"velero backup delete <backup_name> -n openshift-adp 1",
"pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m",
"not due for full maintenance cycle until 2024-00-00 18:29:4",
"oc get backuprepositories.velero.io -n openshift-adp",
"oc delete backuprepository <backup_repository_name> -n openshift-adp 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3",
"oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'",
"oc get all -n <namespace> 1",
"bash dc-restic-post-restore.sh -> dc-post-restore.sh",
"#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo \"restore: USD1\" label=USD(label_name USD1) echo \"label: USDlabel\" echo Deleting disconnected restore pods delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\" export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH} echo \"Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}\" --output text) 1",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUploads\", \"s3:ListMultipartUploadParts\", \"s3:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name \"RosaOadpVer1\" --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp --output text) fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\":2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"export CLUSTER_NAME= <AWS_cluster_name> 1",
"export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{\"\\n\"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\"",
"export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH}",
"echo \"Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"export POLICY_NAME=\"OadpVer1\" 1",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}\" --output text)",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --output text) 1 fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI",
"kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s 3 volumeSnapshotLocations: - dpa-sample-1",
"Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: no space left on device",
"oc create -f backup.yaml",
"oc get datauploads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal",
"oc get datauploads <dataupload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: \"\" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: \"2023-11-02T16:57:02Z\" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: \"2023-11-02T16:56:22Z\"",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup>",
"oc create -f restore.yaml",
"oc get datadownloads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal",
"oc get datadownloads <datadownload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: \"\" pvc: mysql status: completionTimestamp: \"2023-11-02T17:01:24Z\" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: \"2023-11-02T17:00:52Z\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<aws_s3_access_key>\" \\ 4 --secret-access-key=\"<aws_s3_secret_access_key>\" \\ 5",
"kopia repository status",
"Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> Storage type: s3 Storage capacity: unbounded Storage config: { \"bucket\": <bucket_name>, \"prefix\": \"velero/kopia/<application_namespace>/\", \"endpoint\": \"s3.amazonaws.com\", \"accessKeyID\": <access_key>, \"secretAccessKey\": \"****************************************\", \"sessionToken\": \"\" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3",
"apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: [\"sleep\"] args: [\"infinity\"]",
"oc apply -f <pod_config_file_name> 1",
"oc describe pod/oadp-mustgather-pod | grep scc",
"openshift.io/scc: anyuid",
"oc -n openshift-adp rsh pod/oadp-mustgather-pod",
"sh-5.1# kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<access_key>\" \\ 4 --secret-access-key=\"<secret_access_key>\" \\ 5 --endpoint=<bucket_endpoint> \\ 6",
"sh-5.1# kopia benchmark hashing",
"Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256",
"sh-5.1# kopia benchmark encryption",
"Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256",
"sh-5.1# kopia benchmark splitter",
"splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"oc describe <velero_cr> <cr_name>",
"oc logs pod/<velero>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi",
"requests: cpu: 500m memory: 128Mi",
"Velero: pod volume restore failed: data path restore failed: Failed to run kopia restore: Failed to copy snapshot data to the target: restore error: copy file: error creating file: open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: no such file or directory",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: \"USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}\" 1 onDelete: delete",
"velero restore <restore_name> --from-backup=<backup_name> --include-resources service.serving.knavtive.dev",
"oc get mutatingwebhookconfigurations",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"oc get backupstoragelocations.velero.io -A",
"velero backup-location get -n <OADP_Operator_namespace>",
"oc get backupstoragelocations.velero.io -n <namespace> -o yaml",
"apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: \"2023-11-03T19:49:04Z\" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: \"24273698\" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: \"true\" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: \"2023-11-10T22:06:46Z\" message: \"BackupStorageLocation \\\"example-dpa-1\\\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54\" phase: Unavailable kind: List metadata: resourceVersion: \"\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>",
"oc delete backups.velero.io <backup> -n openshift-adp",
"velero backup describe <backup-name> --details",
"time=\"2023-02-17T16:33:13Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/user1-backup-check5 error=\"error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label\" logSource=\"/remote-source/velero/app/pkg/backup/backup.go:417\" name=busybox-79799557b5-vprq",
"oc delete backups.velero.io <backup> -n openshift-adp",
"oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1",
"oc delete resticrepository openshift-adp <name_of_the_restic_repository>",
"time=\"2021-12-29T18:29:14Z\" level=info msg=\"1 errors encountered backup up item\" backup=velero/backup65 logSource=\"pkg/backup/backup.go:431\" name=mysql-7d99fc949-qbkds time=\"2021-12-29T18:29:14Z\" level=error msg=\"Error backing up item\" backup=velero/backup65 error=\"pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\\nIs there a repository at the following location?\\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \\n: exit status 1\" error.file=\"/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184\" error.function=\"github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes\" logSource=\"pkg/backup/backup.go:435\" name=mysql-7d99fc949-qbkds",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls true",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata:",
"oc get pods -n openshift-user-workload-monitoring",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s",
"oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring",
"Error from server (NotFound): configmaps \"user-workload-monitoring-config\" not found",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |",
"oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created",
"oc get svc -n openshift-adp -l app.kubernetes.io/name=velero",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: \"velero\"",
"oc apply -f 3_create_oadp_service_monitor.yaml",
"servicemonitor.monitoring.coreos.com/oadp-service-monitor created",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job=\"openshift-adp-velero-metrics-svc\"}[2h]) > 0 for: 5m labels: severity: warning",
"oc apply -f 4_create_oadp_alert_rule.yaml",
"prometheusrule.monitoring.coreos.com/sample-oadp-alert created",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"oc api-resources",
"apiVersion: oadp.openshift.io/vialpha1 kind: DataProtectionApplication spec: configuration: velero: featureFlags: - EnableAPIGroupVersions",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>",
"cat change-storageclass.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: \"\" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi",
"oc create -f change-storage-class-config"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/backup_and_restore/oadp-application-backup-and-restore |
Recording sessions | Recording sessions Red Hat Enterprise Linux 8 Using the Session Recording solution in Red Hat Enterprise Linux 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/recording_sessions/index |
Chapter 9. Image configuration resources | Chapter 9. Image configuration resources Use the following procedure to configure image registries. 9.1. Image controller configuration parameters The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . Its spec offers the following configuration parameters. Note Parameters such as DisableScheduledImport , MaxImagesBulkImportedPerRepository , MaxScheduledImportsPerMinute , ScheduledImageImportMinimumIntervalSeconds , InternalRegistryHostname are not configurable. Parameter Description allowedRegistriesForImport Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. domainName : Specifies a domain name for the registry. If the registry uses a non-standard 80 or 443 port, the port should be included in the domain name as well. insecure : Insecure indicates whether the registry is secure or insecure. By default, if not otherwise specified, the registry is assumed to be secure. additionalTrustedCA A reference to a config map containing additional CAs that should be trusted during image stream import , pod image pull , openshift-image-registry pullthrough , and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust. externalRegistryHostnames Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The value must be in hostname[:port] format. registrySources Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. insecureRegistries : Registries which do not have a valid TLS certificate or only support HTTP connections. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . blockedRegistries : Registries for which image pull and push actions are denied. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are allowed. allowedRegistries : Registries for which image pull and push actions are allowed. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are blocked. 
containerRuntimeSearchRegistries : Registries for which image pull and push actions are allowed using image short names. All other registries are blocked. Either blockedRegistries or allowedRegistries can be set, but not both. Warning When the allowedRegistries parameter is defined, all registries, including registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster. Parameter Description internalRegistryHostname Set by the Image Registry Operator, which controls the internalRegistryHostname . It sets the hostname for the default OpenShift image registry. The value must be in hostname[:port] format. For backward compatibility, you can still use the OPENSHIFT_DEFAULT_REGISTRY environment variable, but this setting overrides the environment variable. externalRegistryHostnames Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. 9.2. Configuring image registry settings You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). When changes to the registry are applied to the image.config.openshift.io/cluster CR, the Machine Config Operator (MCO) performs the following sequential actions: Cordons the node Applies changes by restarting CRI-O Uncordons the node Note The MCO does not restart nodes when it detects changes. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Image : Holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . 2 allowedRegistriesForImport : Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. 
3 additionalTrustedCA : A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. 4 registrySources : Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or registries that allow registries that use image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed. The registrySources parameter does not contain configuration for the internal cluster registry. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list. When using the allowedRegistries , blockedRegistries , or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . Insecure external registries should be avoided to reduce possible security risks. To check that the changes are applied, list your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.25.4+77bec7a ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.25.4+77bec7a ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.25.4+77bec7a ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.25.4+77bec7a ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.25.4+77bec7a ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.25.4+77bec7a 9.2.1. Adding specific registries You can add a list of registries, and optionally an individual repository within a registry, that are permitted for image pull and push actions by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the allowedRegistries parameter, the container runtime searches only those registries. Registries not in the list are blocked. Warning When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. 
If you use the parameter, to prevent pod failure, add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an allowed list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, to use for image pull and push actions. All other registries are blocked. Note Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, the allowed registries list is used to update the image signature policy in the /host/etc/containers/policy.json file on each node. To check that the registries have been added to the policy file, use the following command on a node: USD cat /host/etc/containers/policy.json The following policy indicates that only images from the example.com, quay.io, and registry.redhat.io registries are permitted for image pulls and pushes: Example 9.1. Example image signature policy file { "default":[ { "type":"reject" } ], "transports":{ "atomic":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker-daemon":{ "":[ { "type":"insecureAcceptAnything" } ] } } } Note If your cluster uses the registrySources.insecureRegistries parameter, ensure that any insecure registries are included in the allowed list. For example: spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000 9.2.2. 
Blocking specific registries You can block any registry, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the blockedRegistries parameter, the container runtime does not search those registries. All other registries are allowed. Warning To prevent pod failure, do not add the registry.redhat.io and quay.io registries to the blockedRegistries list, as they are required by payload images within your environment. Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with a blocked list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, that should not be used for image pull and push actions. All other registries are allowed. Note Either the blockedRegistries registry or the allowedRegistries registry can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, changes to the blocked registries appear in the /etc/containers/registries.conf file on each node. To check that the registries have been added to the policy file, use the following command on a node: USD cat /host/etc/containers/registries.conf The following example indicates that images from the untrusted.com registry are prevented for image pulls and pushes: Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "untrusted.com" blocked = true 9.2.3. Allowing insecure registries You can add insecure registries, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. Registries that do not use valid SSL certificates or do not require HTTPS connections are considered insecure. Warning Insecure external registries should be avoided to reduce possible security risks. 
Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an insecure registries list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify an insecure registry. You can specify a repository in that registry. 3 Ensure that any insecure registries are included in the allowedRegistries list. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries, then drains and uncordons the nodes when it detects changes. After the nodes return to the Ready state, changes to the insecure and blocked registries appear in the /etc/containers/registries.conf file on each node. To check that the registries have been added to the policy file, use the following command on a node: USD cat /host/etc/containers/registries.conf The following example indicates that images from the insecure.com registry is insecure and is allowed for image pulls and pushes. Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "insecure.com" insecure = true 9.2.4. Adding registries that allow image short names You can add registries to search for an image short name by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. An image short name enables you to search for images without including the fully qualified domain name in the pull spec. For example, you could use rhel7/etcd instead of registry.access.redhat.com/rhe7/etcd . You might use short names in situations where using the full path is not practical. For example, if your cluster references multiple internal registries whose DNS changes frequently, you would need to update the fully qualified domain names in your pull specs with each change. In this case, using an image short name might be beneficial. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. 
If you created a list of registries under the containerRuntimeSearchRegistries parameter, when pulling an image with a short name, the container runtime searches those registries. Warning Using image short names with public registries is strongly discouraged because the image might not deploy if the public registry requires authentication. Use fully-qualified image names with public registries. Red Hat internal or private registries typically support the use of image short names. If you list public registries under the containerRuntimeSearchRegistries parameter, you expose your credentials to all the registries on the list and you risk network and registry attacks. You cannot list multiple public registries under the containerRuntimeSearchRegistries parameter if each public registry requires different credentials and a cluster does not list the public registry in the global pull secret. For a public registry that requires authentication, you can use an image short name only if the registry has its credentials stored in the global pull secret. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, if the containerRuntimeSearchRegistries parameter is added, the MCO creates a file in the /etc/containers/registries.conf.d directory on each node with the listed registries. The file overrides the default list of unqualified search registries in the /host/etc/containers/registries.conf file. There is no way to fall back to the default list of unqualified search registries. The containerRuntimeSearchRegistries parameter works only with the Podman and CRI-O container engines. The registries in the list can be used only in pod specs, not in builds and image streams. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 ... status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Specify registries to use with image short names. You should use image short names with only internal or private registries to reduce possible security risks. 2 Ensure that any registries listed under containerRuntimeSearchRegistries are included in the allowedRegistries list. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use this parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. 
For disconnected clusters, mirror registries should also be added. To check that the registries have been added, when a node returns to the Ready state, use the following command on the node: USD cat /host/etc/containers/registries.conf.d/01-image-searchRegistries.conf Example output unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io'] 9.2.5. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 9.2.6. Configuring image registry repository mirroring Setting up container registry repository mirroring enables you to do the following: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. The attributes of repository mirroring in OpenShift Container Platform include: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment. 
After OpenShift Container Platform installation: Even if you don't configure mirroring during OpenShift Container Platform installation, you can do so later using the ImageContentSourcePolicy object. The following procedure provides a post-installation mirror configuration, where you create an ImageContentSourcePolicy object that identifies: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. Note You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy object. You cannot add a pull secret to a project. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source directory to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy \ docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi8/ubi-minimal image from registry.access.redhat.com . After you create the registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Log in to your OpenShift Container Platform cluster. Create an ImageContentSourcePolicy file (for example, registryrepomirror.yaml ), replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8 1 Indicates the name of the image registry and repository. 2 Indicates multiple mirror repositories for each target repository. If one mirror is down, the target repository can use another mirror. 3 Indicates the registry and repository containing the content that is mirrored. 4 You can configure a namespace inside a registry to use any image in that namespace. If you use a registry domain as a source, the ImageContentSourcePolicy resource is applied to all repositories from the registry. 5 If you configure the registry name, the ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry. 6 Pulls the image mirror.example.net/image@sha256:... . 
7 Pulls the image myimage in the source registry namespace from the mirror mirror.example.net/myimage@sha256:... . 8 Pulls the image registry.example.com/example/myimage from the mirror registry mirror.example.net/registry-example-com/example/myimage@sha256:... . The ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry mirror.example.net/registry-example-com . Create the new ImageContentSourcePolicy object: USD oc create -f registryrepomirror.yaml After the ImageContentSourcePolicy object is created, the new settings are deployed to each node and the cluster starts using the mirrored repository for requests to the source repository. To check that the mirrored configuration settings, are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0 The Imagecontentsourcepolicy resource does not restart the nodes. Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi8/ubi-minimal" mirror-by-digest-only = true [[registry.mirror]] location = "example.io/example/ubi-minimal" [[registry.mirror]] location = "example.com/example/ubi-minimal" [[registry]] prefix = "" location = "registry.example.com" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net/registry-example-com" [[registry]] prefix = "" location = "registry.example.com/example" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net" [[registry]] prefix = "" location = "registry.example.com/example/myimage" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net/image" [[registry]] prefix = "" location = "registry.redhat.io" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.com" [[registry]] prefix = "" location = "registry.redhat.io/openshift4" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.com/redhat" Pull an image digest to the node from the source and check if it is resolved by the mirror. ImageContentSourcePolicy objects support image digests only, not image tags. sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. 
It is now version 2 and in TOML format. Additional resources For more information about global pull secrets, see Updating the global cluster pull secret . | [
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.25.4+77bec7a ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.25.4+77bec7a ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.25.4+77bec7a ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.25.4+77bec7a ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.25.4+77bec7a ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.25.4+77bec7a",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/policy.json",
"{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }",
"spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/registries.conf.d/01-image-searchRegistries.conf",
"unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/images/image-configuration |
Chapter 29. Configuring the cluster-wide proxy | Chapter 29. Configuring the cluster-wide proxy Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure OpenShift Container Platform to use a proxy by modifying the Proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. After you enable a cluster-wide egress proxy for your cluster on a supported platform, Red Hat Enterprise Linux CoreOS (RHCOS) populates the status.noProxy parameter with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your install-config.yaml file that exists on the supported platform. Note As a postinstallation task, you can change the networking.clusterNetwork[].cidr value, but not the networking.machineNetwork[].cidr and the networking.serviceNetwork[] values. For more information, see "Configuring the cluster network range". For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the status.noProxy parameter is also populated with the instance metadata endpoint, 169.254.169.254 . Example of values added to the status: segment of a Proxy object by RHCOS apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster # ... networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 network type: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 # ... status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4 # ... 1 Specify IP address blocks from which pod IP addresses are allocated. The default value is 10.128.0.0/14 with a host prefix of /23 . 2 Specify the IP address blocks for machines. The default value is 10.0.0.0/16 . 3 Specify IP address block for services. The default value is 172.30.0.0/16 . 4 You can find the URL of the internal API server by running the oc get infrastructures.config.openshift.io cluster -o jsonpath='{.status.etcdDiscoveryDomain}' command. Important If your installation type does not include setting the networking.machineNetwork[].cidr field, you must include the machine IP addresses manually in the .status.noProxy field to make sure that the traffic between nodes can bypass the proxy. 29.1. Prerequisites Review the sites that your cluster requires access to and determine whether any of them must bypass the proxy. By default, all cluster system egress traffic is proxied, including calls to the cloud provider API for the cloud that hosts your cluster. The system-wide proxy affects system components only, not user workloads. If necessary, add sites to the spec.noProxy parameter of the Proxy object to bypass the proxy. 29.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Warning Enabling the cluster-wide proxy causes the Machine Config Operator (MCO) to trigger node reboot. 
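Note Before you enable the cluster-wide proxy, you might want to confirm that the proxy endpoint is reachable from your environment. The following is only a rough spot check from a bastion host, using placeholder proxy credentials and address; it does not validate the proxy certificate chain or the readinessEndpoints checks that OpenShift Container Platform performs: USD curl -I -x http://<username>:<pswd>@<proxy_host>:<port> https://quay.io If the command returns an HTTP status line instead of a connection error, the proxy can forward HTTPS traffic from that host.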
Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude proxying. Note Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 29.3. Removing the cluster-wide proxy The cluster Proxy object cannot be deleted. To remove the proxy from a cluster, remove all spec fields from the Proxy object. 
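For example, as an alternative to the interactive edit in the following procedure, you can clear the fields with a single merge patch. This is only a sketch, not the documented procedure; adjust the fields to match what is actually set in your Proxy object: USD oc patch proxy/cluster --type=merge --patch '{"spec": {"httpProxy": "", "httpsProxy": "", "noProxy": "", "readinessEndpoints": [], "trustedCA": {"name": ""}}}'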
Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Use the oc edit command to modify the proxy: USD oc edit proxy/cluster Remove all spec fields from the Proxy object. For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {} Save the file to apply the changes. 29.4. Verifying the cluster-wide proxy configuration After the cluster-wide proxy configuration is deployed, you can verify that it is working as expected. Follow these steps to check the logs and validate the implementation. Prerequisites You have cluster administrator permissions. You have the OpenShift Container Platform oc CLI tool installed. Procedure Check the proxy configuration status using the oc command: USD oc get proxy/cluster -o yaml Verify the proxy fields in the output to ensure they match your configuration. Specifically, check the spec.httpProxy , spec.httpsProxy , spec.noProxy , and spec.trustedCA fields. Inspect the status of the Proxy object: USD oc get proxy/cluster -o jsonpath='{.status}' Example output { status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com } Check the logs of the Machine Config Operator (MCO) to ensure that the configuration changes were applied successfully: USD oc logs -n openshift-machine-config-operator USD(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name) Look for messages that indicate the proxy settings were applied and the nodes were rebooted if necessary. Verify that system components are using the proxy by checking the logs of a component that makes external requests, such as the Cluster Version Operator (CVO): USD oc logs -n openshift-cluster-version USD(oc get pods -n openshift-cluster-version -l k8s-app=cluster-version-operator -o name) Look for log entries that show that external requests have been routed through the proxy. Additional resources Configuring the cluster network range Understanding the CA Bundle certificate Proxy certificates How is the cluster-wide proxy setting applied to OpenShift Container Platform nodes? | [
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 network type: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {}",
"oc get proxy/cluster -o yaml",
"oc get proxy/cluster -o jsonpath='{.status}'",
"{ status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com }",
"oc logs -n openshift-machine-config-operator USD(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name)",
"oc logs -n openshift-cluster-version USD(oc get pods -n openshift-cluster-version -l k8s-app=machine-config-operator -o name)"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/enable-cluster-wide-proxy |
Chapter 53. CruiseControlSpec schema reference | Chapter 53. CruiseControlSpec schema reference Used in: KafkaSpec Full list of CruiseControlSpec schema properties Configures a Cruise Control cluster. Configuration options relate to: Goals configuration Capacity limits for resource distribution goals 53.1. config Use the config properties to configure Cruise Control options as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Cruise Control documentation . However, AMQ Streams takes care of configuring and managing options related to the following, which cannot be changed: Security (encryption, authentication, and authorization) Connection to the Kafka cluster Client ID configuration ZooKeeper connectivity Web server configuration Self healing Properties with the following prefixes cannot be set: bootstrap.servers capacity.config.file client.id failed.brokers.zk.path kafka.broker.failure.detection.enable metric.reporter.sampler.bootstrap.servers network. request.reason.required security. self.healing. ssl. topic.config.provider.class two.step. webserver.accesslog. webserver.api.urlprefix webserver.http. webserver.session.path zookeeper. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Cruise Control, including the following exceptions to the options configured by AMQ Streams: Any ssl configuration for supported TLS versions and cipher suites Configuration for webserver properties to enable Cross-Origin Resource Sharing (CORS) Example Cruise Control configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true webserver.http.cors.origin: "*" webserver.http.cors.exposeheaders: "User-Task-ID,Content-Type" # ... 53.2. Cross-Origin Resource Sharing (CORS) Cross-Origin Resource Sharing (CORS) is a HTTP mechanism for controlling access to REST APIs. Restrictions can be on access methods or originating URLs of client applications. You can enable CORS with Cruise Control using the webserver.http.cors.enabled property in the config . When enabled, CORS permits read access to the Cruise Control REST API from applications that have different originating URLs than AMQ Streams. This allows applications from specified origins to use GET requests to fetch information about the Kafka cluster through the Cruise Control API. For example, applications can fetch information on the current cluster load or the most recent optimization proposal. POST requests are not permitted. Note For more information on using CORS with Cruise Control, see REST APIs in the Cruise Control Wiki . Enabling CORS for Cruise Control You enable and configure CORS in Kafka.spec.cruiseControl.config . apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... 
config: webserver.http.cors.enabled: true 1 webserver.http.cors.origin: "*" 2 webserver.http.cors.exposeheaders: "User-Task-ID,Content-Type" 3 # ... 1 Enables CORS. 2 Specifies permitted origins for the Access-Control-Allow-Origin HTTP response header. You can use a wildcard or specify a single origin as a URL. If you use a wildcard, a response is returned following requests from any origin. 3 Exposes specified header names for the Access-Control-Expose-Headers HTTP response header. Applications in permitted origins can read responses with the specified headers. 53.3. Cruise Control REST API security The Cruise Control REST API is secured with HTTP Basic authentication and SSL to protect the cluster against potentially destructive Cruise Control operations, such as decommissioning Kafka brokers. We recommend that Cruise Control in AMQ Streams is only used with these settings enabled . However, it is possible to disable these settings by specifying the following Cruise Control configuration: To disable the built-in HTTP Basic authentication, set webserver.security.enable to false . To disable the built-in SSL, set webserver.ssl.enable to false . Cruise Control configuration to disable API authorization, authentication, and SSL apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: config: webserver.security.enable: false webserver.ssl.enable: false # ... 53.4. brokerCapacity Cruise Control uses capacity limits to determine if optimization goals for resource capacity limits are being broken. There are four goals of this type: DiskCapacityGoal - Disk utilization capacity CpuCapacityGoal - CPU utilization capacity NetworkInboundCapacityGoal - Network inbound utilization capacity NetworkOutboundCapacityGoal - Network outbound utilization capacity You specify capacity limits for Kafka broker resources in the brokerCapacity property in Kafka.spec.cruiseControl . They are enabled by default and you can change their default values. Capacity limits can be set for the following broker resources: cpu - CPU resource in millicores or CPU cores (Default: 1) inboundNetwork - Inbound network throughput in byte units per second (Default: 10000KiB/s) outboundNetwork - Outbound network throughput in byte units per second (Default: 10000KiB/s) For network throughput, use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. Note Disk and CPU capacity limits are automatically generated by AMQ Streams, so you do not need to set them. In order to guarantee accurate rebalance proposals when using CPU goals, you can set CPU requests equal to CPU limits in Kafka.spec.kafka.resources . That way, all CPU resources are reserved upfront and are always available. This configuration allows Cruise Control to properly evaluate the CPU utilization when preparing the rebalance proposals based on CPU goals. In cases where you cannot set CPU requests equal to CPU limits in Kafka.spec.kafka.resources , you can set the CPU capacity manually for the same accuracy. Example Cruise Control brokerCapacity configuration using bibyte units apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... brokerCapacity: cpu: "2" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s # ... 53.5. Capacity overrides Brokers might be running on nodes with heterogeneous network or CPU resources. 
If that's the case, specify overrides that set the network capacity and CPU limits for each broker. The overrides ensure an accurate rebalance between the brokers. Override capacity limits can be set for the following broker resources: cpu - CPU resource in millicores or CPU cores (Default: 1) inboundNetwork - Inbound network throughput in byte units per second (Default: 10000KiB/s) outboundNetwork - Outbound network throughput in byte units per second (Default: 10000KiB/s) An example of Cruise Control capacity overrides configuration using bibyte units apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: # ... brokerCapacity: cpu: "1" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s overrides: - brokers: [0] cpu: "2.755" inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] cpu: 3000m inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s For more information, refer to the BrokerCapacity schema reference . 53.6. logging Cruise Control has its own configurable logger: rootLogger.level Cruise Control uses the Apache log4j2 logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: cruiseControl: # ... logging: type: inline loggers: rootLogger.level: INFO logger.exec.name: com.linkedin.kafka.cruisecontrol.executor.Executor 1 logger.exec.level: TRACE 2 logger.go.name: com.linkedin.kafka.cruisecontrol.analyzer.GoalOptimizer 3 logger.go.level: DEBUG 4 # ... 1 Creates a logger for the Cruise Control Executor class. 2 Sets the logging level for the Executor class. 3 Creates a logger for the Cruise Control GoalOptimizer class. 4 Sets the logging level for the GoalOptimizer class. Note When investigating an issue with Cruise Control, it's usually sufficient to change the rootLogger to DEBUG to get more detailed logs. However, keep in mind that setting the log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: cruiseControl: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: cruise-control-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 53.7. CruiseControlSpec schema properties Property Description image The docker image for the pods. 
string tlsSidecar The tlsSidecar property has been deprecated. TLS sidecar configuration. TlsSidecar resources CPU and memory resources to reserve for the Cruise Control container. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking for the Cruise Control container. Probe readinessProbe Pod readiness checking for the Cruise Control container. Probe jvmOptions JVM Options for the Cruise Control container. JvmOptions logging Logging configuration (Log4j 2) for Cruise Control. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging template Template to specify how Cruise Control resources, Deployments and Pods , are generated. CruiseControlTemplate brokerCapacity The Cruise Control brokerCapacity configuration. BrokerCapacity config The Cruise Control configuration. For a full list of configuration options refer to https://github.com/linkedin/cruise-control/wiki/Configurations . Note that properties with the following prefixes cannot be set: bootstrap.servers, client.id, zookeeper., network., security., failed.brokers.zk.path,webserver.http., webserver.api.urlprefix, webserver.session.path, webserver.accesslog., two.step., request.reason.required,metric.reporter.sampler.bootstrap.servers, capacity.config.file, self.healing., ssl., kafka.broker.failure.detection.enable, topic.config.provider.class (with the exception of: ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, webserver.http.cors.enabled, webserver.http.cors.origin, webserver.http.cors.exposeheaders, webserver.security.enable, webserver.ssl.enable). map metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true webserver.http.cors.origin: \"*\" webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # config: webserver.http.cors.enabled: true 1 webserver.http.cors.origin: \"*\" 2 webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: config: webserver.security.enable: false webserver.ssl.enable: false",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # brokerCapacity: cpu: \"2\" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # brokerCapacity: cpu: \"1\" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s overrides: - brokers: [0] cpu: \"2.755\" inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] cpu: 3000m inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: cruiseControl: # logging: type: inline loggers: rootLogger.level: INFO logger.exec.name: com.linkedin.kafka.cruisecontrol.executor.Executor 1 logger.exec.level: TRACE 2 logger.go.name: com.linkedin.kafka.cruisecontrol.analyzer.GoalOptimizer 3 logger.go.level: DEBUG 4 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: cruiseControl: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: cruise-control-log4j.properties #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-cruisecontrolspec-reference |
Chapter 5. Using Jobs and DaemonSets | Chapter 5. Using Jobs and DaemonSets 5.1. Running background tasks on nodes automatically with daemon sets As an administrator, you can create and use daemon sets to run replicas of a pod on specific or all nodes in an OpenShift Container Platform cluster. A daemon set ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to the cluster. As nodes are removed from the cluster, those pods are removed through garbage collection. Deleting a daemon set will clean up the pods it created. You can use daemon sets to create shared storage, run a logging pod on every node in your cluster, or deploy a monitoring agent on every node. For security reasons, the cluster administrators and the project administrators can create daemon sets. For more information on daemon sets, see the Kubernetes documentation . Important Daemon set scheduling is incompatible with project's default node selector. If you fail to disable it, the daemon set gets restricted by merging with the default node selector. This results in frequent pod recreates on the nodes that got unselected by the merged node selector, which in turn puts unwanted load on the cluster. 5.1.1. Scheduled by default scheduler A daemon set ensures that all eligible nodes run a copy of a pod. Normally, the node that a pod runs on is selected by the Kubernetes scheduler. However, daemon set pods are created and scheduled by the daemon set controller. That introduces the following issues: Inconsistent pod behavior: Normal pods waiting to be scheduled are created and in Pending state, but daemon set pods are not created in Pending state. This is confusing to the user. Pod preemption is handled by default scheduler. When preemption is enabled, the daemon set controller will make scheduling decisions without considering pod priority and preemption. The ScheduleDaemonSetPods feature, enabled by default in OpenShift Container Platform, lets you schedule daemon sets using the default scheduler instead of the daemon set controller, by adding the NodeAffinity term to the daemon set pods, instead of the spec.nodeName term. The default scheduler is then used to bind the pod to the target host. If node affinity of the daemon set pod already exists, it is replaced. The daemon set controller only performs these operations when creating or modifying daemon set pods, and no changes are made to the spec.template of the daemon set. kind: Pod apiVersion: v1 metadata: name: hello-node-6fbccf8d9-9tmzr #... spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name #... In addition, a node.kubernetes.io/unschedulable:NoSchedule toleration is added automatically to daemon set pods. The default scheduler ignores unschedulable Nodes when scheduling daemon set pods. 5.1.2. Creating daemonsets When creating daemon sets, the nodeSelector field is used to indicate the nodes on which the daemon set should deploy replicas. 
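For example, the daemon set created in the following procedure selects nodes that carry the role: worker label. If a node does not already have a matching label, you can add one with a command such as the following, where the node name is a placeholder: USD oc label node <node_name> role=worker Nodes without a matching label are ignored by the daemon set until the label is added.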
Prerequisites Before you start using daemon sets, disable the default project-wide node selector in your namespace, by setting the namespace annotation openshift.io/node-selector to an empty string: USD oc patch namespace myproject -p \ '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}' Tip You can alternatively apply the following YAML to disable the default project-wide node selector for a namespace: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' #... If you are creating a new project, overwrite the default node selector: USD oc adm new-project <name> --node-selector="" Procedure To create a daemon set: Define the daemon set yaml file: apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 #... 1 The label selector that determines which pods belong to the daemon set. 2 The pod template's label selector. Must match the label selector above. 3 The node selector that determines on which nodes pod replicas should be deployed. A matching label must be present on the node. Create the daemon set object: USD oc create -f daemonset.yaml To verify that the pods were created, and that each node has a pod replica: Find the daemonset pods: USD oc get pods Example output hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m View the pods to verify the pod has been placed onto the node: USD oc describe pod/hello-daemonset-cx6md|grep Node Example output Node: openshift-node01.hostname.com/10.14.20.134 USD oc describe pod/hello-daemonset-e3md9|grep Node Example output Node: openshift-node02.hostname.com/10.14.20.137 Important If you update a daemon set pod template, the existing pod replicas are not affected. If you delete a daemon set and then create a new daemon set with a different template but the same label selector, it recognizes any existing pod replicas as having matching labels and thus does not update them or create new replicas despite a mismatch in the pod template. If you change node labels, the daemon set adds pods to nodes that match the new labels and deletes pods from nodes that do not match the new labels. To update a daemon set, force new pod replicas to be created by deleting the old replicas or nodes. 5.2. Running tasks in pods using jobs A job executes a task in your OpenShift Container Platform cluster. A job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a job will clean up any pod replicas it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. Sample Job specification apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 6 #... 1 The pod replicas a job should run in parallel. 2 Successful pod completions are needed to mark a job completed. 3 The maximum duration the job can run. 4 The number of retries for a job. 
5 The template for the pod the controller creates. 6 The restart policy of the pod. Additional resources Jobs in the Kubernetes documentation 5.2.1. Understanding jobs and cron jobs A job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a job cleans up any pods it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. There are two possible resource types that allow creating run-once objects in OpenShift Container Platform: Job A regular job is a run-once object that creates a task and ensures the job finishes. There are three main types of task suitable to run as a job: Non-parallel jobs: A job that starts only one pod, unless the pod fails. The job is complete as soon as its pod terminates successfully. Parallel jobs with a fixed completion count: a job that starts multiple pods. The job represents the overall task and is complete when there is one successful pod for each value in the range 1 to the completions value. Parallel jobs with a work queue: A job with multiple parallel worker processes in a given pod. OpenShift Container Platform coordinates pods to determine what each should work on or use an external queue service. Each pod is independently capable of determining whether or not all peer pods are complete and that the entire job is done. When any pod from the job terminates with success, no new pods are created. When at least one pod has terminated with success and all pods are terminated, the job is successfully completed. When any pod has exited with success, no other pod should be doing any work for this task or writing any output. Pods should all be in the process of exiting. For more information about how to make use of the different types of job, see Job Patterns in the Kubernetes documentation. Cron job A job can be scheduled to run multiple times, using a cron job. A cron job builds on a regular job by allowing you to specify how the job should be run. Cron jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. Cron jobs are useful for creating periodic and recurring tasks, like running backups or sending emails. Cron jobs can also schedule individual tasks for a specific time, such as if you want to schedule a job for a low activity period. A cron job creates a Job object based on the timezone configured on the control plane node that runs the cronjob controller. Warning A cron job creates a Job object approximately once per execution time of its schedule, but there are circumstances in which it fails to create a job or two jobs might be created. Therefore, jobs must be idempotent and you must configure history limits. 5.2.1.1. Understanding how to create jobs Both resource types require a job configuration that consists of the following key parts: A pod template, which describes the pod that OpenShift Container Platform creates. The parallelism parameter, which specifies how many pods running in parallel at any point in time should execute a job. For non-parallel jobs, leave unset. When unset, defaults to 1 . The completions parameter, specifying how many successful pod completions are needed to finish a job. For non-parallel jobs, leave unset. When unset, defaults to 1 . For parallel jobs with a fixed completion count, specify a value. For parallel jobs with a work queue, leave unset. When unset defaults to the parallelism value. 5.2.1.2. 
Understanding how to set a maximum duration for jobs When defining a job, you can define its maximum duration by setting the activeDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced. The maximum duration is counted from the time when the first pod gets scheduled in the system, and defines how long a job can be active. It tracks the overall time of an execution. After reaching the specified timeout, the job is terminated by OpenShift Container Platform. 5.2.1.3. Understanding how to set a job back off policy for pod failure A job can be considered failed after a set number of retries, due to a logical error in configuration or other similar reasons. Failed pods associated with the job are recreated by the controller with an exponential back off delay ( 10s , 20s , 40s ...) capped at six minutes. The limit is reset if no new failed pods appear between controller checks. Use the spec.backoffLimit parameter to set the number of retries for a job. 5.2.1.4. Understanding how to configure a cron job to remove artifacts Cron jobs can leave behind artifact resources such as jobs or pods. As a user, it is important to configure history limits so that old jobs and their pods are properly cleaned up. There are two fields within the cron job's spec responsible for that: .spec.successfulJobsHistoryLimit . The number of successful finished jobs to retain (defaults to 3). .spec.failedJobsHistoryLimit . The number of failed finished jobs to retain (defaults to 1). Tip Delete cron jobs that you no longer need: USD oc delete cronjob/<cron_job_name> Doing this prevents them from generating unnecessary artifacts. You can suspend further executions by setting spec.suspend to true . All subsequent executions are suspended until you reset it to false . 5.2.1.5. Known limitations The job specification restart policy only applies to the pods , and not the job controller . However, the job controller is hard-coded to keep retrying jobs to completion. As such, restartPolicy: Never or --restart=Never results in the same behavior as restartPolicy: OnFailure or --restart=OnFailure . That is, when a job fails it is restarted automatically until it succeeds (or is manually discarded). The policy only sets which subsystem performs the restart. With the Never policy, the job controller performs the restart. With each attempt, the job controller increments the number of failures in the job status and creates new pods. This means that with each failed attempt, the number of pods increases. With the OnFailure policy, kubelet performs the restart. Each attempt does not increment the number of failures in the job status. In addition, kubelet retries failed jobs by starting pods on the same nodes. 5.2.2. Creating jobs You create a job in OpenShift Container Platform by creating a job object. Procedure To create a job: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 6 #... 1 Optional: Specify how many pod replicas a job should run in parallel; defaults to 1 . For non-parallel jobs, leave unset. When unset, defaults to 1 . 2 Optional: Specify how many successful pod completions are needed to mark a job completed. For non-parallel jobs, leave unset. When unset, defaults to 1 .
For parallel jobs with a fixed completion count, specify the number of completions. For parallel jobs with a work queue, leave unset. When unset, it defaults to the parallelism value. 3 Optional: Specify the maximum duration the job can run. 4 Optional: Specify the number of retries for a job. This field defaults to six. 5 Specify the template for the pod the controller creates. 6 Specify the restart policy of the pod: Never . Do not restart the job. OnFailure . Restart the job only if it fails. Always . Always restart the job. For details on how OpenShift Container Platform uses restart policy with failed containers, see the Example States in the Kubernetes documentation. Create the job: USD oc create -f <file-name>.yaml Note You can also create and launch a job from a single command using oc create job . The following command creates and launches a job similar to the one specified in the example: USD oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)' 5.2.3. Creating cron jobs You create a cron job in OpenShift Container Platform by creating a cron job object. Procedure To create a cron job: Create a YAML file similar to the following: apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: "*/1 * * * *" 1 concurrencyPolicy: "Replace" 2 startingDeadlineSeconds: 200 3 suspend: true 4 successfulJobsHistoryLimit: 3 5 failedJobsHistoryLimit: 1 6 jobTemplate: 7 spec: template: metadata: labels: 8 parent: "cronjobpi" spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 9 #... 1 Schedule for the job specified in cron format . In this example, the job will run every minute. 2 An optional concurrency policy, specifying how to treat concurrent jobs within a cron job. Only one of the following concurrency policies may be specified. If not specified, this defaults to allowing concurrent executions. Allow allows cron jobs to run concurrently. Forbid forbids concurrent runs, skipping the next run if the previous run has not finished yet. Replace cancels the currently running job and replaces it with a new one. 3 An optional deadline (in seconds) for starting the job if it misses its scheduled time for any reason. Missed job executions will be counted as failed ones. If not specified, there is no deadline. 4 An optional flag allowing the suspension of a cron job. If set to true , all subsequent executions will be suspended. 5 The number of successful finished jobs to retain (defaults to 3). 6 The number of failed finished jobs to retain (defaults to 1). 7 Job template. This is similar to the job example. 8 Sets a label for jobs spawned by this cron job. 9 The restart policy of the pod. This does not apply to the job controller. Note The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish. Create the cron job: USD oc create -f <file-name>.yaml Note You can also create and launch a cron job from a single command using oc create cronjob . The following command creates and launches a cron job similar to the one specified in the example: USD oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)' With oc create cronjob , the --schedule option accepts schedules in cron format . | [
"kind: Pod apiVersion: v1 metadata: name: hello-node-6fbccf8d9-9tmzr # spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name #",
"oc patch namespace myproject -p '{\"metadata\": {\"annotations\": {\"openshift.io/node-selector\": \"\"}}}'",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' #",
"oc adm new-project <name> --node-selector=\"\"",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 #",
"oc create -f daemonset.yaml",
"oc get pods",
"hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m",
"oc describe pod/hello-daemonset-cx6md|grep Node",
"Node: openshift-node01.hostname.com/10.14.20.134",
"oc describe pod/hello-daemonset-e3md9|grep Node",
"Node: openshift-node02.hostname.com/10.14.20.137",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #",
"oc delete cronjob/<cron_job_name>",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #",
"oc create -f <file-name>.yaml",
"oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'",
"apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: \"*/1 * * * *\" 1 concurrencyPolicy: \"Replace\" 2 startingDeadlineSeconds: 200 3 suspend: true 4 successfulJobsHistoryLimit: 3 5 failedJobsHistoryLimit: 1 6 jobTemplate: 7 spec: template: metadata: labels: 8 parent: \"cronjobpi\" spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 9 #",
"oc create -f <file-name>.yaml",
"oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/nodes/using-jobs-and-daemonsets |
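The following sketch is not part of the original procedure; it shows one way to act on the guidance above about updating a daemon set and cleaning up the run-once example objects. The names hello-daemonset, pi, and daemonset.yaml come from the examples in this section; everything else is an assumption.

# Apply an edited pod template, then force new replicas by deleting the old ones;
# the daemon set controller recreates a pod on every node that matches the node selector.
oc apply -f daemonset.yaml
oc delete pods -l name=hello-daemonset

# Remove the example objects when you are done experimenting.
oc delete daemonset hello-daemonset
oc delete job pi
oc delete cronjob pi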
Chapter 5. Security considerations | Chapter 5. Security considerations 5.1. FIPS-140-2 The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard defining a set of security requirements for the use of cryptographic modules. This standard is mandated by law for US government agencies and contractors and is also referenced in other international and industry specific standards. Red Hat OpenShift Data Foundation is now using FIPS validated cryptographic modules as delivered by Red Hat Enterprise Linux OS/CoreOS (RHCOS). The cryptography modules are currently being processed by Cryptographic Module Validation Program (CMVP) and their state can be seen at Modules in Process List . For more up-to-date information, see the knowledge base article . Note FIPS mode must be enabled on the OpenShift Container Platform, prior to installing OpenShift Data Foundation. OpenShift Container Platform must run on RHCOS nodes, as OpenShift Data Foundation deployment on RHEL 7 is not supported for this feature. For more information, see installing a cluster in FIPS mode and support for FIPS cryptography . 5.2. Proxy environment A proxy environment is a production environment that denies direct access to the internet and provides an available HTTP or HTTPS proxy instead. Red Hat Openshift Container Platform is configured to use a proxy by modifying the proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. Red Hat supports deployment of OpenShift Data Foundation in proxy environments when OpenShift Container Platform has been configured according to configuring the cluster-wide proxy . 5.3. Data encryption options Encryption lets you encode your data to make it impossible to read without the required encryption keys. This mechanism protects the confidentiality of your data in the event of a physical security breach that results in a physical media to escape your custody. The per-PV encryption also provides access protection from other namespaces inside the same OpenShift Container Platform cluster. Data is encrypted when it is written to the disk, and decrypted when it is read from the disk. Working with encrypted data might incur a small penalty to performance. Encryption is only supported for new clusters deployed using Red Hat OpenShift Data Foundation 4.6 or higher. An existing encrypted cluster that is not using an external Key Management System (KMS) cannot be migrated to use an external KMS. Currently, HashiCorp Vault is the only supported KMS for Cluster-wide and Persistent Volume encryptions. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault Key/Value (KV) secret engine API, version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. Important KMS is required for Persistent Volume (PV) encryption, and is optional for cluster-wide encryption. To start with, Storage class encryption requires a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . 5.3.1. Cluster-wide encryption Red Hat OpenShift Data Foundation supports cluster-wide encryption (encryption-at-rest) for all the disks and Multicloud Object Gateway operations in the storage cluster. 
OpenShift Data Foundation uses Linux Unified Key System (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher where each device has a different encryption key. The keys are stored using a Kubernetes secret or an external KMS. Both methods are mutually exclusive and you can not migrate between methods. Encryption is disabled by default. You can enable encryption for the cluster at the time of deployment. See the deployment guides for more information. Cluster wide encryption is supported in OpenShift Data Foundation 4.6 without Key Management System (KMS), while starting with OpenShift Data Foundation 4.7, it supports with and without KMS . Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Currently, HashiCorp Vault is the only supported KMS. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault KV secret engine, API version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.2. Storage class encryption You can encrypt persistent volumes (block only) with storage class encryption using an external Key Management System (KMS) to store device encryption keys. Persistent volume encryption is only available for RADOS Block Device (RBD) persistent volumes. See how to create a storage class with persistent volume encryption . Storage class encryption is supported in OpenShift Data Foundation 4.7 or higher. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/planning_your_deployment/security-considerations_rhodf |
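As a quick, hedged illustration of the FIPS requirement above (not taken from the original text), you can confirm that a RHCOS node is already running in FIPS mode before deploying OpenShift Data Foundation; <node_name> is a placeholder for one of your node names.

oc debug node/<node_name> -- chroot /host cat /proc/sys/crypto/fips_enabled
# An output of 1 means the node kernel is running in FIPS mode; 0 means it is not.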
Chapter 9. Scanning pod images with the Container Security Operator | Chapter 9. Scanning pod images with the Container Security Operator The Container Security Operator (CSO) is an add-on for the Clair security scanner available on OpenShift Container Platform and other Kubernetes platforms. With the CSO, users can scan container images associated with active pods for known vulnerabilities. Note The CSO does not work without Red Hat Quay and Clair. The Container Security Operator (CSO) includes the following features: Watches containers associated with pods in either specified or all namespaces. Queries the container registry where the containers came from for vulnerability information, provided that an image's registry supports image scanning, such as a Red Hat Quay registry with Clair scanning. Exposes vulnerabilities through the ImageManifestVuln object in the Kubernetes API. Note To see instructions on installing the CSO on Kubernetes, select the Install button on the Container Security page of OperatorHub.io. 9.1. Downloading and running the Container Security Operator in OpenShift Container Platform Use the following procedure to download the Container Security Operator (CSO). Note In the following procedure, the CSO is installed in the marketplace-operators namespace. This allows the CSO to be used in all namespaces of your OpenShift Container Platform cluster. Procedure On the OpenShift Container Platform console page, select Operators OperatorHub and search for Container Security Operator . Select the Container Security Operator, then select Install to go to the Create Operator Subscription page. Check the settings (all namespaces and automatic approval strategy, by default), and select Subscribe . The Container Security Operator appears after a few moments on the Installed Operators screen. Optional: You can add custom certificates to the CSO. In this example, create a certificate named quay.crt in the current directory. Then, run the following command to add the certificate to the CSO: USD oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators Note You must restart the Operator pod for the new certificates to take effect. Navigate to Home Dashboards . A link to Image Security appears under the status section, with a listing of the number of vulnerabilities found so far. Select the link to see a security breakdown, as shown in the following image: Important The Container Security Operator currently provides broken links for Red Hat Security advisories. For example, the following link might be provided: https://access.redhat.com/errata/RHSA-2023:1842%20https://access.redhat.com/security/cve/CVE-2023-23916 . The %20 in the URL represents a space character; however, it currently results in the two URLs being combined into one incomplete URL, for example, https://access.redhat.com/errata/RHSA-2023:1842 and https://access.redhat.com/security/cve/CVE-2023-23916 . As a temporary workaround, you can copy each URL into your browser to navigate to the proper page. This is a known issue and will be fixed in a future version of Red Hat Quay. You can do one of two things at this point to follow up on any detected vulnerabilities: Select the link to the vulnerability. You are taken to the container registry, Red Hat Quay or other registry where the container came from, where you can see information about the vulnerability.
The following figure shows an example of detected vulnerabilities from a Quay.io registry: Select the namespaces link to go to the ImageManifestVuln screen, where you can see the name of the selected image and all namespaces where that image is running. The following figure indicates that a particular vulnerable image is running in two namespaces: After executing this procedure, you are made aware of what images are vulnerable, what you must do to fix those vulnerabilities, and every namespace that the image was run in. Knowing this, you can perform the following actions: Alert users who are running the image that they need to correct the vulnerability. Stop the images from running by deleting the deployment or the object that started the pod that the image is in. Note If you delete the pod, it might take a few minutes for the vulnerability to reset on the dashboard. 9.2. Query image vulnerabilities from the CLI Use the following procedure to query image vulnerabilities from the command line interface (CLI). Procedure Enter the following command to query for detected vulnerabilities: USD oc get vuln --all-namespaces Example output NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s Optional. To display details for a particular vulnerability, identify a specific vulnerability and its namespace, and use the oc describe command. The following example shows an active container whose image includes an RPM package with a vulnerability: USD oc describe vuln --namespace mynamespace sha256.ac50e3752... Example output Name: sha256.ac50e3752... Namespace: quay-enterprise ... Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries... | [
"oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators",
"oc get vuln --all-namespaces",
"NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s",
"oc describe vuln --namespace mynamespace sha256.ac50e3752",
"Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_operator_features/container-security-operator-setup |
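As a small, assumed supplement to the procedure above: the vuln alias used in the commands resolves to the imagemanifestvuln resource created by the Operator, so the full resource name can be used in the same way, and -o yaml exposes the complete status the Operator records, including the affected pods. Replace <name> and <namespace> with values from your own cluster.

oc get imagemanifestvuln --all-namespaces
oc get imagemanifestvuln <name> -n <namespace> -o yaml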
Metrics Store User Guide | Metrics Store User Guide Red Hat Virtualization 4.3 Using Metrics Store with Red Hat Virtualization Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract A comprehensive guide to understanding the metrics and logs collected by Metrics Store. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_user_guide/index |
8.7. Backup and Restoration of XFS File Systems | 8.7. Backup and Restoration of XFS File Systems XFS file system backup and restoration involves two utilities: xfsdump and xfsrestore . To backup or dump an XFS file system, use the xfsdump utility. Red Hat Enterprise Linux 6 supports backups to tape drives or regular file images, and also allows multiple dumps to be written to the same tape. The xfsdump utility also allows a dump to span multiple tapes, although only one dump can be written to a regular file. In addition, xfsdump supports incremental backups, and can exclude files from a backup using size, subtree, or inode flags to filter them. In order to support incremental backups, xfsdump uses dump levels to determine a base dump to which a specific dump is relative. The -l option specifies a dump level ( 0-9 ). To perform a full backup, perform a level 0 dump on the file system (that is, /path/to/filesystem ), as in: Note The -f option specifies a destination for a backup. For example, the /dev/st0 destination is normally used for tape drives. An xfsdump destination can be a tape drive, regular file, or remote tape device. In contrast, an incremental backup will only dump files that changed since the last level 0 dump. A level 1 dump is the first incremental dump after a full dump; the incremental dump would be level 2 , and so on, to a maximum of level 9 . So, to perform a level 1 dump to a tape drive: Conversely, the xfsrestore utility restores file systems from dumps produced by xfsdump . The xfsrestore utility has two modes: a default simple mode, and a cumulative mode. Specific dumps are identified by session ID or session label . As such, restoring a dump requires its corresponding session ID or label. To display the session ID and labels of all dumps (both full and incremental), use the -I option: This will provide output similar to the following: Example 8.4. Session ID and labels of all dumps Simple Mode for xfsrestore The simple mode allows users to restore an entire file system from a level 0 dump. After identifying a level 0 dump's session ID (that is, session-ID ), restore it fully to /path/to/destination using: Note The -f option specifies the location of the dump, while the -S or -L option specifies which specific dump to restore. The -S option is used to specify a session ID, while the -L option is used for session labels. The -I option displays both session labels and IDs for each dump. Cumulative Mode for xfsrestore The cumulative mode of xfsrestore allows file system restoration from a specific incremental backup, for example, level 1 to level 9 . To restore a file system from an incremental backup, simply add the -r option: Interactive Operation The xfsrestore utility also allows specific files from a dump to be extracted, added, or deleted. To use xfsrestore interactively, use the -i option, as in: xfsrestore -f /dev/st0 -i The interactive dialogue will begin after xfsrestore finishes reading the specified device. Available commands in this dialogue include cd , ls , add , delete , and extract ; for a complete list of commands, use help . For more information about dumping and restoring XFS file systems, refer to man xfsdump and man xfsrestore . | [
"xfsdump -l 0 -f /dev/ device /path/to/filesystem",
"xfsdump -l 1 -f /dev/st0 /path/to/filesystem",
"xfsrestore -I",
"file system 0: fs id: 45e9af35-efd2-4244-87bc-4762e476cbab session 0: mount point: bear-05:/mnt/test device: bear-05:/dev/sdb2 time: Fri Feb 26 16:55:21 2010 session label: \"my_dump_session_label\" session id: b74a3586-e52e-4a4a-8775-c3334fa8ea2c level: 0 resumed: NO subtree: NO streams: 1 stream 0: pathname: /mnt/test2/backup start: ino 0 offset 0 end: ino 1 offset 0 interrupted: NO media files: 1 media file 0: mfile index: 0 mfile type: data mfile size: 21016 mfile start: ino 0 offset 0 mfile end: ino 1 offset 0 media label: \"my_dump_media_label\" media id: 4a518062-2a8f-4f17-81fd-bb1eb2e3cb4f xfsrestore: Restore Status: SUCCESS",
"xfsrestore -f /dev/st0 -S session-ID /path/to/destination",
"xfsrestore -f /dev/st0 -S session-ID -r /path/to/destination"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/xfsbackuprestore |
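As an illustrative sketch only (the paths and labels are assumptions, not from the original text), the following combines the options above into a simple full-plus-incremental backup of /mnt/data to regular files, followed by a cumulative restore:

# Level 0 (full) dump, then a level 1 (incremental) dump, both written to files.
xfsdump -l 0 -L "weekly_full" -M "backup_media" -f /backup/data.level0 /mnt/data
xfsdump -l 1 -L "daily_incr" -M "backup_media" -f /backup/data.level1 /mnt/data

# Restore the full dump first, then apply the incremental dump in cumulative mode.
xfsrestore -f /backup/data.level0 /path/to/destination
xfsrestore -f /backup/data.level1 -r /path/to/destination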
Using the AMQ JMS Client | Using the AMQ JMS Client Red Hat AMQ 2020.Q4 For Use with AMQ Clients 2.8 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_client/index |
Chapter 8. Managing record sets | Chapter 8. Managing record sets Red Hat OpenStack (RHOSP) DNS service (designate) stores data about zones in record sets. Record sets consist of one or more DNS resource records. You can query a zone to list its record sets in addition to adding, modifying, and deleting them. The topics included in this section are: Section 8.1, "About records and record sets in the DNS service" Section 8.2, "Creating a record set" Section 8.3, "Updating a record set" Section 8.4, "Deleting a record set" 8.1. About records and record sets in the DNS service The Domain Name System (DNS) uses resource records to store zone data within namespaces. DNS records in the Red Hat OpenStack (RHOSP) DNS service (designate) are managed using record sets. Each DNS record contains the following attributes: Name - the string that indicates its location in the DNS namespace. Type - the set of letter codes that identifies how the record is used. For example, A identifies address records and CNAME identifies canonical name records. Class - the set of letter codes that specify the namespace for the record. Typically, this is IN for internet, though other namespaces do exist. TTL - (time to live) the duration, in seconds, that the record remains valid. Rdata - the data for the record, such as an IP address for an A record or another record name for a CNAME record. Each zone namespace must contain a start of authority (SOA) record and can have an authoritative name server (NS) record and a variety of other types of records. The SOA record indicates that this name server is the best source of information about the zone. The NS record identifies the name server that is authoritative for a zone. The SOA and NS records for a zone are readable, but cannot be modified. Besides the required SOA and NS records, three of the most common record types are address (A), canonical name (CNAME), and pointer (PTR) records. A records map hostnames to IP addresses. PTR records map IP addresses to hostnames. CNAME records identify the full hostname for aliases. A record set represents one or more DNS records with the same name and type, but potentially different data. For example, a record set named web.example.com , with a type of A , that contains the data 192.0.2.1 and 192.0.2.2 might reflect two web servers hosting web.example.com located at those two IP addresses. You must create record sets within a zone. If you delete a zone that contains record sets, those record sets within the zone are also deleted. Consider this output obtained by querying the example.com zone with the openstack recordset list -c name -c type -c records example.com command: In this example, the authoritative name server for the example.com. zone is ns1.example.net. , the NS record. To verify this, you can use the BIND dig tool to query the name server for the NS record: You can also verify the A record sets: 8.2. Creating a record set By default, any user can create Red Hat OpenStack Platform DNS service (designate) record sets. Prerequisites Your project must own a zone in which you are creating a record set. Procedure Source your credentials file. Example You create record sets by using the openstack recordset create command. Record sets require a zone, name, type, and data. Example Note The trailing dot ( . ) is required when using fully qualified domain names (FQDN). If you omit the trailing dot, the zone name is duplicated in the resulting record name, for example www.example.com.example.com. . 
In the earlier example, a user has created a zone named example.com. . Because the record set name www is not an FQDN, the DNS service prepends it to the zone name. You can achieve the same result by using the FQDN for the record set name argument: If you want to construct a TXT record set that exceeds the maximum length for a character string (255 characters), then you must split the string into multiple, smaller strings when you create the record set. In this example, a user creates a TXT record set ( _domainkey.example.com ) that contains one string of 410 characters by specifying two strings- each less than the 255 character maximum: You can supply the --record argument multiple times to create multiple records within a record set. A typical use for multiple --record arguments is round-robin DNS. Example Verification Run the list command to verify that the record set you created exists: Example Sample output Additional resources recordset create command in the Command Line Interface Reference recordset list command in the Command Line Interface Reference man page for dig 8.3. Updating a record set By default, any user can update Red Hat OpenStack Platform DNS service (designate) record sets. Prerequisites Your project must own a zone in which you are updating a record set. Procedure Source your credentials file. Example You modify record sets by using the openstack recordset set command. Example In this example, a user is updating the record set web.example.com. to contain two records: Note When updating a record set you can identify it by its ID or its name. If you use its name, you must use the fully qualified domain name (FQDN). Verification Run the list command to confirm your modifications. Example Sample output Additional resources recordset create command in the Command Line Interface Reference recordset list command in the Command Line Interface Reference 8.4. Deleting a record set By default, any user can delete Red Hat OpenStack Platform DNS service (designate) record sets. Prerequisites Your project must own a zone in which you are deleting a record set. Procedure Source your credentials file. Example You delete record sets by using the openstack recordset delete command. Example In this example, a user is deleting the record set web.example.com. from the example.com. zone: Verification Run the list command to confirm your deletions. Example Sample output Additional resources recordset delete command in the Command Line Interface Reference recordset list command in the Command Line Interface Reference | [
"+------------------+------+----------------------------------------------+ | name | type | records | +------------------+------+----------------------------------------------+ | example.com. | SOA | ns1.example.net. admin.example.com. 16200126 | | | | 16 3599 600 8640 0 3600 | | | | | | example.com. | NS | ns1.example.net. | | | | | | web.example.com. | A | 192.0.2.1 | | | | 192.0.2.2 | | | | | | www.example.com. | A | 192.0.2.1 | +------------------+------+----------------------------------------------+",
"dig @ns1.example.net example.com. -t NS +short ns1.example.net.",
"dig @ns1.example.net web.example.com. +short 192.0.2.2 192.0.2.1 dig @ns1.example.net www.example.com. +short 192.0.2.1",
"source ~/overcloudrc",
"openstack recordset create --type A --record 192.0.2.1 example.com. www",
"openstack recordset create --type A --record 192.0.2.1 example.com. www.example.com.",
"openstack recordset create --type TXT --record '\"210 characters string\" \"200 characters string\"' example.com. _domainkey",
"openstack recordset create --type A --record 192.0.2.1 --record 192.0.2.2 example.com. web",
"openstack recordset list -c name -c type -c records example.com.",
"+------------------+------+----------------------------------------------+ | name | type | records | +------------------+------+----------------------------------------------+ | example.com. | SOA | ns1.example.net. admin.example.com 162001261 | | | | 6 3599 600 86400 3600 | | | | | | example.com. | NS | ns1.example.net. | | | | | | web.example.com. | A | 192.0.2.1 192.0.2.2 | | | | | | www.example.com. | A | 192.0.2.1 | +------------------+------+----------------------------------------------+",
"source ~/overcloudrc",
"openstack recordset set example.com. web.example.com. --record 192.0.2.5 --record 192.0.2.6",
"openstack recordset list -c name -c type -c records example.com.",
"+------------------+------+----------------------------------------------+ | name | type | records | +------------------+------+----------------------------------------------+ | example.com. | SOA | ns1.example.net. admin.example.com 162001261 | | | | 6 3599 600 86400 3600 | | | | | | example.com. | NS | ns1.example.net. | | | | | | web.example.com. | A | 192.0.2.5 192.0.2.6 | | | | | | www.example.com. | A | 192.0.2.1 | +------------------+------+----------------------------------------------+",
"source ~/overcloudrc",
"openstack recordset delete example.com. web.example.com.",
"openstack recordset list -c name -c type -c records example.com.",
"+------------------+------+----------------------------------------------+ | name | type | records | +------------------+------+----------------------------------------------+ | example.com. | SOA | ns1.example.net. admin.example.com 162001261 | | | | 6 3599 600 86400 3600 | | | | | | example.com. | NS | ns1.example.net. | | | | | | www.example.com. | A | 192.0.2.1 | +------------------+------+----------------------------------------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_designate_for_dns-as-a-service/manage-record-sets_rhosp-dnsaas |
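One more hedged example building on the zone used above (the IP addresses, TTL value, and record name are illustrative assumptions): create a round-robin A record set with an explicit TTL in seconds, then verify it against the zone's authoritative name server.

openstack recordset create --type A --record 198.51.100.10 --record 198.51.100.11 --ttl 300 example.com. app
dig @ns1.example.net app.example.com. +short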
Chapter 5. The Basics of Managing the IdM Server and Services | Chapter 5. The Basics of Managing the IdM Server and Services This chapter describes the Identity Management command-line and UI tools that are available to manage the IdM server and services, including methods for authenticating to IdM. 5.1. Starting and Stopping the IdM Server A number of different services are installed together with an IdM server, including Directory Server, Certificate Authority (CA), DNS, Kerberos, and others. Use the ipactl utility to stop, start, or restart the entire IdM server along with all the installed services. To start the entire IdM server: To stop the entire IdM server: To restart the entire IdM server: If you only want to stop, start, or restart an individual service, use the systemctl utility, described in the System Administrator's Guide . For example, using systemctl to manage individual services is useful when customizing the Directory Server behavior: the configuration changes require restarting the Directory Server instance, but it is not necessary to restart all the IdM services. Important To restart multiple IdM domain services, Red Hat always recommends to use ipactl . Because of dependencies between the services installed with the IdM server, the order in which they are started and stopped is critical. The ipactl utility ensures that the services are started and stopped in the appropriate order. | [
"ipactl start",
"ipactl stop",
"ipactl restart"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/basic-usage |
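To illustrate the point about restarting an individual service, the following is a sketch with an assumed realm: the Directory Server instance name is normally the Kerberos realm with dots replaced by dashes, for example EXAMPLE-COM for the EXAMPLE.COM realm.

# Check the overall state of the IdM services.
ipactl status

# Restart only the Directory Server instance, leaving the other IdM services running.
systemctl restart dirsrv@EXAMPLE-COM.service
systemctl status dirsrv@EXAMPLE-COM.service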
Chapter 3. Approved Access | Chapter 3. Approved Access Red Hat Site Reliability Engineering (SRE) typically does not require elevated access to systems as part of normal operations to manage and support Red Hat OpenShift Service on AWS clusters. Elevated access gives SRE the access levels of a cluster-admin role. See cluster roles for more information. In the unlikely event that SRE needs elevated access to systems, you can use the Approved Access interface to review and approve or deny access to these systems. Elevated access requests to clusters on Red Hat OpenShift Service on AWS clusters and the corresponding cloud accounts can be created by SRE either in response to a customer-initiated support ticket or in response to alerts received by SRE as part of the standard incident response process. When Approved Access is enabled and an SRE creates an access request, cluster owners receive an email notification informing them of a new access request. The email notification contains a link allowing the cluster owner to quickly approve or deny the access request. You must respond in a timely manner otherwise there is a risk to your SLA for Red Hat OpenShift Service on AWS. If customers require additional users that are not the cluster owner to receive the email, they can add notification cluster contacts . Pending access requests are available in the Hybrid Cloud Console on the clusters list or Access Requests tab on the cluster overview for the specific cluster. Note Denying an access request requires you to complete the Justification field. In this case, SRE can not directly act on the resources related to the incident. Customers can still use the Customer Support to help investigate and resolve any issues. 3.1. Enabling Approved Access for ROSA clusters by submitting a support case Red Hat OpenShift Service on AWS Approved Access is not enabled by default. To enable Approved Access for your Red Hat OpenShift Service on AWS clusters, you should create a support ticket. Procedure Log in to the Customer Support page of the Red Hat Customer Portal. Click Get support . On the Cases tab of the Customer support page: Optional: Change the pre-filled account and owner details if needed. Select the Configuration category and click Continue . Enter the following information: In the Product field, select Red Hat OpenShift Service on AWS . In the Problem statement field, enter Enable ROSA Access Protection . Click See more options . Select OpenShift Cluster ID from the drop-down list. Fill the remaining mandatory fields in the form: What are you experiencing? What are you expecting to happen? Fill with Approved Access . Define the value or impact to you or the business. Fill with Approved Access . Click Continue . Select Severity as 4(Low) and click Continue . Preview the case details and click Submit . 3.2. Reviewing an access request from an email notification Cluster owners will receive an email notification when Red Hat Site Reliability Engineering (SRE) request access to their cluster with a link to review the request in the Hybrid Cloud Console. Procedure Click the link within the email to bring you to the Hybrid Cloud Console. In the Access Request Details dialog, click Approve or Deny under Decision . Note Denying an access request requires you to complete the Justification field. In this case, SRE can not directly act on the resources related to the incident. Customers can still use the Customer Support to help investigate and resolve any issues. Click Save . 3.3. 
Reviewing an access request from the Hybrid Cloud Console Review access requests for your Red Hat OpenShift Service on AWS clusters from the Hybrid Cloud Console. Procedure Navigate to OpenShift Cluster Manager and select Cluster List . Click the cluster name to review the Access Request . Select the Access Requests tab to list all states . Select Open under Actions for the Pending state. In the Access Request Details dialog, click Approve or Deny under Decision . Note Denying an access request requires you to complete the Justification field. In this case, SRE can not directly act on the resources related to the incident. Customers can still use the Customer Support to help investigate and resolve any issues. Click Save . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/support/approved-access |
8.3. arptables_jf | 8.3. arptables_jf 8.3.1. RHBA-2013:0843 - arptables_jf bug fix update Updated arptables_jf packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The arptables_jf utility controls the arpfilter packet filtering code in the Linux kernel. Bug Fixes BZ# 807315 Prior to this update, both the "mangle-hw-s" and "mangle-hw-d" options required the use of the "--arhln" option. However, even if the "--arhln" option was specified on the command line, the "arptables" command did not recognize it. As a consequence, it was not possible to use those two options successfully. These updated packages fix this bug and the "--arhln" option can now be used together with the mangle hardware options. BZ# 963209 When the "-x" command line option (exact values) was used along with the "-L" (List rules) option, the arptables utility did not list rules but issued an error message saying "-x" option is illegal with "-L". With this update, the arptables utility now uses the "-x" option when listing rules. Users of arptables_jf are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/arptables_jf |
Chapter 4. Basic configuration options of Shenandoah garbage collector | Chapter 4. Basic configuration options of Shenandoah garbage collector Shenandoah garbage collector (GC) has the following basic configuration options: -Xlog:gc Print the individual GC timings. -Xlog:gc+ergo Print the heuristics decisions, which might shed light on outliers, if any. -Xlog:gc+stats Print the summary table of Shenandoah internal timings at the end of the run. It is best to run this with logging enabled. This summary table conveys important information about GC performance. Heuristics logs are useful for figuring out GC outliers. -XX:+AlwaysPreTouch Commits heap pages into memory and helps to reduce latency hiccups. -Xms and -Xmx Making the heap non-resizeable with -Xms = -Xmx reduces difficulties with heap management. Along with AlwaysPreTouch , setting -Xms = -Xmx commits all memory on startup, which avoids difficulties when memory is finally used. -Xms also defines the low boundary for memory uncommit, so with -Xms = -Xmx all memory stays committed. If you want to configure Shenandoah for a lower footprint, then setting a lower -Xms is recommended. You need to decide how low to set it to balance the commit/uncommit overhead versus memory footprint. In many cases, you can set -Xms arbitrarily low. -XX:+UseLargePages Enables hugetlbfs Linux support. -XX:+UseTransparentHugePages Enables huge pages transparently. With transparent huge pages, it is recommended to set /sys/kernel/mm/transparent_hugepage/enabled and /sys/kernel/mm/transparent_hugepage/defrag to madvise . When running with AlwaysPreTouch , it will also pay the defragmentation costs upfront at startup. -XX:+UseNUMA While Shenandoah does not support NUMA explicitly yet, it is a good idea to enable NUMA interleaving on multi-socket hosts. Coupled with AlwaysPreTouch , it provides better performance than the default out-of-the-box configuration. -XX:-UseBiasedLocking There is a tradeoff between uncontended (biased) locking throughput and the safepoints the JVM performs to enable and disable biased locking. For latency-oriented workloads, turn biased locking off. -XX:+DisableExplicitGC Invoking System.gc() from user code forces Shenandoah to perform an additional GC cycle. It usually does no harm, as -XX:+ExplicitGCInvokesConcurrent is enabled by default, which means the concurrent GC cycle is invoked, not the STW Full GC. Revised on 2024-05-03 15:37:52 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk_17/shenandoah-gc-basic-configuration
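The following command line is an illustrative combination of the options above for a latency-oriented service; it is not prescribed by the original text. The 4 GB heap size and myapp.jar are placeholders, and -XX:+UseShenandoahGC selects the Shenandoah collector.

java -XX:+UseShenandoahGC \
     -Xms4g -Xmx4g \
     -XX:+AlwaysPreTouch \
     -XX:+UseNUMA \
     -XX:-UseBiasedLocking \
     -XX:+DisableExplicitGC \
     -Xlog:gc \
     -jar myapp.jar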
5.13. Cleaning up Multipath Files on Package Removal | 5.13. Cleaning up Multipath Files on Package Removal If you should have occasion to remove the device-mapper-multipath rpm . file, note that this does not remove the /etc/multipath.conf , /etc/multipath/bindings , and /etc/multipath/wwids files. You may need to remove those files manually on subsequent installations of the device-mapper-multipath package. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/mpath-file-cleanup |
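A minimal sketch of the manual cleanup described above (the backup directory is an assumption; adjust or omit it as needed):

mkdir -p /root/multipath-backup
cp -a /etc/multipath.conf /etc/multipath/bindings /etc/multipath/wwids /root/multipath-backup/ 2>/dev/null
rm -f /etc/multipath.conf /etc/multipath/bindings /etc/multipath/wwids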
Chapter 80. Decision engine in Red Hat Decision Manager | Chapter 80. Decision engine in Red Hat Decision Manager The decision engine is the rules engine in Red Hat Decision Manager. The decision engine stores, processes, and evaluates data to execute the business rules or decision models that you define. The basic function of the decision engine is to match incoming data, or facts , to the conditions of rules and determine whether and how to execute the rules. The decision engine operates using the following basic components: Rules: Business rules or DMN decisions that you define. All rules must contain at a minimum the conditions that trigger the rule and the actions that the rule dictates. Facts: Data that enters or changes in the decision engine that the decision engine matches to rule conditions to execute applicable rules. Production memory: Location where rules are stored in the decision engine. Working memory: Location where facts are stored in the decision engine. Agenda: Location where activated rules are registered and sorted (if applicable) in preparation for execution. When a business user or an automated system adds or updates rule-related information in Red Hat Decision Manager, that information is inserted into the working memory of the decision engine in the form of one or more facts. The decision engine matches those facts to the conditions of the rules that are stored in the production memory to determine eligible rule executions. (This process of matching facts to rules is often referred to as pattern matching .) When rule conditions are met, the decision engine activates and registers rules in the agenda, where the decision engine then sorts prioritized or conflicting rules in preparation for execution. The following diagram illustrates these basic components of the decision engine: Figure 80.1. Overview of basic decision engine components For more details and examples of rule and fact behavior in the decision engine, see Chapter 82, Inference and truth maintenance in the decision engine . These core concepts can help you to better understand other more advanced components, processes, and sub-processes of the decision engine, and as a result, to design more effective business assets in Red Hat Decision Manager. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/decision-engine-con_decision-engine |
Chapter 29. Keyboard Configuration | Chapter 29. Keyboard Configuration The installation program allows users to configure a keyboard layout for their systems. To configure a different keyboard layout after installation, use the Keyboard Configuration Tool . To start the Keyboard Configuration Tool , select Applications (the main menu on the panel) => System Settings => Keyboard , or type the command system-config-keyboard at a shell prompt. Figure 29.1. Keyboard Configuration Select a keyboard layout from the list (for example, U.S. English ) and click OK . For changes to take effect, you should log out of your graphical desktop session and log back in. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/ch-keyboardconfig |
2.3. Running SystemTap Scripts | 2.3. Running SystemTap Scripts SystemTap scripts are run through the command stap . stap can run SystemTap scripts from standard input or from file. Running stap and staprun requires elevated privileges to the system. However, not all users can be granted root access just to run SystemTap. In some cases, for instance, a non-privileged user may need to run SystemTap instrumentation on their machine. To allow ordinary users to run SystemTap without root access, add them to both of these user groups: stapdev Members of this group can use stap to run SystemTap scripts, or staprun to run SystemTap instrumentation modules. Running stap involves compiling SystemTap scripts into kernel modules and loading them into the kernel. This requires elevated privileges to the system, which are granted to stapdev members. Unfortunately, such privileges also grant effective root access to stapdev members. As such, only grant stapdev group membership to users who can be trusted with root access. stapusr Members of this group can only use staprun to run SystemTap instrumentation modules. In addition, they can only run those modules from /lib/modules/ kernel_version /systemtap/ . Note that this directory must be owned only by the root user, and must only be writable by the root user. Note In order to run SystemTap scripts a user must be in both the stapdev and stapusr groups. Below is a list of commonly used stap options: -v Makes the output of the SystemTap session more verbose. This option (for example, stap -vvv script.stp ) can be repeated to provide more details on the script's execution. It is particularly useful if errors are encountered when running the script. This option is particularly useful if you encounter any errors in running the script. For more information about common SystemTap script errors, refer to Chapter 5, Understanding SystemTap Errors . -o filename Sends the standard output to file ( filename ). -S size , count Limit files to size megabytes and limit the number of files kept around to count . The file names will have a sequence number suffix. This option implements logrotate operations for SystemTap. When used with -o , the -S will limit the size of log files. -x process ID Sets the SystemTap handler function target() to the specified process ID. For more information about target() , refer to SystemTap Functions . -c command Sets the SystemTap handler function target() to the specified command. The full path to the specified command must be used; for example, instead of specifying cp , use /bin/cp (as in stap script -c /bin/cp ). For more information about target() , refer to SystemTap Functions . -e ' script ' Use script string rather than a file as input for systemtap translator. -F Use SystemTap's Flight recorder mode and make the script a background process. For more information about flight recorder mode, refer to Section 2.3.1, "SystemTap Flight Recorder Mode" . stap can also be instructed to run scripts from standard input using the switch - . To illustrate: Example 2.1. Running Scripts From Standard Input Example 2.1, "Running Scripts From Standard Input" instructs stap to run the script passed by echo to standard input. Any stap options to be used should be inserted before the - switch; for instance, to make the example in Example 2.1, "Running Scripts From Standard Input" more verbose, the command would be: echo "probe timer.s(1) {exit()}" | stap -v - For more information about stap , refer to man stap . 
To run SystemTap instrumentation (that is, the kernel module built from SystemTap scripts during a cross-instrumentation), use staprun instead. For more information about staprun and cross-instrumentation, refer to Section 2.2, "Generating Instrumentation for Other Computers" . Note The stap options -v and -o also work for staprun . For more information about staprun , refer to man staprun . 2.3.1. SystemTap Flight Recorder Mode SystemTap's flight recorder mode allows a SystemTap script to be run for long periods while focusing only on recent output. The flight recorder mode (the -F option) limits the amount of output generated. There are two variations of the flight recorder mode: in-memory and file mode. In both cases the SystemTap script runs as a background process. 2.3.1.1. In-memory Flight Recorder When flight recorder mode (the -F option) is used without a file name, SystemTap uses a buffer in kernel memory to store the output of the script. Once the SystemTap instrumentation module loads and the probes start running, the instrumentation detaches and is put in the background. When the interesting event occurs, the instrumentation can be reattached to view the recent output in the memory buffer as well as any continuing output. The following command starts a script using the flight recorder in-memory mode: Once the script starts, a message that provides the command to reconnect to the running script will appear: When the interesting event occurs, reattach to the currently running script and output the recent data in the memory buffer, then get the continuing output with the following command: By default, the kernel buffer is 1MB in size, but it can be increased with the -s option specifying the size in megabytes (rounded up to the next power of 2) for the buffer. For example, -s2 on the SystemTap command line would specify 2MB for the buffer. 2.3.1.2. File Flight Recorder The flight recorder mode can also store data to files. The number and size of the files kept are controlled by the -S option followed by two numerical arguments separated by a comma. The first argument is the maximum size in megabytes for each output file. The second argument is the number of recent files to keep. The file name is specified by the -o option followed by the name. SystemTap adds a number suffix to the file name to indicate the order of the files. The following command starts SystemTap in file flight recorder mode with the output going to files named /tmp/pfaults.log. [0-9]+ , with each file 1MB or smaller and keeping the latest two files: The number printed by the command is the process ID. Sending a SIGTERM to the process shuts down the SystemTap script and stops the data collection. For example, if the command listed 7590 as the process ID, the following command would shut down the SystemTap script: Only the two most recent files generated by the script are kept and the older files are removed. Thus, ls -sh /tmp/pfaults.log.* shows only the two files: You can look at the highest-numbered file for the latest data, in this case /tmp/pfaults.log.6 . | [
"echo \"probe timer.s(1) {exit()}\" | stap -",
"stap -F /usr/share/doc/systemtap- version /examples/io/iotime.stp",
"Disconnecting from systemtap module. To reconnect, type \"staprun -A stap_5dd0073edcb1f13f7565d8c343063e68_19556\"",
"staprun -A stap_5dd0073edcb1f13f7565d8c343063e68_19556",
"stap -F -o /tmp/pfaults.log -S 1,2 pfaults.stp",
"kill -s SIGTERM 7590",
"1020K /tmp/pfaults.log.5 44K /tmp/pfaults.log.6"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/using-usage |
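A hedged end-to-end example of the file flight recorder workflow follows; the probe, log path, and process ID are illustrative, and any SystemTap script could be substituted.

# Start a trivial one-second timer probe in file flight recorder mode,
# keeping at most two 1MB output files; stap prints the PID of the background process.
stap -F -o /tmp/ticks.log -S 1,2 -e 'probe timer.s(1) { printf("tick %d\n", gettimeofday_s()) }'

# Stop the script (replace <pid> with the PID that stap printed) and inspect the files kept.
kill -s SIGTERM <pid>
ls -sh /tmp/ticks.log.*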
Chapter 1. About Red Hat AMQ 7 | Chapter 1. About Red Hat AMQ 7 Red Hat AMQ provides fast, lightweight, and secure messaging for Internet-scale applications. AMQ Broker supports multiple protocols and fast message persistence. AMQ Interconnect leverages the AMQP protocol to distribute and scale your messaging resources across the network. AMQ Clients provides a suite of messaging APIs for multiple languages and platforms. Think of the AMQ components as tools inside a toolbox. They can be used together or separately to build and maintain your messaging application, and AMQP is the glue in the toolbox that binds them together. AMQ components share a common management console, so you can manage them from a single interface. Note Red Hat AMQ 7 includes AMQ Streams. It is based on Apache Kafka, and does not support AMQP. 1.1. Key features AMQ enables developers to build messaging applications that are fast, reliable, and easy to administer. Messaging at internet scale AMQ contains the tools to build advanced, multi-datacenter messaging networks. It can connect clients, brokers, and stand-alone services in a seamless messaging fabric. Top-tier security and performance AMQ offers modern SSL/TLS encryption and extensible SASL authentication. AMQ delivers fast, high-volume messaging and class-leading JMS performance. Broad platform and language support AMQ works with multiple languages and operating systems, so your diverse application components can communicate. AMQ supports C++, Java, JavaScript, Python, Ruby, and .NET applications, as well as Linux, Windows, and JVM-based environments. Focused on standards AMQ implements the Java JMS 1.1 and 2.0 API specifications. Its components support the ISO-standard AMQP 1.0 and MQTT messaging protocols, as well as STOMP and WebSocket. Centralized management With AMQ, you can administer all AMQ components from a single management interface. You can use JMX or the REST interface to manage servers programmatically. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/introducing_red_hat_amq_7/about |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.402/making-open-source-more-inclusive |
3.2. Starting the Virtual Machine Using the Run Once Option | 3.2. Starting the Virtual Machine Using the Run Once Option 3.2.1. Installing Windows on VirtIO-Optimized Hardware Install VirtIO-optimized disk and network device drivers during your Windows installation by attaching the virtio-win.vfd diskette to your virtual machine. These drivers provide a performance improvement over emulated device drivers. Use the Run Once option to attach the diskette in a one-off boot different from the Boot Options defined in the New Virtual Machine window. This procedure presumes that you added a Red Hat VirtIO network interface and a disk that uses the VirtIO interface to your virtual machine. Note The virtio-win.vfd diskette is placed automatically on ISO storage domains that are hosted on the Manager. You can upload it manually to a data storage domain. See Uploading Images to a Data Storage Domain in the Administration Guide for details. Installing VirtIO Drivers during Windows Installation Click Compute Virtual Machines and select a virtual machine. Click Run Run Once . Expand the Boot Options menu. Select the Attach Floppy check box, and select virtio-win.vfd from the drop-down list. Select the Attach CD check box, and select the required Windows ISO from the drop-down list. Move CD-ROM to the top of the Boot Sequence field. Configure the rest of your Run Once options as required. See Section A.2, "Explanation of Settings in the Run Once Window" for more details. Click OK . The Status of the virtual machine changes to Up , and the operating system installation begins. Open a console to the virtual machine if one does not open automatically. Windows installations include an option to load additional drivers early in the installation process. Use this option to load drivers from the virtio-win.vfd diskette that was attached to your virtual machine as A: . For each supported virtual machine architecture and Windows version, there is a folder on the disk containing optimized hardware device drivers. 3.2.2. Opening a Console to a Virtual Machine Use Remote Viewer to connect to a virtual machine. Connecting to Virtual Machines Install Remote Viewer if it is not already installed. See Section 1.4.1, "Installing Console Components" . Click Compute Virtual Machines and select a virtual machine. Click Console . If the connection protocol is set to SPICE, a console window will automatically open for the virtual machine. If the connection protocol is set to VNC, a console.vv file will be downloaded. Click on the file and a console window will automatically open for the virtual machine. Note You can configure the system to automatically connect to a virtual machine. See Section 2.2.4, "Automatically Connecting to a Virtual Machine" . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-starting_the_virtual_machine_using_the_run_once_option |
Chapter 3. Installing the high availability and RHOSP packages and agents | Chapter 3. Installing the high availability and RHOSP packages and agents Install the packages required for configuring a Red Hat High Availability cluster on Red Hat OpenStack Platform (RHOSP). You must install the packages on each of the nodes you will use as cluster members. Prerequisites A server group for the RHOSP instances to use as HA cluster nodes, configured as described in RHOSP server group configuration for HA instances An RHOSP instance for each HA cluster node The instances are members of a server group The instances are configured as nodes running RHEL 8.7 or later Procedure Enable the RHEL HA repositories and the RHOSP tools channel. Install the Red Hat High Availability Add-On software packages and the packages that are required for the RHOSP cluster resource agents and the RHOSP fence agents. Installing the pcs and pacemaker packages on each node creates the user hacluster , which is the pcs administration account. Create a password for user hacluster on all cluster nodes. Using the same password for all nodes simplifies cluster administration. If firewalld.service is installed, add the high-availability service to the RHEL firewall. Start the pcs service and enable it to start on boot. Verify that the pcs service is running. Edit the /etc/hosts file and add RHEL host names and internal IP addresses. For information about /etc/hosts , see the Red Hat Knowledgebase solution How should the /etc/hosts file be set up on RHEL cluster nodes? . Additional resources For further information about configuring and managing Red Hat high availability clusters, see Configuring and managing high availability clusters . | [
"subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms subscription-manager repos --enable=openstack-16-tools-for-rhel-8-x86_64-rpms",
"yum install pcs pacemaker python3-openstackclient python3-novaclient fence-agents-openstack",
"passwd hacluster",
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability",
"systemctl start pcsd.service systemctl enable pcsd.service",
"systemctl status pcsd.service pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-03-01 14:53:28 UTC; 28min ago Docs: man:pcsd(8) man:pcs(8) Main PID: 5437 (pcsd) CGroup: /system.slice/pcsd.service └─5437 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null & Mar 01 14:53:27 ip-10-0-0-48.ec2.internal systemd[1]: Starting PCS GUI and remote configuration interface... Mar 01 14:53:28 ip-10-0-0-48.ec2.internal systemd[1]: Started PCS GUI and remote configuration interface."
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/proc_installing-the-high-availability-and-rhosp-packages-and-agents_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform |
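The commands in this procedure must be repeated on every cluster node. The loop below is a convenience sketch, not part of the Red Hat procedure: the node names and the hacluster password are placeholders, and the firewalld check mirrors the optional step above, so adjust all of them for your environment.

#!/bin/bash
# Repeat the package installation and pcsd setup on every HA cluster node.
set -e
NODES="node01 node02 node03"          # placeholder node names
HACLUSTER_PASSWORD='ChangeMe123'      # use the same password on every node

for node in ${NODES}; do
    echo "== configuring ${node} =="
    ssh "root@${node}" bash -s <<EOF
set -e
subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
subscription-manager repos --enable=openstack-16-tools-for-rhel-8-x86_64-rpms
yum install -y pcs pacemaker python3-openstackclient python3-novaclient fence-agents-openstack
echo '${HACLUSTER_PASSWORD}' | passwd --stdin hacluster
if rpm -q firewalld >/dev/null 2>&1; then
    firewall-cmd --permanent --add-service=high-availability
    firewall-cmd --add-service=high-availability
fi
systemctl enable --now pcsd.service
systemctl is-active pcsd.service
EOF
done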
Chapter 1. Monitoring high availability services | Chapter 1. Monitoring high availability services Red Hat OpenStack Services on OpenShift (RHOSO) high availability (HA) uses Red Hat OpenShift Container Platform (RHOCP) operations to orchestrate failover and recovery deployment. When you plan your deployment, ensure that you review the considerations for different aspects of the environment, such as hardware assignments and network configuration. The following shared control plane services are required to implement HA: Galera RabbitMQ memcached These services run as pods, and they are managed by operators and monitored by RHOCP. You can use the RHOCP client command line interface ( oc ) to interact with the platform and retrieve information about the status of the RHOSO control plane services. For example, you can use the oc to complete the following actions: Monitor the startup and the state and availability of the RHOSO HA services. Investigate the pods of the RHOSO HA services. Investigate the operators of the RHOSO HA services. Describe the statefulset of the RHOSO HA services. Investigate the RHOSO HA services. Test the resilience of the RHOSO HA services. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/monitoring_high_availability_services/assembly_monitoring-high-availability-services |
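The chapter describes these checks in prose only. The oc invocations below are an illustrative sketch rather than commands from the guide: the namespace name and the label selectors are assumptions, so substitute the names used by your RHOSO control plane.

#!/bin/bash
# Quick health pass over the shared HA services of an RHOSO control plane.
NAMESPACE=openstack                     # assumed control plane namespace

# Pods backing the shared HA services (label values are assumptions).
oc -n "${NAMESPACE}" get pods -l app=galera -o wide
oc -n "${NAMESPACE}" get pods -l app=rabbitmq -o wide
oc -n "${NAMESPACE}" get pods -l app=memcached -o wide

# StatefulSets and the most recent events for troubleshooting context.
oc -n "${NAMESPACE}" get statefulsets
oc -n "${NAMESPACE}" describe statefulset -l app=galera
oc -n "${NAMESPACE}" get events --sort-by=.lastTimestamp | tail -n 20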
Chapter 9. Deploying on OpenStack with rootVolume and etcd on local disk | Chapter 9. Deploying on OpenStack with rootVolume and etcd on local disk As a day 2 operation, you can resolve and prevent performance issues of your Red Hat OpenStack Platform (RHOSP) installation by moving etcd from a root volume (provided by OpenStack Cinder) to a dedicated ephemeral local disk. 9.1. Deploying RHOSP on local disk If you have an existing RHOSP cloud, you can move etcd from that cloud to a dedicated ephemeral local disk. Prerequisites You have an OpenStack cloud with a working Cinder. Your OpenStack cloud has at least 75 GB of available storage to accommodate 3 root volumes for the OpenShift control plane. The OpenStack cloud is deployed with Nova ephemeral storage that uses a local storage backend and not rbd . Procedure Create a Nova flavor for the control plane with at least 10 GB of ephemeral disk by running the following command, replacing the values for --ram , --disk , and <flavor_name> based on your environment: USD openstack flavor create --<ram 16384> --<disk 0> --ephemeral 10 --vcpus 4 <flavor_name> Deploy a cluster with root volumes for the control plane; for example: Example YAML file # ... controlPlane: name: master platform: openstack: type: USD{CONTROL_PLANE_FLAVOR} rootVolume: size: 25 types: - USD{CINDER_TYPE} replicas: 3 # ... Deploy the cluster you created by running the following command: USD openshift-install create cluster --dir <installation_directory> 1 1 For <installation_directory> , specify the location of the customized ./install-config.yaml file that you previously created. Verify that the cluster you deployed is healthy before proceeding to the next step by running the following command: USD oc wait clusteroperators --all --for=condition=Progressing=false 1 1 Ensures that the cluster operators are finished progressing and that the cluster is not deploying or updating.
Create a file named 98-var-lib-etcd.yaml by using the following YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Mount local-etcd to /var/lib/etcd [Mount] What=/dev/disk/by-label/local-etcd 1 Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Create local-etcd filesystem DefaultDependencies=no After=local-fs-pre.target ConditionPathIsSymbolicLink=!/dev/disk/by-label/local-etcd 2 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c "[ -L /dev/disk/by-label/ephemeral0 ] || ( >&2 echo Ephemeral disk does not exist; /usr/bin/false )" ExecStart=/usr/sbin/mkfs.xfs -f -L local-etcd /dev/disk/by-label/ephemeral0 3 [Install] RequiredBy=dev-disk-by\x2dlabel-local\x2detcd.device enabled: true name: create-local-etcd.service - contents: | [Unit] Description=Migrate existing data to local etcd After=var-lib-etcd.mount Before=crio.service 4 Requisite=var-lib-etcd.mount ConditionPathExists=!/var/lib/etcd/member ConditionPathIsDirectory=/sysroot/ostree/deploy/rhcos/var/lib/etcd/member 5 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c "if [ -d /var/lib/etcd/member.migrate ]; then rm -rf /var/lib/etcd/member.migrate; fi" 6 ExecStart=/usr/bin/cp -aZ /sysroot/ostree/deploy/rhcos/var/lib/etcd/member/ /var/lib/etcd/member.migrate ExecStart=/usr/bin/mv /var/lib/etcd/member.migrate /var/lib/etcd/member 7 [Install] RequiredBy=var-lib-etcd.mount enabled: true name: migrate-to-local-etcd.service - contents: | [Unit] Description=Relabel /var/lib/etcd After=migrate-to-local-etcd.service Before=crio.service Requisite=var-lib-etcd.mount [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/bin/bash -c "[ -n \"USD(restorecon -nv /var/lib/etcd)\" ]" 8 ExecStart=/usr/sbin/restorecon -R /var/lib/etcd [Install] RequiredBy=var-lib-etcd.mount enabled: true name: relabel-var-lib-etcd.service 1 The etcd database must be mounted by the device, not a label, to ensure that systemd generates the device dependency used in this config to trigger filesystem creation. 2 Do not run if the file system dev/disk/by-label/local-etcd already exists. 3 Fails with an alert message if /dev/disk/by-label/ephemeral0 does not exist. 4 Migrates existing data to local etcd database. This config does so after /var/lib/etcd is mounted, but before CRI-O starts so etcd is not running yet. 5 Requires that etcd is mounted and does not contain a member directory, but the ostree does. 6 Cleans up any migration state. 7 Copies and moves in separate steps to ensure atomic creation of a complete member directory. 8 Performs a quick check of the mount point directory before performing a full recursive relabel. If restorecon in the file path /var/lib/etcd cannot rename the directory, the recursive rename is not performed. Warning After you apply the 98-var-lib-etcd.yaml file to the system, do not remove it. Removing this file will break etcd members and lead to system instability. If a rollback is necessary, modify the ControlPlaneMachineSet object to use a flavor that does not include ephemeral disks. This change regenerates the control plane nodes without using ephemeral disks for the etcd partition, which avoids issues related to the 98-var-lib-etcd.yaml file. 
It is safe to remove the 98-var-lib-etcd.yaml file only after the update to the ControlPlaneMachineSet object is complete and no control plane nodes are using ephemeral disks. Create the new MachineConfig object by running the following command: USD oc create -f 98-var-lib-etcd.yaml Note Moving the etcd database onto the local disk of each control plane machine takes time. Verify that the etcd databases has been transferred to the local disk of each control plane by running the following commands: Verify that the cluster is still updating by running the following command: USD oc wait --timeout=45m --for=condition=Updating=false machineconfigpool/master Verify that the cluster is ready by running the following command: USD oc wait node --selector='node-role.kubernetes.io/master' --for condition=Ready --timeout=30s Verify that the cluster Operators are running in the cluster by running the following command: USD oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false 9.2. Additional resources Recommended etcd practices Overview of backup and restore options | [
"openstack flavor create --<ram 16384> --<disk 0> --ephemeral 10 --vcpus 4 <flavor_name>",
"controlPlane: name: master platform: openstack: type: USD{CONTROL_PLANE_FLAVOR} rootVolume: size: 25 types: - USD{CINDER_TYPE} replicas: 3",
"openshift-install create cluster --dir <installation_directory> 1",
"oc wait clusteroperators --all --for=condition=Progressing=false 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Mount local-etcd to /var/lib/etcd [Mount] What=/dev/disk/by-label/local-etcd 1 Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Create local-etcd filesystem DefaultDependencies=no After=local-fs-pre.target ConditionPathIsSymbolicLink=!/dev/disk/by-label/local-etcd 2 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"[ -L /dev/disk/by-label/ephemeral0 ] || ( >&2 echo Ephemeral disk does not exist; /usr/bin/false )\" ExecStart=/usr/sbin/mkfs.xfs -f -L local-etcd /dev/disk/by-label/ephemeral0 3 [Install] RequiredBy=dev-disk-by\\x2dlabel-local\\x2detcd.device enabled: true name: create-local-etcd.service - contents: | [Unit] Description=Migrate existing data to local etcd After=var-lib-etcd.mount Before=crio.service 4 Requisite=var-lib-etcd.mount ConditionPathExists=!/var/lib/etcd/member ConditionPathIsDirectory=/sysroot/ostree/deploy/rhcos/var/lib/etcd/member 5 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"if [ -d /var/lib/etcd/member.migrate ]; then rm -rf /var/lib/etcd/member.migrate; fi\" 6 ExecStart=/usr/bin/cp -aZ /sysroot/ostree/deploy/rhcos/var/lib/etcd/member/ /var/lib/etcd/member.migrate ExecStart=/usr/bin/mv /var/lib/etcd/member.migrate /var/lib/etcd/member 7 [Install] RequiredBy=var-lib-etcd.mount enabled: true name: migrate-to-local-etcd.service - contents: | [Unit] Description=Relabel /var/lib/etcd After=migrate-to-local-etcd.service Before=crio.service Requisite=var-lib-etcd.mount [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/bin/bash -c \"[ -n \\\"USD(restorecon -nv /var/lib/etcd)\\\" ]\" 8 ExecStart=/usr/sbin/restorecon -R /var/lib/etcd [Install] RequiredBy=var-lib-etcd.mount enabled: true name: relabel-var-lib-etcd.service",
"oc create -f 98-var-lib-etcd.yaml",
"oc wait --timeout=45m --for=condition=Updating=false machineconfigpool/master",
"oc wait node --selector='node-role.kubernetes.io/master' --for condition=Ready --timeout=30s",
"oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_openstack/deploying-openstack-on-local-disk |
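Beyond the cluster-level waits shown above, it can help to confirm on each control plane node that /var/lib/etcd is now served by the local-etcd filesystem created by the machine config. The loop below is a hypothetical verification sketch rather than a documented step; it assumes oc debug access to the nodes.

#!/bin/bash
# Show which device and filesystem back /var/lib/etcd on every control plane node.
set -e
for node in $(oc get nodes -l node-role.kubernetes.io/master -o name); do
    echo "== ${node} =="
    # findmnt prints the backing source; expect the ephemeral disk labelled local-etcd.
    oc debug "${node}" -- chroot /host findmnt -n -o SOURCE,FSTYPE,LABEL /var/lib/etcd
done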
Chapter 5. Connecting to AMQ Management Console for an Operator-based broker deployment | Chapter 5. Connecting to AMQ Management Console for an Operator-based broker deployment Each broker Pod in an Operator-based deployment hosts its own instance of AMQ Management Console at port 8161. The following procedures describe how to connect to AMQ Management Console for a deployed broker. Prerequisites You created a broker deployment using the AMQ Broker Operator. For example, to learn how to use a sample CR to create a basic broker deployment, see Section 3.4.1, "Deploying a basic broker instance" . You enabled access to AMQ Management Console for the brokers in your deployment. For more information about enabling access to AMQ Management Console, see Section 4.8, "Enabling access to AMQ Management Console" . 5.1. Connecting to AMQ Management Console When you enable access to AMQ Management Console in the Custom Resource (CR) instance for your broker deployment, the Operator automatically creates a dedicated Service and Route for each broker Pod to provide access to AMQ Management Console. The default name of the automatically-created Service is in the form <custom-resource-name> -wconsj- <broker-pod-ordinal> -svc . For example, my-broker-deployment-wconsj-0-svc . The default name of the automatically-created Route is in the form <custom-resource-name> -wconsj- <broker-pod-ordinal> -svc-rte . For example, my-broker-deployment-wconsj-0-svc-rte . This procedure shows you how to access the console for a running broker Pod. Procedure In the OpenShift Container Platform web console, click Networking Routes . On the Routes page, identify the wconsj Route for the given broker Pod. For example, my-broker-deployment-wconsj-0-svc-rte . Under Location , click the link that corresponds to the Route. A new tab opens in your web browser. Click the Management Console link. The AMQ Management Console login page opens. Note Credentials are required to log in to AMQ Management Console only if the requireLogin property of the CR is set to true . This property specifies whether login credentials are required to log in to the broker and AMQ Management Console. By default, the requireLogin property is set to false . If requireLogin is set to false , you can log in to AMQ Management Console without supplying a valid username and password by entering any text when prompted for a username and password. If the requireLogin property is set to true , enter a username and password. You can enter the credentials for a preconfigured user that is available for connecting to the broker and AMQ Management Console. You can find these credentials in the adminUser and adminPassword properties if these properties are configured in the Custom Resource (CR) instance. It these properties are not configured in the CR, the Operator automatically generates the credentials. To obtain the automatically generated credentials, see Section 5.2, "Accessing AMQ Management Console login credentials" . If you want to log in as any other user, note that a user must belong to a security role specified for the hawtio.role system property to have the permissions required to log in to AMQ Management Console. The default role for the hawtio.role system property is admin , which the preconfigured user belongs to. 5.2. 
Accessing AMQ Management Console login credentials If you do not specify a value for adminUser and adminPassword in the Custom Resource (CR) instance used for your broker deployment, the Operator automatically generates these credentials and stores them in a secret. The default secret name is in the form <custom-resource-name> -credentials-secret , for example, my-broker-deployment-credentials-secret . Note Values for adminUser and adminPassword are required to log in to the management console only if the requireLogin parameter of the CR is set to true . If requireLogin is set to false , you can log in to the console without supplying a valid username and password by entering any text when prompted for a username and password. This procedure shows how to access the login credentials. Procedure See the complete list of secrets in your OpenShift project. From the OpenShift Container Platform web console, click Workload Secrets . From the command line: Open the appropriate secret to reveal the Base64-encoded console login credentials. From the OpenShift Container Platform web console, click the secret that includes your broker Custom Resource instance in its name. Click the YAML tab. From the command line: To decode a value in the secret, use a command such as the following: Additional resources To learn more about using AMQ Management Console to view and manage brokers, see Managing brokers using AMQ Management Console in Managing AMQ Broker . | [
"oc get secrets",
"oc edit secret <my-broker-deployment-credentials-secret>",
"echo 'dXNlcl9uYW1l' | base64 --decode console_admin"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/deploying_amq_broker_on_openshift/assembly-br-connecting-to-console-operator_broker-ocp |
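As an alternative to opening the secret's YAML and decoding one field at a time, the following sketch pulls both values at once. It is not from the guide: the secret name follows the <custom-resource-name>-credentials-secret pattern described above, and the data key names (AMQ_USER and AMQ_PASSWORD) are an assumption, so check the secret's YAML if your keys differ.

#!/bin/bash
# Print the auto-generated AMQ Management Console credentials in clear text.
SECRET=my-broker-deployment-credentials-secret   # adjust to your deployment name

printf 'username: '
oc get secret "${SECRET}" -o jsonpath='{.data.AMQ_USER}' | base64 --decode
printf '\npassword: '
oc get secret "${SECRET}" -o jsonpath='{.data.AMQ_PASSWORD}' | base64 --decode
printf '\n'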
Chapter 11. BlueStore | Chapter 11. BlueStore BlueStore is the back-end object store for the OSD daemons and puts objects directly on the block device. Important BlueStore provides a high-performance backend for OSD daemons in a production environment. By default, BlueStore is configured to be self-tuning. If you determine that your environment performs better with BlueStore tuned manually, please contact Red Hat support and share the details of your configuration to help us improve the auto-tuning capability. Red Hat looks forward to your feedback and appreciates your recommendations. 11.1. Ceph BlueStore The following are some of the main features of using BlueStore: Direct management of storage devices BlueStore consumes raw block devices or partitions. This avoids any intervening layers of abstraction, such as local file systems like XFS, that might limit performance or add complexity. Metadata management with RocksDB BlueStore uses the RocksDB key-value database to manage internal metadata, such as the mapping from object names to block locations on a disk. Full data and metadata checksumming By default all data and metadata written to BlueStore is protected by one or more checksums. No data or metadata are read from disk or returned to the user without verification. Inline compression Data can be optionally compressed before being written to a disk. Efficient copy-on-write The Ceph Block Device and Ceph File System snapshots rely on a copy-on-write clone mechanism that is implemented efficiently in BlueStore. This results in efficient I/O both for regular snapshots and for erasure coded pools which rely on cloning to implement efficient two-phase commits. No large double-writes BlueStore first writes any new data to unallocated space on a block device, and then commits a RocksDB transaction that updates the object metadata to reference the new region of the disk. Only when the write operation is below a configurable size threshold does BlueStore fall back to a write-ahead journaling scheme. Multi-device support BlueStore can use multiple block devices for storing different data. For example: Hard Disk Drive (HDD) for the data, Solid-state Drive (SSD) for metadata, Non-volatile Memory (NVM) or Non-volatile random-access memory (NVRAM) or persistent memory for the RocksDB write-ahead log (WAL). See Ceph BlueStore devices for details. Efficient block device usage Because BlueStore does not use any file system, it minimizes the need to clear the storage device cache. Allocation metadata Allocation metadata no longer uses standalone objects in RocksDB, because the allocation information can be deduced from the aggregate allocation state of all onodes in the system, which are already stored in RocksDB. BlueStore V3 code skips the RocksDB updates at allocation time and performs a full destage of the allocator object with all the OSD allocation state in a single step during umount . This results in a 25% increase in IOPS and reduced latency in small random-write workloads; however, it prolongs the recovery time, usually by a few extra minutes, in failure cases where umount is not called, since you need to iterate over all onodes to recreate the allocation metadata. Cache age binning Red Hat Ceph Storage associates items in the different caches with "age bins", which gives a view of the relative ages of all the cache items. 11.2. Ceph BlueStore devices BlueStore manages either one, two, or three storage devices in the backend.
These devices are the primary device, the WAL device, and the DB device. In the simplest case, BlueStore consumes a single primary storage device. The storage device is normally used as a whole, occupying the full device that is managed by BlueStore directly. The primary device is identified by a block symlink in the data directory. The data directory is a tmpfs mount which gets populated with all the common OSD files that hold information about the OSD, like the identifier, which cluster it belongs to, and its private keyring. The storage device is partitioned into two parts that contain: OSD metadata : A small partition formatted with XFS that contains basic metadata for the OSD. This data directory includes information about the OSD, such as its identifier, which cluster it belongs to, and its private keyring. Data : A large partition occupying the rest of the device that is managed directly by BlueStore and that contains all of the OSD data. This primary device is identified by a block symbolic link in the data directory. You can also use two additional devices: A WAL (write-ahead-log) device : A device that stores BlueStore internal journal or write-ahead log. It is identified by the block.wal symbolic link in the data directory. Consider using a WAL device only if the device is faster than the primary device. For example, when the WAL device uses an SSD disk and the primary device uses an HDD disk. A DB device : A device that stores BlueStore internal metadata. The embedded RocksDB database puts as much metadata as it can on the DB device instead of on the primary device to improve performance. If the DB device is full, it starts adding metadata to the primary device. Consider using a DB device only if the device is faster than the primary device. Warning If you have less than a gigabyte of storage available on fast devices, Red Hat recommends using it as a WAL device. If more fast storage is available, consider using it as a DB device. The BlueStore journal is always placed on the fastest device, so using a DB device provides the same benefit as the WAL device while also allowing additional metadata to be stored. 11.3. Ceph BlueStore caching The BlueStore cache is a collection of buffers that, depending on configuration, can be populated with data as the OSD daemon reads from or writes to the disk. By default in Red Hat Ceph Storage, BlueStore will cache on reads, but not writes. This is because the bluestore_default_buffered_write option is set to false to avoid potential overhead associated with cache eviction. If the bluestore_default_buffered_write option is set to true , data is written to the buffer first, and then committed to disk. Afterwards, a write acknowledgement is sent to the client, allowing subsequent reads faster access to the data already in cache, until that data is evicted. Read-heavy workloads will not see an immediate benefit from BlueStore caching. As more reading is done, the cache will grow over time and subsequent reads will see an improvement in performance. How fast the cache populates depends on the BlueStore block and database disk type, and the client's workload requirements. Important Please contact Red Hat support before enabling the bluestore_default_buffered_write option. Cache age binning Red Hat Ceph Storage associates items in the different caches with "age bins", which gives a view of the relative ages of all the cache items. For example, consider a case where old onode entries are sitting in the BlueStore onode cache while a hot read workload runs against a single large object.
The priority cache for that OSD sorts the older onode entries into a lower priority level than the buffer cache data for the hot object. Although Ceph might, in general, heavily favor onodes at a given priority level, in this hot workload scenario, older onodes might be assigned a lower priority level than the hot workload data, so that the buffer data memory request is fulfilled first. 11.4. Sizing considerations for Ceph BlueStore When mixing traditional and solid state drives using BlueStore OSDs, it is important to size the RocksDB logical volume ( block.db ) appropriately. Red Hat recommends that the RocksDB logical volume be no less than 4% of the block size with object, file and mixed workloads. Red Hat supports 1% of the BlueStore block size with RocksDB and OpenStack block workloads. For example, if the block size is 1 TB for an object workload, then at a minimum, create a 40 GB RocksDB logical volume. When not mixing drive types, there is no requirement to have a separate RocksDB logical volume. BlueStore will automatically manage the sizing of RocksDB. BlueStore's cache memory is used for the key-value pair metadata for RocksDB, BlueStore metadata, and object data. Note The BlueStore cache memory values are in addition to the memory footprint already being consumed by the OSD. 11.5. Tuning Ceph BlueStore using bluestore_min_alloc_size parameter This procedure is for new or freshly deployed OSDs. In BlueStore, the raw partition is allocated and managed in chunks of bluestore_min_alloc_size . By default, bluestore_min_alloc_size is 4096 , equivalent to 4 KiB for HDDs and SSDs. The unwritten area in each chunk is filled with zeroes when it is written to the raw partition. This can lead to wasted unused space when not properly sized for your workload, for example when writing small objects. It is best practice to set bluestore_min_alloc_size to match the smallest write so this write amplification penalty can be avoided. Important Changing the value of bluestore_min_alloc_size is not recommended. For any assistance, contact Red Hat support . Note The settings bluestore_min_alloc_size_ssd and bluestore_min_alloc_size_hdd are specific to SSDs and HDDs, respectively, but setting them is not necessary because setting bluestore_min_alloc_size overrides them. Prerequisites A running Red Hat Ceph Storage cluster. Ceph monitors and managers are deployed in the cluster. Servers or nodes that can be freshly provisioned as OSD nodes The admin keyring for the Ceph Monitor node, if you are redeploying an existing Ceph OSD node. Procedure On the bootstrapped node, change the value of bluestore_min_alloc_size parameter: Syntax Example You can see bluestore_min_alloc_size is set to 8192 bytes, which is equivalent to 8 KiB. Note The selected values should be power of 2 aligned. Restart the OSD's service. Syntax Example Verification Verify the setting using the ceph daemon command: Syntax Example Additional Resources For OSD removal and addition, see the Management of OSDs using the Ceph Orchestrator chapter in the Red Hat Ceph Storage Operations Guide and follow the links. For already deployed OSDs, you cannot modify the bluestore_min_alloc_size parameter so you have to remove the OSDs and freshly deploy them again. 11.6. Resharding the RocksDB database using the BlueStore admin tool You can reshard the database with the BlueStore admin tool. It transforms BlueStore's RocksDB database from one shape to another into several column families without redeploying the OSDs. 
Column families have the same features as the whole database, but allow users to operate on smaller data sets and apply different options. They leverage the different expected lifetimes of the stored keys. The keys are moved during the transformation without creating new keys or deleting existing keys. There are two ways to reshard the OSD: Use the rocksdb-resharding.yml playbook. Manually reshard the OSDs. Prerequisites A running Red Hat Ceph Storage cluster. The object store configured as BlueStore. OSD nodes deployed on the hosts. Root-level access to all the hosts. The ceph-common and cephadm packages installed on all the hosts. 11.6.1. Use the rocksdb-resharding.yml playbook As a root user, on the administration node, navigate to the cephadm folder where the playbook is installed: Example Run the playbook: Syntax Example Verify that the resharding is complete. Stop the OSD that is resharded: Example Enter the OSD container: Example Check for resharding: Example Start the OSD: Example 11.6.2. Manually resharding the OSDs Log into the cephadm shell: Example Fetch the OSD_ID and the host details from the administration node: Example Log into the respective host as a root user and stop the OSD: Syntax Example Enter the stopped OSD daemon container: Syntax Example Log into the cephadm shell and check the file system consistency: Syntax Example Check the sharding status of the OSD node: Syntax Example Run the ceph-bluestore-tool command to reshard. Red Hat recommends using the parameters as given in the command: Syntax Example To check the sharding status of the OSD node, run the show-sharding command: Syntax Example Exit from the cephadm shell: Log into the respective host as a root user and start the OSD: Syntax Example Additional Resources See the Red Hat Ceph Storage Installation Guide for more information. 11.7. The BlueStore fragmentation tool As a storage administrator, you will want to periodically check the fragmentation level of your BlueStore OSDs. You can check fragmentation levels with one simple command for offline or online OSDs. 11.7.1. What is the BlueStore fragmentation tool? For BlueStore OSDs, the free space gets fragmented over time on the underlying storage device. Some fragmentation is normal, but excessive fragmentation causes poor performance. The BlueStore fragmentation tool generates a score on the fragmentation level of the BlueStore OSD. This fragmentation score is given as a range, 0 through 1. A score of 0 means no fragmentation, and a score of 1 means severe fragmentation. Table 11.1. Fragmentation scores' meaning Score Fragmentation Amount 0.0 - 0.4 None to tiny fragmentation. 0.4 - 0.7 Small and acceptable fragmentation. 0.7 - 0.9 Considerable, but safe fragmentation. 0.9 - 1.0 Severe fragmentation that causes performance issues. Important If you have severe fragmentation and need help resolving the issue, contact Red Hat Support . 11.7.2. Checking for fragmentation Checking the fragmentation level of BlueStore OSDs can be done either online or offline. Prerequisites A running Red Hat Ceph Storage cluster. BlueStore OSDs. Online BlueStore fragmentation score Inspect a running BlueStore OSD process: For a simple report, run the following command: Syntax Example For a more detailed report, run the following command: Syntax Example Offline BlueStore fragmentation score Reshard to check the offline BlueStore OSD. Syntax Example Inspect the non-running BlueStore OSD process.
For a simple report, run the following command: Syntax Example For a more detailed report, run the following command: Syntax Example Additional Resources See the BlueStore Fragmentation Tool for details on the fragmentation score. See the Resharding the RocksDB database using the BlueStore admin tool for details on resharding. 11.8. Ceph BlueStore BlueFS The BlueStore block database stores metadata as key-value pairs in a RocksDB database. The block database resides on a small BlueFS partition on the storage device. BlueFS is a minimal file system that is designed to hold the RocksDB files. BlueFS files Following are the three types of files that RocksDB produces: Control files, for example CURRENT , IDENTITY , and MANIFEST-000011 . DB table files, for example 004112.sst . Write ahead logs, for example 000038.log . Additionally, there is an internal, hidden file that serves as BlueFS replay log, ino 1 , that works as directory structure, file mapping, and operations log. Fallback hierarchy With BlueFS it is possible to put any file on any device. Parts of a file can even reside on different devices, that is, WAL, DB, and SLOW. There is an order to where BlueFS puts files. A file is put on secondary storage only when primary storage is exhausted, and on tertiary storage only when secondary storage is exhausted. The order for the specific files is: Write ahead logs: WAL, DB, SLOW Replay log ino 1 : DB, SLOW Control and DB files: DB, SLOW Control and DB file order when running out of space: SLOW Important There is an exception to the control and DB file order. When RocksDB detects that you are running out of space on the DB device, it directly notifies you to put the file on the SLOW device. 11.8.1. Viewing the bluefs_buffered_io setting As a storage administrator, you can view the current setting for the bluefs_buffered_io parameter. The option bluefs_buffered_io is set to True by default for Red Hat Ceph Storage. This option enables BlueFS to perform buffered reads in some cases, and enables the kernel page cache to act as a secondary cache for reads like RocksDB block reads. Important Changing the value of bluefs_buffered_io is not recommended. Before changing the bluefs_buffered_io parameter, contact your Red Hat Support account team. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Log into the Cephadm shell: Example You can view the current value of the bluefs_buffered_io parameter in three different ways: Method 1 View the value stored in the configuration database: Example Method 2 View the value stored in the configuration database for a specific OSD: Syntax Example Method 3 View the running value for an OSD where the running value is different from the value stored in the configuration database: Syntax Example 11.8.2. Viewing Ceph BlueFS statistics for Ceph OSDs View the BlueFS-related information about collocated and non-collocated Ceph OSDs with the bluefs stats command. Prerequisites A running Red Hat Ceph Storage cluster. The object store configured as BlueStore. Root-level access to the OSD node. Procedure Log into the Cephadm shell: Example View the BlueStore OSD statistics: Syntax Example for collocated OSDs Example for non-collocated OSDs where: 0 : This refers to the dedicated WAL device, that is, block.wal . 1 : This refers to the dedicated DB device, that is, block.db . 2 : This refers to the main block device, that is, block or slow . device size : It represents the actual size of the device. using : It represents total usage. It is not restricted to BlueFS.
Note DB and WAL devices are used only by BlueFS. For the main device, usage from stored BlueStore data is also included. In the above example, 2.3 MiB is the data from BlueStore. wal_total , db_total , slow_total : These values reiterate the device values above. db_avail : This value represents how many bytes can be taken from the SLOW device if necessary. Usage matrix The rows WAL , DB , SLOW : Describe where a specific file was intended to be put. The row LOG : Describes the BlueFS replay log ino 1 . The columns WAL , DB , SLOW : Describe where data is actually put. The values are in allocation units. WAL and DB have bigger allocation units for performance reasons. The columns * / * : Relate to the virtual devices new-db and new-wal that are used for ceph-bluestore-tool . They should always show 0 B . The column REAL : Shows actual usage in bytes. The column FILES : Shows the count of files. MAXIMUMS : This table captures the maximum value of each entry from the usage matrix. Additional Resources See Ceph BlueStore BlueFS for more information about BlueFS files. See Ceph BlueStore devices for more information about BlueStore devices. | [
"ceph config set osd. OSD_ID bluestore_min_alloc_size_DEVICE_NAME_ VALUE",
"ceph config set osd.4 bluestore_min_alloc_size_hdd 8192",
"systemctl restart SERVICE_ID",
"systemctl restart [email protected]",
"ceph daemon osd. OSD_ID config get bluestore_min_alloc_size__DEVICE_",
"ceph daemon osd.4 config get bluestore_min_alloc_size_hdd ceph daemon osd.4 config get bluestore_min_alloc_size { \"bluestore_min_alloc_size\": \"8192\" }",
"cd /usr/share/cephadm-ansible",
"ansible-playbook -i hosts rocksdb-resharding.yml -e osd_id= OSD_ID -e admin_node= HOST_NAME",
"ansible-playbook -i hosts rocksdb-resharding.yml -e osd_id=7 -e admin_node=host03 ............ TASK [stop the osd] *********************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:18 +0000 (0:00:00.037) 0:00:03.864 **** changed: [localhost -> host03] TASK [set_fact ceph_cmd] ****************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:32 +0000 (0:00:14.128) 0:00:17.992 **** ok: [localhost -> host03] TASK [check fs consistency with fsck before resharding] *********************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:32 +0000 (0:00:00.041) 0:00:18.034 **** ok: [localhost -> host03] TASK [show current sharding] ************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:43 +0000 (0:00:11.053) 0:00:29.088 **** ok: [localhost -> host03] TASK [reshard] **************************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:45 +0000 (0:00:01.446) 0:00:30.534 **** ok: [localhost -> host03] TASK [check fs consistency with fsck after resharding] ************************************************************************************************************************************************************ Wednesday 29 November 2023 11:25:46 +0000 (0:00:01.479) 0:00:32.014 **** ok: [localhost -> host03] TASK [restart the osd] ******************************************************************************************************************************************************************************************** Wednesday 29 November 2023 11:25:57 +0000 (0:00:10.699) 0:00:42.714 **** changed: [localhost -> host03]",
"ceph orch daemon stop osd.7",
"cephadm shell --name osd.7",
"ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-7/ show-sharding m(3) p(3,0-12) O(3,0-13) L P",
"ceph orch daemon start osd.7",
"cephadm shell",
"ceph orch ps",
"cephadm unit --name OSD_ID stop",
"cephadm unit --name osd.0 stop",
"cephadm shell --name OSD_ID",
"cephadm shell --name osd.0",
"ceph-bluestore-tool --path/var/lib/ceph/osd/ceph- OSD_ID / fsck",
"ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0/ fsck fsck success",
"ceph-bluestore-tool --path /var/lib/ceph/osd/ceph- OSD_ID / show-sharding",
"ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-6/ show-sharding m(3) p(3,0-12) O(3,0-13) L P",
"ceph-bluestore-tool --log-level 10 -l log.txt --path /var/lib/ceph/osd/ceph- OSD_ID / --sharding=\"m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P\" reshard",
"ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-6/ --sharding=\"m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P\" reshard reshard success",
"ceph-bluestore-tool --path /var/lib/ceph/osd/ceph- OSD_ID / show-sharding",
"ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-6/ show-sharding m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P",
"exit",
"cephadm unit --name OSD_ID start",
"cephadm unit --name osd.0 start",
"ceph daemon OSD_ID bluestore allocator score block",
"ceph daemon osd.123 bluestore allocator score block",
"ceph daemon OSD_ID bluestore allocator dump block",
"ceph daemon osd.123 bluestore allocator dump block",
"cephadm shell --name osd. ID",
"cephadm shell --name osd.2 Inferring fsid 110bad0a-bc57-11ee-8138-fa163eb9ffc2 Inferring config /var/lib/ceph/110bad0a-bc57-11ee-8138-fa163eb9ffc2/osd.2/config Using ceph image with id `17334f841482` and tag `ceph-6-rhel-9-containers-candidate-59483-20240301201929` created on 2024-03-01 20:22:41 +0000 UTC registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:09fc3e5baf198614d70669a106eb87dbebee16d4e91484375778d4adbccadacd",
"ceph-bluestore-tool --path PATH_TO_OSD_DATA_DIRECTORY --allocator block free-score",
"ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-score",
"ceph-bluestore-tool --path PATH_TO_OSD_DATA_DIRECTORY --allocator block free-dump block: { \"fragmentation_rating\": 0.018290238194701977 }",
"ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-dump block: { \"capacity\": 21470642176, \"alloc_unit\": 4096, \"alloc_type\": \"hybrid\", \"alloc_name\": \"block\", \"extents\": [ { \"offset\": \"0x370000\", \"length\": \"0x20000\" }, { \"offset\": \"0x3a0000\", \"length\": \"0x10000\" }, { \"offset\": \"0x3f0000\", \"length\": \"0x20000\" }, { \"offset\": \"0x460000\", \"length\": \"0x10000\" },",
"cephadm shell",
"ceph config get osd bluefs_buffered_io",
"ceph config get OSD_ID bluefs_buffered_io",
"ceph config get osd.2 bluefs_buffered_io",
"ceph config show OSD_ID bluefs_buffered_io",
"ceph config show osd.3 bluefs_buffered_io",
"cephadm shell",
"ceph daemon osd. OSD_ID bluefs stats",
"ceph daemon osd.1 bluefs stats 1 : device size 0x3bfc00000 : using 0x1a428000(420 MiB) wal_total:0, db_total:15296836403, slow_total:0",
"ceph daemon osd.1 bluefs stats 0 : 1 : device size 0x1dfbfe000 : using 0x1100000(17 MiB) 2 : device size 0x27fc00000 : using 0x248000(2.3 MiB) RocksDBBlueFSVolumeSelector: wal_total:0, db_total:7646425907, slow_total:10196562739, db_avail:935539507 Usage matrix: DEV/LEV WAL DB SLOW * * REAL FILES LOG 0 B 4 MiB 0 B 0 B 0 B 756 KiB 1 WAL 0 B 4 MiB 0 B 0 B 0 B 3.3 MiB 1 DB 0 B 9 MiB 0 B 0 B 0 B 76 KiB 10 SLOW 0 B 0 B 0 B 0 B 0 B 0 B 0 TOTALS 0 B 17 MiB 0 B 0 B 0 B 0 B 12 MAXIMUMS: LOG 0 B 4 MiB 0 B 0 B 0 B 756 KiB WAL 0 B 4 MiB 0 B 0 B 0 B 3.3 MiB DB 0 B 11 MiB 0 B 0 B 0 B 112 KiB SLOW 0 B 0 B 0 B 0 B 0 B 0 B TOTALS 0 B 17 MiB 0 B 0 B 0 B 0 B"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/administration_guide/bluestore |
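To put the fragmentation table above to work across a whole host, a small loop such as the following can be run inside a cephadm shell on an OSD node. It is an illustrative sketch rather than a command from this chapter: the admin socket path and the JSON field name parsed with jq are assumptions that may need adjusting for your release.

#!/bin/bash
# Report the online fragmentation score of every local OSD and flag high scores.
THRESHOLD=0.7    # "considerable" and worse, per the fragmentation table above

for sock in /var/run/ceph/ceph-osd.*.asok; do
    id=$(basename "${sock}" .asok | cut -d. -f2)
    score=$(ceph daemon "osd.${id}" bluestore allocator score block | jq -r '.fragmentation_rating')
    printf 'osd.%s fragmentation score: %s\n' "${id}" "${score}"
    if awk -v s="${score}" -v t="${THRESHOLD}" 'BEGIN { exit !(s >= t) }'; then
        echo "  osd.${id} is at or above ${THRESHOLD}: review the fragmentation guidance above"
    fi
done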
Part IV. Appendices | Part IV. Appendices Tools and techniques to help identify, analyze, and address potential problems. It also covers best practices for reporting bugs, ensuring that issues are clearly communicated for prompt resolution. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/appendices |
Installing on-premise with Assisted Installer | Installing on-premise with Assisted Installer OpenShift Container Platform 4.16 Installing OpenShift Container Platform on-premise with the Assisted Installer Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on-premise_with_assisted_installer/index |
1.4. Pacemaker Architecture Components | 1.4. Pacemaker Architecture Components A cluster configured with Pacemaker comprises separate component daemons that monitor cluster membership, scripts that manage the services, and resource management subsystems that monitor the disparate resources. The following components form the Pacemaker architecture: Cluster Information Base (CIB) The Pacemaker information daemon, which uses XML internally to distribute and synchronize current configuration and status information from the Designated Coordinator (DC) - a node assigned by Pacemaker to store and distribute cluster state and actions by means of the CIB - to all other cluster nodes. Cluster Resource Management Daemon (CRMd) Pacemaker cluster resource actions are routed through this daemon. Resources managed by CRMd can be queried by client systems, moved, instantiated, and changed when needed. Each cluster node also includes a local resource manager daemon (LRMd) that acts as an interface between CRMd and resources. LRMd passes commands from CRMd to agents, such as starting and stopping and relaying status information. Shoot the Other Node in the Head (STONITH) Often deployed in conjunction with a power switch, STONITH acts as a cluster resource in Pacemaker that processes fence requests, forcefully powering down nodes and removing them from the cluster to ensure data integrity. STONITH is configured in CIB and can be monitored as a normal cluster resource. corosync corosync is the component - and a daemon of the same name - that serves the core membership and member-communication needs for high availability clusters. It is required for the High Availability Add-On to function. In addition to those membership and messaging functions, corosync also: Manages quorum rules and determination. Provides messaging capabilities for applications that coordinate or operate across multiple members of the cluster and thus must communicate stateful or other information between instances. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/s1-pacemakerarchitecture-haao |
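The components described in this section can be observed directly on a running cluster node. The commands below are standard pcs and corosync utilities rather than part of the overview itself; run them as root on any cluster member.

#!/bin/bash
# Surface the Pacemaker architecture components on a live cluster node.
pcs status                 # resources, fencing (STONITH), and daemon status in one view
pcs cluster status         # cluster and node status reported through the CIB/CRMd
pcs stonith                # configured STONITH (fence) resources
corosync-quorumtool -s     # corosync membership and quorum state
systemctl status pacemaker corosync --no-pager   # the underlying daemons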
Installing on OpenStack | Installing on OpenStack OpenShift Container Platform 4.18 Installing OpenShift Container Platform on OpenStack Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_openstack/index |
Red Hat JBoss Web Server 6.0 Service Pack 1 Release Notes | Red Hat JBoss Web Server 6.0 Service Pack 1 Release Notes Red Hat JBoss Web Server 6.0 For Use with the Red Hat JBoss Web Server 6.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_1_release_notes/index |
Backup and restore | Backup and restore OpenShift Container Platform 4.9 Backing up and restoring your OpenShift Container Platform cluster Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/backup_and_restore/index |
Chapter 6. Configuring the discovery image | Chapter 6. Configuring the discovery image The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can use Ignition to customize the discovery image. Note Modifications to the discovery image will not persist in the system. 6.1. Creating an Ignition configuration file Ignition is a low-level system configuration utility, which is part of the temporary initial root filesystem, the initramfs . When Ignition runs on the first boot, it finds configuration data in the Ignition configuration file and applies it to the host before switch_root is called to pivot to the host's root filesystem. Ignition uses a JSON configuration specification file to represent the set of changes that occur on the first boot. Important Ignition versions newer than 3.2 are not supported, and will raise an error. Procedure Create an Ignition file and specify the configuration specification version: USD vim ~/ignition.conf { "ignition": { "version": "3.1.0" } } Add configuration data to the Ignition file. For example, add a password to the core user. Generate a password hash: USD openssl passwd -6 Add the generated password hash to the core user: { "ignition": { "version": "3.1.0" }, "passwd": { "users": [ { "name": "core", "passwordHash": "USD6USDspamUSDM5LGSMGyVD.9XOboxcwrsnwNdF4irpJdAWy.1Ry55syyUiUssIzIAHaOrUHr2zg6ruD8YNBPW9kW0H8EnKXyc1" } ] } } Save the Ignition file and export it to the IGNITION_FILE variable: USD export IGNITION_FILE=~/ignition.conf 6.2. Modifying the discovery image with Ignition Once you create an Ignition configuration file, you can modify the discovery image by patching the infrastructure environment using the Assisted Installer API. Prerequisites If you used the web console to create the cluster, you have set up the API authentication. You have an infrastructure environment and you have exported the infrastructure environment id to the INFRA_ENV_ID variable. You have a valid Ignition file and have exported the file name as USDIGNITION_FILE . Procedure Create an ignition_config_override JSON object and redirect it to a file: USD jq -n \ --arg IGNITION "USD(jq -c . USDIGNITION_FILE)" \ '{ignition_config_override: USDIGNITION}' \ > discovery_ignition.json Refresh the API token: USD source refresh-token Patch the infrastructure environment: USD curl \ --header "Authorization: Bearer USDAPI_TOKEN" \ --header "Content-Type: application/json" \ -XPATCH \ -d @discovery_ignition.json \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID | jq The ignition_config_override object references the Ignition file. Download the updated discovery image. | [
"vim ~/ignition.conf",
"{ \"ignition\": { \"version\": \"3.1.0\" } }",
"openssl passwd -6",
"{ \"ignition\": { \"version\": \"3.1.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"passwordHash\": \"USD6USDspamUSDM5LGSMGyVD.9XOboxcwrsnwNdF4irpJdAWy.1Ry55syyUiUssIzIAHaOrUHr2zg6ruD8YNBPW9kW0H8EnKXyc1\" } ] } }",
"export IGNITION_FILE=~/ignition.conf",
"jq -n --arg IGNITION \"USD(jq -c . USDIGNITION_FILE)\" '{ignition_config_override: USDIGNITION}' > discovery_ignition.json",
"source refresh-token",
"curl --header \"Authorization: Bearer USDAPI_TOKEN\" --header \"Content-Type: application/json\" -XPATCH -d @discovery_ignition.json https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID | jq"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_configuring-the-discovery-image |
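After the PATCH call, it can be useful to confirm that the override was actually stored. The request below is a hypothetical follow-up rather than a documented step: it reuses the API_TOKEN and INFRA_ENV_ID variables from the procedure, and it assumes the infra-env response echoes the ignition_config_override field used in the PATCH body; if it does not, inspect the full JSON instead.

#!/bin/bash
# Read the infrastructure environment back and show the stored Ignition override.
set -e
source refresh-token    # refresh the API token, as in the procedure above

curl -s \
  --header "Authorization: Bearer ${API_TOKEN}" \
  "https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}" \
  | jq '.ignition_config_override'   # embedded JSON is displayed as an escaped string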
Chapter 134. Hazelcast List Component | Chapter 134. Hazelcast List Component Available as of Camel version 2.7 The Hazelcast List component is one of Camel Hazelcast Components which allows you to access Hazelcast distributed list. 134.1. Options The Hazelcast List component supports 3 options, which are listed below. Name Description Default Type hazelcastInstance (advanced) The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. HazelcastInstance hazelcastMode (advanced) The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Hazelcast List endpoint is configured using URI syntax: with the following path and query parameters: 134.1.1. Path Parameters (1 parameters): Name Description Default Type cacheName Required The name of the cache String 134.1.2. Query Parameters (16 parameters): Name Description Default Type defaultOperation (common) To specify a default operation to use, if no operation header has been provided. HazelcastOperation hazelcastInstance (common) The hazelcast instance reference which can be used for hazelcast endpoint. HazelcastInstance hazelcastInstanceName (common) The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. String reliable (common) Define if the endpoint will use a reliable Topic struct or not. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean pollingTimeout (consumer) Define the polling timeout of the Queue consumer in Poll mode 10000 long poolSize (consumer) Define the Pool size for Queue Consumer Executor 1 int queueConsumerMode (consumer) Define the Queue Consumer mode: Listen or Poll Listen HazelcastQueueConsumer Mode exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean concurrentConsumers (seda) To use concurrent consumers polling from the SEDA queue. 1 int onErrorDelay (seda) Milliseconds before consumer continues polling after an error has occurred. 1000 int pollTimeout (seda) The timeout used when consuming from the SEDA queue. When a timeout occurs, the consumer can check whether it is allowed to continue running. 
Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int transacted (seda) If set to true then the consumer runs in transaction mode, where the messages in the seda queue will only be removed if the transaction commits, which happens when the processing is complete. false boolean transferExchange (seda) If set to true the whole Exchange will be transfered. If header or body contains not serializable objects, they will be skipped. false boolean 134.2. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.hazelcast-list.customizer.hazelcast-instance.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.hazelcast-list.customizer.hazelcast-instance.override Configure if the cache manager eventually set on the component should be overridden by the customizer. false Boolean camel.component.hazelcast-list.enabled Enable hazelcast-list component true Boolean camel.component.hazelcast-list.hazelcast-instance The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. The option is a com.hazelcast.core.HazelcastInstance type. String camel.component.hazelcast-list.hazelcast-mode The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String camel.component.hazelcast-list.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 134.3. List producer - to("hazelcast-list:foo") The list producer provides 7 operations: * add * addAll * set * get * removevalue * removeAll * clear 134.3.1. Sample for add : from("direct:add") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD)) .toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX); 134.3.2. Sample for get : from("direct:get") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET)) .toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX) .to("seda:out"); 134.3.3. Sample for setvalue : from("direct:set") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.SET_VALUE)) .toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX); 134.3.4. Sample for removevalue : from("direct:removevalue") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_VALUE)) .toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX); Note that CamelHazelcastObjectIndex header is used for indexing purpose. 134.4. List consumer - from("hazelcast-list:foo") The list consumer provides 2 operations: * add * remove fromF("hazelcast-%smm", HazelcastConstants.LIST_PREFIX) .log("object...") .choice() .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED)) .log("...added") .to("mock:added") .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.REMOVED)) .log("...removed") .to("mock:removed") .otherwise() .log("fail!"); | [
"hazelcast-list:cacheName",
"from(\"direct:add\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD)) .toF(\"hazelcast-%sbar\", HazelcastConstants.LIST_PREFIX);",
"from(\"direct:get\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET)) .toF(\"hazelcast-%sbar\", HazelcastConstants.LIST_PREFIX) .to(\"seda:out\");",
"from(\"direct:set\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.SET_VALUE)) .toF(\"hazelcast-%sbar\", HazelcastConstants.LIST_PREFIX);",
"from(\"direct:removevalue\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_VALUE)) .toF(\"hazelcast-%sbar\", HazelcastConstants.LIST_PREFIX);",
"fromF(\"hazelcast-%smm\", HazelcastConstants.LIST_PREFIX) .log(\"object...\") .choice() .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.ADDED)) .log(\"...added\") .to(\"mock:added\") .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.REMOVED)) .log(\"...removed\") .to(\"mock:removed\") .otherwise() .log(\"fail!\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hazelcast-list-component |
Appendix E. Pools, placement groups, and CRUSH configuration options | Appendix E. Pools, placement groups, and CRUSH configuration options The Ceph options that govern pools, placement groups, and the CRUSH algorithm. Configuration option Description Type Default mon_allow_pool_delete Allows a monitor to delete a pool. In RHCS 3 and later releases, the monitor cannot delete the pool by default as an added measure to protect data. Boolean false mon_max_pool_pg_num The maximum number of placement groups per pool. Integer 65536 mon_pg_create_interval Number of seconds between PG creation in the same Ceph OSD Daemon. Float 30.0 mon_pg_stuck_threshold Number of seconds after which PGs can be considered as being stuck. 32-bit Integer 300 mon_pg_min_inactive Ceph issues a HEALTH_ERR status in the cluster log if the number of PGs that remain inactive longer than the mon_pg_stuck_threshold exceeds this setting. The default setting is one PG. A non-positive number disables this setting. Integer 1 mon_pg_warn_max_per_osd Ceph issues a HEALTH_WARN status in the cluster log if the average number of PGs per OSD in the cluster is greater than this setting. A non-positive number disables this setting. Integer 300 mon_pg_warn_min_per_osd Ceph issues a HEALTH_WARN status in the cluster log if the average number of PGs per OSD in the cluster is less than this setting. A non-positive number disables this setting. Integer 30 mon_pg_warn_min_objects Do not warn if the total number of objects in the cluster is below this number. Integer 1000 mon_pg_warn_min_pool_objects Do not warn on pools whose object number is below this number. Integer 1000 mon_pg_check_down_all_threshold The threshold of down OSDs by percentage after which Ceph checks all PGs to ensure they are not stuck or stale. Float 0.5 mon_pg_warn_max_object_skew Ceph issue a HEALTH_WARN status in the cluster log if the average number of objects in a pool is greater than mon pg warn max object skew times the average number of objects for all pools. A non-positive number disables this setting. Float 10 mon_delta_reset_interval The number of seconds of inactivity before Ceph resets the PG delta to zero. Ceph keeps track of the delta of the used space for each pool to aid administrators in evaluating the progress of recovery and performance. Integer 10 mon_osd_max_op_age The maximum age in seconds for an operation to complete before issuing a HEALTH_WARN status. Float 32.0 osd_pg_bits Placement group bits per Ceph OSD Daemon. 32-bit Integer 6 osd_pgp_bits The number of bits per Ceph OSD Daemon for Placement Groups for Placement purpose (PGPs). 32-bit Integer 6 osd_crush_chooseleaf_type The bucket type to use for chooseleaf in a CRUSH rule. Uses ordinal rank rather than name. 32-bit Integer 1 . Typically a host containing one or more Ceph OSD Daemons. osd_pool_default_crush_replicated_ruleset The default CRUSH ruleset to use when creating a replicated pool. 8-bit Integer 0 osd_pool_erasure_code_stripe_unit Sets the default size, in bytes, of a chunk of an object stripe for erasure coded pools. Every object of size S will be stored as N stripes, with each data chunk receiving stripe unit bytes. Each stripe of N * stripe unit bytes will be encoded/decoded individually. This option can be overridden by the stripe_unit setting in an erasure code profile. Unsigned 32-bit Integer 4096 osd_pool_default_size Sets the number of replicas for objects in the pool. The default value is the same as ceph osd pool set {pool-name} size {size} . 
32-bit Integer 3 osd_pool_default_min_size Sets the minimum number of written replicas for objects in the pool in order to acknowledge a write operation to the client. If the minimum is not met, Ceph will not acknowledge the write to the client. This setting ensures a minimum number of replicas when operating in degraded mode. 32-bit Integer 0 , which means no particular minimum. If 0 , minimum is size - (size / 2) . osd_pool_default_pg_num The default number of placement groups for a pool. The default value is the same as pg_num with mkpool . 32-bit Integer 32 osd_pool_default_pgp_num The default number of placement groups for placement for a pool. The default value is the same as pgp_num with mkpool . PG and PGP should be equal. 32-bit Integer 0 osd_pool_default_flags The default flags for new pools. 32-bit Integer 0 osd_max_pgls The maximum number of placement groups to list. A client requesting a large number can tie up the Ceph OSD Daemon. Unsigned 64-bit Integer 1024 osd_min_pg_log_entries The minimum number of placement group logs to maintain when trimming log files. 32-bit Int Unsigned 250 osd_default_data_pool_replay_window The time, in seconds, for an OSD to wait for a client to replay a request. 32-bit Integer 45 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/pools-placement-groups-and-crush-configuration-options_conf |
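Most of the options in the table above can also be inspected and adjusted at runtime with the ceph CLI instead of the configuration file. The following sketch is illustrative only: it assumes administrative access to the cluster, and the pool name testpool and the chosen values are hypothetical examples, not recommendations.
# Create a replicated pool with an explicit placement-group count
ceph osd pool create testpool 32
# Set per-pool replica counts (see osd_pool_default_size / osd_pool_default_min_size above)
ceph osd pool set testpool size 3
ceph osd pool set testpool min_size 2
# Confirm the resulting values
ceph osd pool get testpool size
ceph osd pool get testpool pg_num
# Change the cluster-wide default used for newly created pools
ceph config set global osd_pool_default_size 3
# Allow pool deletion only while you actually need it (see mon_allow_pool_delete above)
ceph config set mon mon_allow_pool_delete true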
Chapter 12. ca | Chapter 12. ca This chapter describes the commands under the ca command. 12.1. ca get Retrieve a CA by providing its URI. Usage: Table 12.1. Positional arguments Value Summary URI The uri reference for the ca. Table 12.2. Command arguments Value Summary -h, --help Show this help message and exit Table 12.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 12.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 12.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 12.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 12.2. ca list List CAs. Usage: Table 12.7. Command arguments Value Summary -h, --help Show this help message and exit --limit LIMIT, -l LIMIT Specify the limit to the number of items to list per page (default: 10; maximum: 100) --offset OFFSET, -o OFFSET Specify the page offset (default: 0) --name NAME, -n NAME Specify the ca name (default: none) Table 12.8. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 12.9. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 12.10. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 12.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack ca get [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] URI",
"openstack ca list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--limit LIMIT] [--offset OFFSET] [--name NAME]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/ca |
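To tie the two subcommands together, the following sketch lists the available CAs and then retrieves one by its URI reference. It assumes an authenticated OpenStack CLI session; the Barbican host, port, and UUID in the URI are placeholders.
# List up to 20 CAs in YAML form (see the --limit and -f options above)
openstack ca list --limit 20 -f yaml
# Retrieve a single CA by its URI reference, formatted as JSON
openstack ca get https://<barbican-host>:9311/v1/cas/<ca-uuid> -f json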
35.2. Configuring an iface for Software iSCSI | 35.2. Configuring an iface for Software iSCSI As mentioned earlier, an iface configuration is required for each network object that will be used to bind a session. To create an iface configuration for software iSCSI, run the following command: This will create a new, empty iface configuration with the specified iface_name . If an existing iface configuration already has the same iface_name , it will be overwritten with a new, empty one. To configure a specific setting of an iface configuration, use the following command: Example 35.4. Set MAC address of iface0 For example, to set the MAC address ( hardware_address ) of iface0 to 00:0F:1F:92:6B:BF , run: Warning Do not use default or iser as iface names. Both strings are special values used by iscsiadm for backward compatibility. Any manually-created iface configurations named default or iser will disable backward compatibility. A short combined example follows the command listing below. | [
"iscsiadm -m iface -I iface_name --op=new",
"iscsiadm -m iface -I iface_name --op=update -n iface. setting -v hw_address",
"iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/iface-config-software-iscsi |
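The commands above can be combined into a short sequence; a minimal sketch, using the iface name iface0 and the example MAC address from Example 35.4 (both placeholders):
IFACE=iface0              # any name except the reserved values "default" and "iser"
MAC=00:0F:1F:92:6B:BF     # example MAC address
# Create the empty iface configuration
iscsiadm -m iface -I "$IFACE" --op=new
# Bind it to the NIC by hardware address
iscsiadm -m iface -I "$IFACE" --op=update -n iface.hwaddress -v "$MAC"
# Display the resulting settings, then list all configured ifaces
iscsiadm -m iface -I "$IFACE"
iscsiadm -m iface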
Chapter 11. Enabling and disabling features | Chapter 11. Enabling and disabling features Red Hat build of Keycloak has packed some functionality in features, including some disabled features, such as Technology Preview and deprecated features. Other features are enabled by default, but you can disable them if they do not apply to your use of Red Hat build of Keycloak. 11.1. Enabling features Some supported features, and all preview features, are disabled by default. To enable a feature, enter this command: bin/kc.[sh|bat] build --features="<name>[,<name>]" For example, to enable docker and token-exchange , enter this command: bin/kc.[sh|bat] build --features="docker,token-exchange" To enable all preview features, enter this command: bin/kc.[sh|bat] build --features="preview" Enabled feature may be versioned, or unversioned. If you use a versioned feature name, e.g. feature:v1, that exact feature version will be enabled as long as it still exists in the runtime. If you instead use an unversioned name, e.g. just feature, the selection of the particular supported feature version may change from release to release according to the following precedence: The highest default supported version The highest non-default supported version The highest deprecated version The highest preview version The highest experimental version 11.2. Disabling features To disable a feature that is enabled by default, enter this command: bin/kc.[sh|bat] build --features-disabled="<name>[,<name>]" For example to disable impersonation , enter this command: bin/kc.[sh|bat] build --features-disabled="impersonation" It is not allowed to have a feature in both the features-disabled list and the features list. When a feature is disabled all versions of that feature are disabled. 11.3. Supported features The following list contains supported features that are enabled by default, and can be disabled if not needed. account-api Account Management REST API account3 Account Console version 3 admin-api Admin API admin2 New Admin Console authorization Authorization Service ciba OpenID Connect Client Initiated Backchannel Authentication (CIBA) client-policies Client configuration policies device-flow OAuth 2.0 Device Authorization Grant hostname-v1 Hostname Options V1 impersonation Ability for admins to impersonate users js-adapter Host keycloak.js and keycloak-authz.js through the Keycloak server kerberos Kerberos par OAuth 2.0 Pushed Authorization Requests (PAR) step-up-authentication Step-up Authentication web-authn W3C Web Authentication (WebAuthn) 11.3.1. Disabled by default The following list contains supported features that are disabled by default, and can be enabled if needed. docker Docker Registry protocol fips FIPS 140-2 mode multi-site Multi-site support 11.4. Preview features Preview features are disabled by default and are not recommended for use in production. These features may change or be removed at a future release. admin-fine-grained-authz Fine-Grained Admin Permissions client-secret-rotation Client Secret Rotation dpop OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer recovery-codes Recovery codes scripts Write custom authenticators using JavaScript token-exchange Token Exchange Service update-email Update Email Action 11.5. Deprecated features The following list contains deprecated features that will be removed in a future release. These features are disabled by default. 
account2 Account Console version 2 linkedin-oauth LinkedIn Social Identity Provider based on OAuth offline-session-preloading Offline session preloading 11.6. Relevant options Value features 🛠 Enables a set of one or more features. CLI: --features Env: KC_FEATURES account-api[:v1] , account2[:v1] , account3[:v1] , admin-api[:v1] , admin-fine-grained-authz[:v1] , admin2[:v1] , authorization[:v1] , ciba[:v1] , client-policies[:v1] , client-secret-rotation[:v1] , client-types[:v1] , declarative-ui[:v1] , device-flow[:v1] , docker[:v1] , dpop[:v1] , dynamic-scopes[:v1] , fips[:v1] , hostname[:v1] , impersonation[:v1] , js-adapter[:v1] , kerberos[:v1] , linkedin-oauth[:v1] , login2[:v1] , multi-site[:v1] , offline-session-preloading[:v1] , oid4vc-vci[:v1] , par[:v1] , preview , recovery-codes[:v1] , scripts[:v1] , step-up-authentication[:v1] , token-exchange[:v1] , transient-users[:v1] , update-email[:v1] , web-authn[:v1] features-disabled 🛠 Disables a set of one or more features. CLI: --features-disabled Env: KC_FEATURES_DISABLED account-api , account2 , account3 , admin-api , admin-fine-grained-authz , admin2 , authorization , ciba , client-policies , client-secret-rotation , client-types , declarative-ui , device-flow , docker , dpop , dynamic-scopes , fips , impersonation , js-adapter , kerberos , linkedin-oauth , login2 , multi-site , offline-session-preloading , oid4vc-vci , par , preview , recovery-codes , scripts , step-up-authentication , token-exchange , transient-users , update-email , web-authn | [
"bin/kc.[sh|bat] build --features=\"<name>[,<name>]\"",
"bin/kc.[sh|bat] build --features=\"docker,token-exchange\"",
"bin/kc.[sh|bat] build --features=\"preview\"",
"bin/kc.[sh|bat] build --features-disabled=\"<name>[,<name>]\"",
"bin/kc.[sh|bat] build --features-disabled=\"impersonation\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/features- |
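As a concrete illustration of combining the two options, the following sketch enables two disabled-by-default features while disabling one default feature in the same build; the particular feature selection is only an example, and the environment-variable form uses the variables documented in the options table above.
# Enable docker and token-exchange, disable impersonation, in one build
bin/kc.sh build --features="docker,token-exchange" --features-disabled="impersonation"
# Equivalent form using the documented environment variables
KC_FEATURES="docker,token-exchange" KC_FEATURES_DISABLED="impersonation" bin/kc.sh build
# Pin an exact feature version instead of the unversioned name
bin/kc.sh build --features="token-exchange:v1"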
1.3. Configuring the iptables Firewall to Allow Cluster Components | 1.3. Configuring the iptables Firewall to Allow Cluster Components Note The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present. The example here, which opens the ports that are generally required by a Pacemaker cluster, should be modified to suit local conditions. Table 1.1, "Ports to Enable for High Availability Add-On" shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what the port is used for. You can enable all of these ports by means of the firewalld daemon by executing the following commands. Table 1.1. Ports to Enable for High Availability Add-On Port When Required TCP 2224 Required on all nodes (needed by the pcsd Web UI and required for node-to-node communication) It is crucial to open port 2224 in such a way that pcs from any node can talk to all nodes in the cluster, including itself. When using the Booth cluster ticket manager or a quorum device you must open port 2224 on all related hosts, such as Booth arbiters or the quorum device host. TCP 3121 Required on all nodes if the cluster has any Pacemaker Remote nodes Pacemaker's crmd daemon on the full cluster nodes will contact the pacemaker_remoted daemon on Pacemaker Remote nodes at port 3121. If a separate interface is used for cluster communication, the port only needs to be open on that interface. At a minimum, the port should open on Pacemaker Remote nodes to full cluster nodes. Because users may convert a host between a full node and a remote node, or run a remote node inside a container using the host's network, it can be useful to open the port to all nodes. It is not necessary to open the port to any hosts other than nodes. TCP 5403 Required on the quorum device host when using a quorum device with corosync-qnetd . The default value can be changed with the -p option of the corosync-qnetd command. UDP 5404 Required on corosync nodes if corosync is configured for multicast UDP UDP 5405 Required on all corosync nodes (needed by corosync ) TCP 21064 Required on all nodes if the cluster contains any resources requiring DLM (such as clvm or GFS2 ) TCP 9929, UDP 9929 Required to be open on all cluster nodes and booth arbitrator nodes to connections from any of those same nodes when the Booth ticket manager is used to establish a multi-site cluster. | [
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-firewalls-HAAR |
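If the high-availability firewalld service definition is not available in your environment, the individual ports from Table 1.1 can be opened directly instead; a sketch, to be trimmed to the components you actually run (for example, drop 3121/tcp if you have no Pacemaker Remote nodes and 21064/tcp if you do not use DLM):
for port in 2224/tcp 3121/tcp 5403/tcp 21064/tcp 9929/tcp; do
    firewall-cmd --permanent --add-port="$port"
done
for port in 5404/udp 5405/udp 9929/udp; do
    firewall-cmd --permanent --add-port="$port"
done
firewall-cmd --reload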
Chapter 7. Validate glance images | Chapter 7. Validate glance images After enabling Barbican, you can configure the Image Service (glance) to verify that an uploaded image has not been tampered with. In this implementation, the image is first signed with a key that is stored in barbican. The image is then uploaded to glance, along with the accompanying signing information. As a result, the image's signature is verified before each use, with the instance build process failing if the signature does not match. Barbican's integration with glance means that you can use the openssl command with your private key to sign glance images before uploading them. 7.1. Enable glance image validation In your environment file, enable image verification with the VerifyGlanceSignatures: True setting. You must re-run the openstack overcloud deploy command for this setting to take effect. To verify that glance image validation is enabled, run the following command on an overcloud Compute node: Note If you use Ceph as the back end for the Image and Compute services, a CoW clone is created. Therefore, Image signing verification cannot be performed. 7.2. Validate an image To configure a glance image for validation, complete the following steps: Confirm that glance is configured to use barbican: Generate a private key and convert it to the required format: Add the key to the barbican secret store: Note Record the resulting UUID for use in a later step. In this example, the certificate's UUID is 5df14c2b-f221-4a02-948e-48a61edd3f5b . Use private_key.pem to sign the image and generate the .signature file. For example: Convert the resulting .signature file into base64 format: Load the base64 value into a variable to use it in the subsequent command: Upload the signed image to glance. For img_signature_certificate_uuid , you must specify the UUID of the signing key you previously uploaded to barbican: You can view glance's image validation activities in the Compute log: /var/log/containers/nova/nova-compute.log . For example, you can expect the following entry when the instance is booted: | [
"sudo crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf glance verify_glance_signatures",
"sudo crudini --get /var/lib/config-data/puppet-generated/glance_api/etc/glance/glance-api.conf key_manager backend castellan.key_manager.barbican_key_manager.BarbicanKeyManager",
"openssl genrsa -out private_key.pem 1024 openssl rsa -pubout -in private_key.pem -out public_key.pem openssl req -new -key private_key.pem -out cert_request.csr openssl x509 -req -days 14 -in cert_request.csr -signkey private_key.pem -out x509_signing_cert.crt",
"source ~/overcloudrc openstack secret store --name signing-cert --algorithm RSA --secret-type certificate --payload-content-type \"application/octet-stream\" --payload-content-encoding base64 --payload \"USD(base64 x509_signing_cert.crt)\" -c 'Secret href' -f value https://192.168.123.170:9311/v1/secrets/5df14c2b-f221-4a02-948e-48a61edd3f5b",
"openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss -out cirros-0.4.0.signature cirros-0.4.0-x86_64-disk.img",
"base64 -w 0 cirros-0.4.0.signature > cirros-0.4.0.signature.b64",
"cirros_signature_b64=USD(cat cirros-0.4.0.signature.b64)",
"openstack image create --container-format bare --disk-format qcow2 --property img_signature=\"USDcirros_signature_b64\" --property img_signature_certificate_uuid=\"5df14c2b-f221-4a02-948e-48a61edd3f5b\" --property img_signature_hash_method=\"SHA-256\" --property img_signature_key_type=\"RSA-PSS\" cirros_0_4_0_signed --file cirros-0.4.0-x86_64-disk.img +--------------------------------+----------------------------------------------------------------------------------+ | Property | Value | +--------------------------------+----------------------------------------------------------------------------------+ | checksum | None | | container_format | bare | | created_at | 2018-01-23T05:37:31Z | | disk_format | qcow2 | | id | d3396fa0-2ea2-4832-8a77-d36fa3f2ab27 | | img_signature | lcI7nGgoKxnCyOcsJ4abbEZEpzXByFPIgiPeiT+Otjz0yvW00KNN3fI0AA6tn9EXrp7fb2xBDE4UaO3v | | | IFquV/s3mU4LcCiGdBAl3pGsMlmZZIQFVNcUPOaayS1kQYKY7kxYmU9iq/AZYyPw37KQI52smC/zoO54 | | | zZ+JpnfwIsM= | | img_signature_certificate_uuid | ba3641c2-6a3d-445a-8543-851a68110eab | | img_signature_hash_method | SHA-256 | | img_signature_key_type | RSA-PSS | | min_disk | 0 | | min_ram | 0 | | name | cirros_0_4_0_signed | | owner | 9f812310df904e6ea01e1bacb84c9f1a | | protected | False | | size | None | | status | queued | | tags | [] | | updated_at | 2018-01-23T05:37:31Z | | virtual_size | None | | visibility | shared | +--------------------------------+----------------------------------------------------------------------------------+",
"2018-05-24 12:48:35.256 1 INFO nova.image.glance [req-7c271904-4975-4771-9d26-cbea6c0ade31 b464b2fd2a2140e9a88bbdacf67bdd8c a3db2f2beaee454182c95b646fa7331f - default default] Image signature verification succeeded for image d3396fa0-2ea2-4832-8a77-d36fa3f2ab27"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/manage_secrets_with_openstack_key_manager/validate_glance_images |
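Before uploading, you can sanity-check the signature locally with the public key; this step is not part of the documented procedure, and the file names follow the earlier examples.
# Should print "Verified OK" if the RSA-PSS/SHA-256 signature matches the image
openssl dgst -sha256 -verify public_key.pem \
    -sigopt rsa_padding_mode:pss \
    -signature cirros-0.4.0.signature cirros-0.4.0-x86_64-disk.img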
Chapter 4. API index | Chapter 4. API index API API group AdminPolicyBasedExternalRoute k8s.ovn.org/v1 AlertingRule monitoring.openshift.io/v1 Alertmanager monitoring.coreos.com/v1 AlertmanagerConfig monitoring.coreos.com/v1beta1 AlertRelabelConfig monitoring.openshift.io/v1 APIRequestCount apiserver.openshift.io/v1 APIServer config.openshift.io/v1 APIService apiregistration.k8s.io/v1 AppliedClusterResourceQuota quota.openshift.io/v1 Authentication config.openshift.io/v1 Authentication operator.openshift.io/v1 BareMetalHost metal3.io/v1alpha1 Binding v1 BMCEventSubscription metal3.io/v1alpha1 BrokerTemplateInstance template.openshift.io/v1 Build build.openshift.io/v1 Build config.openshift.io/v1 BuildConfig build.openshift.io/v1 BuildLog build.openshift.io/v1 BuildRequest build.openshift.io/v1 CatalogSource operators.coreos.com/v1alpha1 CertificateSigningRequest certificates.k8s.io/v1 CloudCredential operator.openshift.io/v1 CloudPrivateIPConfig cloud.network.openshift.io/v1 ClusterAutoscaler autoscaling.openshift.io/v1 ClusterCSIDriver operator.openshift.io/v1 ClusterOperator config.openshift.io/v1 ClusterResourceQuota quota.openshift.io/v1 ClusterRole authorization.openshift.io/v1 ClusterRole rbac.authorization.k8s.io/v1 ClusterRoleBinding authorization.openshift.io/v1 ClusterRoleBinding rbac.authorization.k8s.io/v1 ClusterServiceVersion operators.coreos.com/v1alpha1 ClusterVersion config.openshift.io/v1 ComponentStatus v1 Config imageregistry.operator.openshift.io/v1 Config operator.openshift.io/v1 Config samples.operator.openshift.io/v1 ConfigMap v1 Console config.openshift.io/v1 Console operator.openshift.io/v1 ConsoleCLIDownload console.openshift.io/v1 ConsoleExternalLogLink console.openshift.io/v1 ConsoleLink console.openshift.io/v1 ConsoleNotification console.openshift.io/v1 ConsolePlugin console.openshift.io/v1 ConsoleQuickStart console.openshift.io/v1 ConsoleSample console.openshift.io/v1 ConsoleYAMLSample console.openshift.io/v1 ContainerRuntimeConfig machineconfiguration.openshift.io/v1 ControllerConfig machineconfiguration.openshift.io/v1 ControllerRevision apps/v1 ControlPlaneMachineSet machine.openshift.io/v1 CredentialsRequest cloudcredential.openshift.io/v1 CronJob batch/v1 CSIDriver storage.k8s.io/v1 CSINode storage.k8s.io/v1 CSISnapshotController operator.openshift.io/v1 CSIStorageCapacity storage.k8s.io/v1 CustomResourceDefinition apiextensions.k8s.io/v1 DaemonSet apps/v1 Deployment apps/v1 DeploymentConfig apps.openshift.io/v1 DeploymentConfigRollback apps.openshift.io/v1 DeploymentLog apps.openshift.io/v1 DeploymentRequest apps.openshift.io/v1 DNS config.openshift.io/v1 DNS operator.openshift.io/v1 DNSRecord ingress.operator.openshift.io/v1 EgressFirewall k8s.ovn.org/v1 EgressIP k8s.ovn.org/v1 EgressQoS k8s.ovn.org/v1 EgressRouter network.operator.openshift.io/v1 EgressService k8s.ovn.org/v1 Endpoints v1 EndpointSlice discovery.k8s.io/v1 Etcd operator.openshift.io/v1 Event v1 Event events.k8s.io/v1 Eviction policy/v1 FeatureGate config.openshift.io/v1 FirmwareSchema metal3.io/v1alpha1 FlowSchema flowcontrol.apiserver.k8s.io/v1beta3 Group user.openshift.io/v1 HardwareData metal3.io/v1alpha1 HelmChartRepository helm.openshift.io/v1beta1 HorizontalPodAutoscaler autoscaling/v2 HostFirmwareSettings metal3.io/v1alpha1 Identity user.openshift.io/v1 Image config.openshift.io/v1 Image image.openshift.io/v1 ImageContentPolicy config.openshift.io/v1 ImageContentSourcePolicy operator.openshift.io/v1alpha1 ImageDigestMirrorSet config.openshift.io/v1 ImagePruner 
imageregistry.operator.openshift.io/v1 ImageSignature image.openshift.io/v1 ImageStream image.openshift.io/v1 ImageStreamImage image.openshift.io/v1 ImageStreamImport image.openshift.io/v1 ImageStreamLayers image.openshift.io/v1 ImageStreamMapping image.openshift.io/v1 ImageStreamTag image.openshift.io/v1 ImageTag image.openshift.io/v1 ImageTagMirrorSet config.openshift.io/v1 Infrastructure config.openshift.io/v1 Ingress config.openshift.io/v1 Ingress networking.k8s.io/v1 IngressClass networking.k8s.io/v1 IngressController operator.openshift.io/v1 InsightsOperator operator.openshift.io/v1 InstallPlan operators.coreos.com/v1alpha1 IPPool whereabouts.cni.cncf.io/v1alpha1 Job batch/v1 KubeAPIServer operator.openshift.io/v1 KubeControllerManager operator.openshift.io/v1 KubeletConfig machineconfiguration.openshift.io/v1 KubeScheduler operator.openshift.io/v1 KubeStorageVersionMigrator operator.openshift.io/v1 Lease coordination.k8s.io/v1 LimitRange v1 LocalResourceAccessReview authorization.openshift.io/v1 LocalSubjectAccessReview authorization.k8s.io/v1 LocalSubjectAccessReview authorization.openshift.io/v1 Machine machine.openshift.io/v1beta1 MachineAutoscaler autoscaling.openshift.io/v1beta1 MachineConfig machineconfiguration.openshift.io/v1 MachineConfigNode machineconfiguration.openshift.io/v1alpha1 MachineConfigPool machineconfiguration.openshift.io/v1 MachineConfiguration operator.openshift.io/v1 MachineHealthCheck machine.openshift.io/v1beta1 MachineSet machine.openshift.io/v1beta1 Metal3Remediation infrastructure.cluster.x-k8s.io/v1beta1 Metal3RemediationTemplate infrastructure.cluster.x-k8s.io/v1beta1 MutatingWebhookConfiguration admissionregistration.k8s.io/v1 Namespace v1 Network config.openshift.io/v1 Network operator.openshift.io/v1 NetworkAttachmentDefinition k8s.cni.cncf.io/v1 NetworkPolicy networking.k8s.io/v1 Node v1 Node config.openshift.io/v1 OAuth config.openshift.io/v1 OAuthAccessToken oauth.openshift.io/v1 OAuthAuthorizeToken oauth.openshift.io/v1 OAuthClient oauth.openshift.io/v1 OAuthClientAuthorization oauth.openshift.io/v1 OLMConfig operators.coreos.com/v1 OpenShiftAPIServer operator.openshift.io/v1 OpenShiftControllerManager operator.openshift.io/v1 Operator operators.coreos.com/v1 OperatorCondition operators.coreos.com/v2 OperatorGroup operators.coreos.com/v1 OperatorHub config.openshift.io/v1 OperatorPKI network.operator.openshift.io/v1 OverlappingRangeIPReservation whereabouts.cni.cncf.io/v1alpha1 PackageManifest packages.operators.coreos.com/v1 PerformanceProfile performance.openshift.io/v2 PersistentVolume v1 PersistentVolumeClaim v1 Pod v1 PodDisruptionBudget policy/v1 PodMonitor monitoring.coreos.com/v1 PodNetworkConnectivityCheck controlplane.operator.openshift.io/v1alpha1 PodSecurityPolicyReview security.openshift.io/v1 PodSecurityPolicySelfSubjectReview security.openshift.io/v1 PodSecurityPolicySubjectReview security.openshift.io/v1 PodTemplate v1 PreprovisioningImage metal3.io/v1alpha1 PriorityClass scheduling.k8s.io/v1 PriorityLevelConfiguration flowcontrol.apiserver.k8s.io/v1beta3 Probe monitoring.coreos.com/v1 Profile tuned.openshift.io/v1 Project config.openshift.io/v1 Project project.openshift.io/v1 ProjectHelmChartRepository helm.openshift.io/v1beta1 ProjectRequest project.openshift.io/v1 Prometheus monitoring.coreos.com/v1 PrometheusRule monitoring.coreos.com/v1 Provisioning metal3.io/v1alpha1 Proxy config.openshift.io/v1 RangeAllocation security.openshift.io/v1 ReplicaSet apps/v1 ReplicationController v1 ResourceAccessReview 
authorization.openshift.io/v1 ResourceQuota v1 Role authorization.openshift.io/v1 Role rbac.authorization.k8s.io/v1 RoleBinding authorization.openshift.io/v1 RoleBinding rbac.authorization.k8s.io/v1 RoleBindingRestriction authorization.openshift.io/v1 Route route.openshift.io/v1 RuntimeClass node.k8s.io/v1 Scale autoscaling/v1 Scheduler config.openshift.io/v1 Secret v1 SecretList image.openshift.io/v1 SecurityContextConstraints security.openshift.io/v1 SelfSubjectAccessReview authorization.k8s.io/v1 SelfSubjectReview authentication.k8s.io/v1 SelfSubjectRulesReview authorization.k8s.io/v1 SelfSubjectRulesReview authorization.openshift.io/v1 Service v1 ServiceAccount v1 ServiceCA operator.openshift.io/v1 ServiceMonitor monitoring.coreos.com/v1 StatefulSet apps/v1 Storage operator.openshift.io/v1 StorageClass storage.k8s.io/v1 StorageState migration.k8s.io/v1alpha1 StorageVersionMigration migration.k8s.io/v1alpha1 SubjectAccessReview authorization.k8s.io/v1 SubjectAccessReview authorization.openshift.io/v1 SubjectRulesReview authorization.openshift.io/v1 Subscription operators.coreos.com/v1alpha1 Template template.openshift.io/v1 TemplateInstance template.openshift.io/v1 ThanosRuler monitoring.coreos.com/v1 TokenRequest authentication.k8s.io/v1 TokenReview authentication.k8s.io/v1 Tuned tuned.openshift.io/v1 User user.openshift.io/v1 UserIdentityMapping user.openshift.io/v1 UserOAuthAccessToken oauth.openshift.io/v1 ValidatingWebhookConfiguration admissionregistration.k8s.io/v1 VolumeAttachment storage.k8s.io/v1 VolumeSnapshot snapshot.storage.k8s.io/v1 VolumeSnapshotClass snapshot.storage.k8s.io/v1 VolumeSnapshotContent snapshot.storage.k8s.io/v1 | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/api_overview/api-index |
Chapter 1. Introduction to OpenShift Data Foundation | Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for any workload only when it does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, WordPress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On-premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and PyTorch. Note Running a PostgreSQL workload on a CephFS persistent volume is not supported and it is recommended to use a RADOS Block Device (RBD) volume. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/red_hat_openshift_data_foundation_architecture/introduction-to-openshift-data-foundation-4_rhodf
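On a running cluster, the storage classes described above can be listed directly; a quick check, assuming the oc CLI and a default OpenShift Data Foundation installation (the exact class names vary with the storage cluster name).
oc get storageclass
# Typical ODF-provided classes cover RBD (block), CephFS (shared file), and
# NooBaa object bucket claims; match them against the components listed above.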
Chapter 1. Installing Red Hat Developer Hub on OpenShift Container Platform | Chapter 1. Installing Red Hat Developer Hub on OpenShift Container Platform You can install Red Hat Developer Hub on OpenShift Container Platform by using one of the following methods: The Red Hat Developer Hub Operator Ready for immediate use in OpenShift Container Platform after an administrator installs it with OperatorHub Uses Operator Lifecycle Management (OLM) to manage automated subscription updates on OpenShift Container Platform Requires preinstallation of Operator Lifecycle Management (OLM) to manage automated subscription updates on Kubernetes The Red Hat Developer Hub Helm chart Ready for immediate use in both OpenShift Container Platform and Kubernetes Requires manual installation and management Use the installation method that best meets your needs and preferences. Additional resources For more information about choosing an installation method, see Helm Charts vs. Operators For more information about the Operator method, see Understanding Operators . For more information about the Helm chart method, see Understanding Helm . 1.1. Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator You can install Red Hat Developer Hub on OpenShift Container Platform by using the Red Hat Developer Hub Operator in the OpenShift Container Platform console. 1.1.1. Installing the Red Hat Developer Hub Operator As an administrator, you can install the Red Hat Developer Hub Operator. Authorized users can use the Operator to install Red Hat Developer Hub on the following platforms: Red Hat OpenShift Container Platform (OpenShift Container Platform) Amazon Elastic Kubernetes Service (EKS) Microsoft Azure Kubernetes Service (AKS) OpenShift Container Platform is currently supported from version 4.13 to 4.15. See also the Red Hat Developer Hub Life Cycle . Containers are available for the following CPU architectures: AMD64 and Intel 64 (x86_64) Prerequisites You are logged in as an administrator on the OpenShift Container Platform web console. You have configured the appropriate roles and permissions within your project to create or access an application. For more information, see the Red Hat OpenShift Container Platform documentation on Building applications . Important For enhanced security, better control over the Operator lifecycle, and preventing potential privilege escalation, install the Red Hat Developer Hub Operator in a dedicated default rhdh-operator namespace. You can restrict other users' access to the Operator resources through role bindings or cluster role bindings. You can also install the Operator in another namespace by creating the necessary resources, such as an Operator group. For more information, see Installing global Operators in custom namespaces . However, if the Red Hat Developer Hub Operator shares a namespace with other Operators, then it shares the same update policy as well, preventing the customization of the update policy. For example, if one Operator is set to manual updates, the Red Hat Developer Hub Operator update policy is also set to manual. For more information, see Colocation of Operators in a namespace . Procedure In the Administrator perspective of the OpenShift Container Platform web console, click Operators > OperatorHub . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub Operator card. On the Red Hat Developer Hub Operator page, click Install . 
On the Install Operator page, use the Update channel drop-down menu to select the update channel that you want to use: The fast channel provides y-stream (x.y) and z-stream (x.y.z) updates, for example, updating from version 1.1 to 1.2, or from 1.1.0 to 1.1.1. Important The fast channel includes all of the updates available for a particular version. Any update might introduce unexpected changes in your Red Hat Developer Hub deployment. Check the release notes for details about any potentially breaking changes. The fast-1.1 channel only provides z-stream updates, for example, updating from version 1.1.1 to 1.1.2. If you want to update the Red Hat Developer Hub y-version in the future, for example, updating from 1.1 to 1.2, you must switch to the fast channel manually. On the Install Operator page, choose the Update approval strategy for the Operator: If you choose the Automatic option, the Operator is updated without requiring manual confirmation. If you choose the Manual option, a notification opens when a new update is released in the update channel. The update must be manually approved by an administrator before installation can begin. Click Install . Verification To view the installed Red Hat Developer Hub Operator, click View Operator . Additional resources Deploying Red Hat Developer Hub on OpenShift Container Platform with the Operator Installing from OperatorHub using the web console 1.1.2. Deploying Red Hat Developer Hub on OpenShift Container Platform with the Operator As a developer, you can deploy a Red Hat Developer Hub instance on OpenShift Container Platform by using the Developer Catalog in the Red Hat OpenShift Container Platform web console. This deployment method uses the Red Hat Developer Hub Operator. Prerequisites A cluster administrator has installed the Red Hat Developer Hub Operator. For more information, see Section 1.1.1, "Installing the Red Hat Developer Hub Operator" . You have added a custom configuration file to OpenShift Container Platform. For more information, see Adding a custom configuration file to OpenShift Container Platform . Procedure Create a project in OpenShift Container Platform for your Red Hat Developer Hub instance, or select an existing project. Tip For more information about creating a project in OpenShift Container Platform, see Creating a project by using the web console in the Red Hat OpenShift Container Platform documentation. From the Developer perspective on the OpenShift Container Platform web console, click +Add . From the Developer Catalog panel, click Operator Backed . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub card. Click Create . Add custom configurations for the Red Hat Developer Hub instance. On the Create Backstage page, click Create Verification After the pods are ready, you can access the Red Hat Developer Hub platform by opening the URL. Confirm that the pods are ready by clicking the pod in the Topology view and confirming the Status in the Details panel. The pod status is Active when the pod is ready. From the Topology view, click the Open URL icon on the Developer Hub pod. Additional resources OpenShift Container Platform - Building applications overview 1.2. Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart You can install Red Hat Developer Hub on OpenShift Container Platform by using the Helm chart with one of the following methods: The OpenShift Container Platform console The Helm CLI 1.2.1. 
Deploying Developer Hub from the OpenShift Container Platform web console with the Helm Chart You can use a Helm chart to install Developer Hub on the Red Hat OpenShift Container Platform web console. Helm is a package manager on OpenShift Container Platform that provides the following features: Applies regular application updates using custom hooks Manages the installation of complex applications Provides charts that you can host on public and private servers Supports rolling back to application versions The Red Hat Developer Hub Helm chart is available in the Helm catalog on OpenShift Dedicated and OpenShift Container Platform. Prerequisites You are logged in to your OpenShift Container Platform account. A user with the OpenShift Container Platform admin role has configured the appropriate roles and permissions within your project to create an application. For more information about OpenShift Container Platform roles, see Using RBAC to define and apply permissions . You have created a project in OpenShift Container Platform. For more information about creating a project in OpenShift Container Platform, see Red Hat OpenShift Container Platform documentation . Procedure From the Developer perspective on the Developer Hub web console, click +Add . From the Developer Catalog panel, click Helm Chart . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub card. From the Red Hat Developer Hub page, click Create . From your cluster, copy the OpenShift Container Platform router host (for example: apps.<clusterName>.com ). Select the radio button to configure the Developer Hub instance with either the form view or YAML view. The Form view is selected by default. Using Form view To configure the instance with the Form view, go to Root Schema global Enable service authentication within Backstage instance and paste your OpenShift Container Platform router host into the field on the form. Using YAML view To configure the instance with the YAML view, paste your OpenShift Container Platform router hostname in the global.clusterRouterBase parameter value as shown in the following example: global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations Edit the other values if needed. Note The information about the host is copied and can be accessed by the Developer Hub backend. When an OpenShift Container Platform route is generated automatically, the host value for the route is inferred and the same host information is sent to the Developer Hub. Also, if the Developer Hub is present on a custom domain by setting the host manually using values, the custom host takes precedence. Click Create and wait for the database and Developer Hub to start. Click the Open URL icon to start using the Developer Hub platform. Note Your developer-hub pod might be in a CrashLoopBackOff state if the Developer Hub container cannot access the configuration files. This error is indicated by the following log: Loaded config from app-config-from-configmap.yaml, env ... 2023-07-24T19:44:46.223Z auth info Configuring "database" as KeyStore provider type=plugin Backend failed to start up Error: Missing required config value at 'backend.database.client' To resolve the error, verify the configuration files. 1.2.2. Deploying Developer Hub on OpenShift Container Platform with the Helm CLI You can use the Helm CLI to install Red Hat Developer Hub on Red Hat OpenShift Container Platform. 
Prerequisites You have installed the OpenShift CLI ( oc ) on your workstation. You are logged in to your OpenShift Container Platform account. A user with the OpenShift Container Platform admin role has configured the appropriate roles and permissions within your project to create an application. For more information about OpenShift Container Platform roles, see Using RBAC to define and apply permissions . You have created a project in OpenShift Container Platform. For more information about creating a project in OpenShift Container Platform, see Red Hat OpenShift Container Platform documentation . You have installed the Helm CLI tool. Procedure Create and activate the <rhdh> OpenShift Container Platform project: Install the Red Hat Developer Hub Helm chart: Configure your Developer Hub Helm chart instance with the Developer Hub database password and router base URL values from your OpenShift Container Platform cluster: Display the running Developer Hub instance URL: Verification Open the running Developer Hub instance URL in your browser to use Developer Hub. Additional resources Installing Helm | [
"global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations",
"Loaded config from app-config-from-configmap.yaml, env 2023-07-24T19:44:46.223Z auth info Configuring \"database\" as KeyStore provider type=plugin Backend failed to start up Error: Missing required config value at 'backend.database.client'",
"NAMESPACE=<emphasis><rhdh></emphasis> new-project USD{NAMESPACE} || oc project USD{NAMESPACE}",
"helm upgrade redhat-developer-hub -i https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.2.6/redhat-developer-hub-1.2.6.tgz",
"PASSWORD=USD(oc get secret redhat-developer-hub-postgresql -o jsonpath=\"{.data.password}\" | base64 -d) CLUSTER_ROUTER_BASE=USD(oc get route console -n openshift-console -o=jsonpath='{.spec.host}' | sed 's/^[^.]*\\.//') helm upgrade redhat-developer-hub -i \"https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.2.6/redhat-developer-hub-1.2.6.tgz\" --set global.clusterRouterBase=\"USDCLUSTER_ROUTER_BASE\" --set global.postgresql.auth.password=\"USDPASSWORD\"",
"echo \"https://redhat-developer-hub-USDNAMESPACE.USDCLUSTER_ROUTER_BASE\""
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/installing_red_hat_developer_hub_on_openshift_container_platform/assembly-install-rhdh-ocp |
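After the helm upgrade command above completes, a few read-only commands can confirm that the release is deployed and the route is reachable; a sketch, reusing the NAMESPACE variable and release name from the procedure:
helm status redhat-developer-hub
oc get pods -n "$NAMESPACE"
oc get route -n "$NAMESPACE"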
10.5. Statistical Information | 10.5. Statistical Information FS-Cache also keeps track of general statistical information. To view this information, use: FS-Cache statistics includes information on decision points and object counters. For more information, see the following kernel document: /usr/share/doc/kernel-doc- version /Documentation/filesystems/caching/fscache.txt | [
"cat /proc/fs/fscache/stats"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/fscachestats |
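To watch the counters change over time rather than reading them once, something like the following works; the counter labels grepped for come from the stats file and may vary slightly between kernel versions.
watch -n 5 cat /proc/fs/fscache/stats
# Or pick out a few counter groups of interest
grep -E '^(Objects|Acquire|Lookups|Retrvls)' /proc/fs/fscache/stats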
Chapter 2. Installing the MTA plugin for IntelliJ IDEA | Chapter 2. Installing the MTA plugin for IntelliJ IDEA You can install the MTA plugin in the Ultimate and the Community Edition releases of IntelliJ IDEA. Prerequisites The following are the prerequisites for the Migration Toolkit for Applications (MTA) installation: Java Development Kit (JDK) is installed. MTA supports the following JDKs: OpenJDK 11 OpenJDK 17 Oracle JDK 11 Oracle JDK 17 Eclipse TemurinTM JDK 11 Eclipse TemurinTM JDK 17 8 GB RAM macOS installation: the value of maxproc must be 2048 or greater. The latest version of mta-cli from the MTA download page Procedure In IntelliJ IDEA, click the Plugins tab on the Welcome screen. Enter Migration Toolkit for Applications in the Search field on the Marketplace tab. Select the Migration Toolkit for Applications (MTA) by Red Hat plugin and click Install . The plugin is listed on the Installed tab. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/intellij_idea_plugin_guide/intellij-idea-plugin-extension_idea-plugin-guide |
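A quick way to confirm the prerequisites above before installing the plugin; a sketch only, with the macOS check guarded so it is skipped on other platforms.
java -version                 # should report a supported JDK 11 or 17
if [ "$(uname)" = "Darwin" ]; then
    sysctl -n kern.maxproc    # must be 2048 or greater on macOS
fi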
Chapter 9. Viewing logs for a resource | Chapter 9. Viewing logs for a resource You can view the logs for various resources, such as builds, deployments, and pods by using the OpenShift CLI (oc) and the web console. Note Resource logs are a default feature that provides limited log viewing capability. To enhance your log retrieving and viewing experience, it is recommended that you install OpenShift Logging . The logging subsystem aggregates all the logs from your OpenShift Container Platform cluster, such as node system audit logs, application container logs, and infrastructure logs, into a dedicated log store. You can then query, discover, and visualize your log data through the Kibana interface . Resource logs do not access the logging subsystem log store. 9.1. Viewing resource logs You can view the log for various resources in the OpenShift CLI (oc) and web console. Logs read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI (oc). Procedure (UI) In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . Procedure (CLI) View the log for a specific pod: $ oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. For example: $ oc logs ruby-58cd97df55-mww7r $ oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: $ oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: $ oc logs deployment/ruby The contents of log files are printed out. | [
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/vewing-resource-logs |
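Beyond the basic forms above, oc logs accepts standard filtering flags; a few illustrative invocations, with the pod name taken from the earlier examples:
oc logs ruby-58cd97df55-mww7r --tail=50        # only the last 50 lines
oc logs ruby-58cd97df55-mww7r --since=10m      # only entries from the last 10 minutes
oc logs ruby-58cd97df55-mww7r --previous       # logs from the previous container instance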
Chapter 12. Volume Snapshots | Chapter 12. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 12.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 12.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. 
Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 12.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot click Action menu (...) Delete Volume Snapshot . Verfication steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/volume-snapshots_rhodf |
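The same snapshot operations can be driven from the CLI instead of the web console by creating a VolumeSnapshot resource directly; a minimal sketch, in which the namespace, PVC name, and snapshot class name are placeholders that must match your environment:
cat <<'EOF' | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mypvc-snapshot
  namespace: my-project
spec:
  volumeSnapshotClassName: <snapshot-class-name>
  source:
    persistentVolumeClaimName: mypvc
EOF
# Wait for the READYTOUSE column to become true
oc get volumesnapshot mypvc-snapshot -n my-project -w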