title | content | commands | url |
---|---|---|---|
Chapter 9. NUMA | Chapter 9. NUMA 9.1. Introduction Historically, all memory on x86 systems was equally accessible by all CPUs. In this model, known as Uniform Memory Access (UMA), access times are the same no matter which CPU performs the operation. This is no longer the case with recent x86 processors. In Non-Uniform Memory Access (NUMA), system memory is divided into zones (called nodes), which are allocated to particular CPUs or sockets. Access to memory that is local to a CPU is faster than access to memory connected to remote CPUs on that system. This chapter describes memory allocation and NUMA tuning configurations in virtualized environments. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-numa |
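The introduction above describes NUMA topology in general terms. As a quick illustration that is not part of the original chapter, the node layout of a host can be inspected with the numactl utility (assuming the numactl package is installed); the exact output depends on the hardware:

```
# List the NUMA nodes, the CPUs assigned to each node, per-node memory sizes,
# and the inter-node distance table. A UMA machine reports a single node 0.
numactl --hardware

# Show the NUMA policy in effect for the current shell
# (policy, preferred node, and the CPUs/nodes it is allowed to use).
numactl --show
```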
Chapter 14. Using bound service account tokens | Chapter 14. Using bound service account tokens You can use bound service account tokens, which improves the ability to integrate with cloud provider identity access management (IAM) services, such as AWS IAM. 14.1. About bound service account tokens You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API. 14.2. Configuring bound service account tokens using volume projection You can configure pods to request bound service account tokens by using volume projection. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have created a service account. This procedure assumes that the service account is named build-robot . Procedure Optional: Set the service account issuer. This step is typically not required if the bound tokens are used only within the cluster. Important If you change the service account issuer to a custom one, the service account issuer is still trusted for the 24 hours. You can force all holders to request a new bound token either by manually restarting all pods in the cluster or by performing a rolling node restart. Before performing either action, wait for a new revision of the Kubernetes API server pods to roll out with your service account issuer changes. Edit the cluster Authentication object: USD oc edit authentications cluster Set the spec.serviceAccountIssuer field to the desired service account issuer value: spec: serviceAccountIssuer: https://test.default.svc 1 1 This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is https://kubernetes.default.svc . Save the file to apply the changes. Wait for a new revision of the Kubernetes API server pods to roll out. It can take several minutes for all nodes to update to the new revision. Run the following command: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 Optional: Force the holder to request a new bound token either by performing a rolling node restart or by manually restarting all pods in the cluster. Perform a rolling node restart: Warning It is not recommended to perform a rolling node restart if you have custom workloads running on your cluster, because it can cause a service interruption. Instead, manually restart all pods in the cluster. Restart nodes sequentially. Wait for the node to become fully available before restarting the node. See Rebooting a node gracefully for instructions on how to drain, restart, and mark a node as schedulable again. 
Manually restart all pods in the cluster: Warning Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. Run the following command: USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Configure a pod to use a bound service account token by using volume projection. Create a file called pod-projected-svc-token.yaml with the following contents: apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4 1 A reference to an existing service account. 2 The path relative to the mount point of the file to project the token into. 3 Optionally set the expiration of the service account token, in seconds. The default is 3600 seconds (1 hour) and must be at least 600 seconds (10 minutes). The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. 4 Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server. Create the pod: USD oc create -f pod-projected-svc-token.yaml The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration. The application that uses the bound token must handle reloading the token when it rotates. The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours. Additional resources Rebooting a node gracefully | [
"oc edit authentications cluster",
"spec: serviceAccountIssuer: https://test.default.svc 1",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4",
"oc create -f pod-projected-svc-token.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/authentication_and_authorization/bound-service-account-tokens |
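The chapter above notes that an application consuming a bound token must reload it when the kubelet rotates it. The following sketch is not from the OpenShift documentation; it assumes the mount path /var/run/secrets/tokens/vault-token used in the pod example and simply re-reads the projected file before each request so a rotated token is always picked up. The endpoint URL is a placeholder:

```
#!/usr/bin/env bash
# Path of the projected token, matching the serviceAccountToken volume in the pod spec above.
TOKEN_FILE=/var/run/secrets/tokens/vault-token

call_with_bound_token() {
  # Read the token fresh on every call instead of caching it for the process lifetime,
  # so the value refreshed by the kubelet is used automatically.
  local token
  token=$(cat "${TOKEN_FILE}")
  curl --silent --header "Authorization: Bearer ${token}" "$1"
}

# Placeholder endpoint; replace with the service that validates the "vault" audience.
call_with_bound_token "https://service.example.com/api"
```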
Chapter 8. Writing a custom SELinux policy | Chapter 8. Writing a custom SELinux policy To run your applications confined by SELinux, you must write and use a custom policy. 8.1. Custom SELinux policies and related tools An SELinux security policy is a collection of SELinux rules. A policy is a core component of SELinux and is loaded into the kernel by SELinux user-space tools. The kernel enforces the use of an SELinux policy to evaluate access requests on the system. By default, SELinux denies all requests except for requests that correspond to the rules specified in the loaded policy. Each SELinux policy rule describes an interaction between a process and a system resource: You can read this example rule as: The Apache process can read its logging file . In this rule, apache_process and apache_log are labels . An SELinux security policy assigns labels to processes and defines relations to system resources. This way, a policy maps operating-system entities to the SELinux layer. SELinux labels are stored as extended attributes of file systems, such as ext2 . You can list them using the getfattr utility or a ls -Z command, for example: Where system_u is an SELinux user, object_r is an example of the SELinux role, and passwd_file_t is an SELinux domain. The default SELinux policy provided by the selinux-policy packages contains rules for applications and daemons that are parts of Red Hat Enterprise Linux 8 and are provided by packages in its repositories. Applications not described in this distribution policy are not confined by SELinux. To change this, you have to modify the policy using a policy module, which contains additional definitions and rules. In Red Hat Enterprise Linux 8, you can query the installed SELinux policy and generate new policy modules using the sepolicy tool. Scripts that sepolicy generates together with the policy modules always contain a command using the restorecon utility. This utility is a basic tool for fixing labeling problems in a selected part of a file system. Additional resources sepolicy(8) and getfattr(1) man pages on your system Quick start to write a custom SELinux policy Knowledgebase article 8.2. Creating and enforcing an SELinux policy for a custom application You can confine applications by SELinux to increase the security of host systems and users' data. Because each application has specific requirements, modify this example procedure for creating an SELinux policy that confines a simple daemon according to your use case. Prerequisites The selinux-policy-devel package and its dependencies are installed on your system. Procedure For this example procedure, prepare a simple daemon that opens the /var/log/messages file for writing: Create a new file, and open it in a text editor of your choice: Insert the following code: #include <unistd.h> #include <stdio.h> FILE *f; int main(void) { while(1) { f = fopen("/var/log/messages","w"); sleep(5); fclose(f); } } Compile the file: Create a systemd unit file for your daemon: Install and start the daemon: Check that the new daemon is not confined by SELinux: Generate a custom policy for the daemon: Rebuild the system policy with the new policy module using the setup script created by the command: Note that the setup script relabels the corresponding part of the file system using the restorecon command: Restart the daemon, and check that it now runs confined by SELinux: Because the daemon is now confined by SELinux, SELinux also prevents it from accessing /var/log/messages . 
Display the corresponding denial message: You can get additional information also using the sealert tool: Use the audit2allow tool to suggest changes: Because rules suggested by audit2allow can be incorrect for certain cases, use only a part of its output to find the corresponding policy interface. Inspect the logging_write_generic_logs(mydaemon_t) macro with the macro-expander tool, to see all allow rules the macro provides: In this case, you can use the suggested interface, because it only provides read and write access to log files and their parent directories. Add the corresponding rule to your type enforcement file: Alternatively, you can add this rule instead of using the interface: Reinstall the policy: Verification Check that your application runs confined by SELinux, for example: Verify that your custom application does not cause any SELinux denials: Additional resources sepolgen(8) , ausearch(8) , audit2allow(1) , audit2why(1) , sealert(8) , and restorecon(8) man pages on your system Quick start to write a custom SELinux policy Knowledgebase article 8.3. Additional resources SELinux Policy Workshop | [
"ALLOW apache_process apache_log:FILE READ;",
"ls -Z /etc/passwd system_u:object_r:passwd_file_t:s0 /etc/passwd",
"vi mydaemon.c",
"#include <unistd.h> #include <stdio.h> FILE *f; int main(void) { while(1) { f = fopen(\"/var/log/messages\",\"w\"); sleep(5); fclose(f); } }",
"gcc -o mydaemon mydaemon.c",
"vi mydaemon.service [Unit] Description=Simple testing daemon [Service] Type=simple ExecStart=/usr/local/bin/mydaemon [Install] WantedBy=multi-user.target",
"cp mydaemon /usr/local/bin/ cp mydaemon.service /usr/lib/systemd/system systemctl start mydaemon systemctl status mydaemon ● mydaemon.service - Simple testing daemon Loaded: loaded (/usr/lib/systemd/system/mydaemon.service; disabled; vendor preset: disabled) Active: active (running) since Sat 2020-05-23 16:56:01 CEST; 19s ago Main PID: 4117 (mydaemon) Tasks: 1 Memory: 148.0K CGroup: /system.slice/mydaemon.service └─4117 /usr/local/bin/mydaemon May 23 16:56:01 localhost.localdomain systemd[1]: Started Simple testing daemon.",
"ps -efZ | grep mydaemon system_u:system_r:unconfined_service_t:s0 root 4117 1 0 16:56 ? 00:00:00 /usr/local/bin/mydaemon",
"sepolicy generate --init /usr/local/bin/mydaemon Created the following files: /home/example.user/mysepol/mydaemon.te # Type Enforcement file /home/example.user/mysepol/mydaemon.if # Interface file /home/example.user/mysepol/mydaemon.fc # File Contexts file /home/example.user/mysepol/mydaemon_selinux.spec # Spec file /home/example.user/mysepol/mydaemon.sh # Setup Script",
"./mydaemon.sh Building and Loading Policy + make -f /usr/share/selinux/devel/Makefile mydaemon.pp Compiling targeted mydaemon module Creating targeted mydaemon.pp policy package rm tmp/mydaemon.mod.fc tmp/mydaemon.mod + /usr/sbin/semodule -i mydaemon.pp",
"restorecon -v /usr/local/bin/mydaemon /usr/lib/systemd/system",
"systemctl restart mydaemon ps -efZ | grep mydaemon system_u:system_r:mydaemon_t:s0 root 8150 1 0 17:18 ? 00:00:00 /usr/local/bin/mydaemon",
"ausearch -m AVC -ts recent type=AVC msg=audit(1590247112.719:5935): avc: denied { open } for pid=8150 comm=\"mydaemon\" path=\"/var/log/messages\" dev=\"dm-0\" ino=2430831 scontext=system_u:system_r:mydaemon_t:s0 tcontext=unconfined_u:object_r:var_log_t:s0 tclass=file permissive=1",
"sealert -l \"*\" SELinux is preventing mydaemon from open access on the file /var/log/messages. ***** Plugin catchall (100. confidence) suggests ************************** If you believe that mydaemon should be allowed open access on the messages file by default. Then you should report this as a bug. You can generate a local policy module to allow this access. Do allow this access for now by executing: ausearch -c 'mydaemon' --raw | audit2allow -M my-mydaemon semodule -X 300 -i my-mydaemon.pp Additional Information: Source Context system_u:system_r:mydaemon_t:s0 Target Context unconfined_u:object_r:var_log_t:s0 Target Objects /var/log/messages [ file ] Source mydaemon ...",
"ausearch -m AVC -ts recent | audit2allow -R require { type mydaemon_t; } #============= mydaemon_t ============== logging_write_generic_logs(mydaemon_t)",
"macro-expander \"logging_write_generic_logs(mydaemon_t)\" allow mydaemon_t var_t:dir { getattr search open }; allow mydaemon_t var_log_t:dir { getattr search open read lock ioctl }; allow mydaemon_t var_log_t:dir { getattr search open }; allow mydaemon_t var_log_t:file { open { getattr write append lock ioctl } }; allow mydaemon_t var_log_t:dir { getattr search open }; allow mydaemon_t var_log_t:lnk_file { getattr read };",
"echo \"logging_write_generic_logs(mydaemon_t)\" >> mydaemon.te",
"echo \"allow mydaemon_t var_log_t:file { open write getattr };\" >> mydaemon.te",
"./mydaemon.sh Building and Loading Policy + make -f /usr/share/selinux/devel/Makefile mydaemon.pp Compiling targeted mydaemon module Creating targeted mydaemon.pp policy package rm tmp/mydaemon.mod.fc tmp/mydaemon.mod + /usr/sbin/semodule -i mydaemon.pp",
"ps -efZ | grep mydaemon system_u:system_r:mydaemon_t:s0 root 8150 1 0 17:18 ? 00:00:00 /usr/local/bin/mydaemon",
"ausearch -m AVC -ts recent <no matches>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_selinux/writing-a-custom-selinux-policy_using-selinux |
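As a supplement that is not part of the chapter itself, the following commands can help verify and iterate on the custom module after ./mydaemon.sh loads it. They assume the policycoreutils-python-utils and setools-console packages are installed:

```
# Confirm the custom module is installed.
semodule -l | grep mydaemon

# List the allow rules now granted to the mydaemon_t domain on log files,
# to check that only the intended access was added.
sesearch -A -s mydaemon_t -t var_log_t -c file

# While developing the policy, the single domain can be made permissive so that
# denials are logged but not enforced; remove the flag again when finished.
semanage permissive -a mydaemon_t
semanage permissive -d mydaemon_t
```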
Appendix A. API Response Codes | Appendix A. API Response Codes The Red Hat Satellite 6 API provides HTTP response status codes for API calls. The following codes are common for all resources in the Satellite API. Table A.1. API Response Codes Response Explanation 200 OK For a successful request action: show, index, update, or delete (GET, PUT, DELETE requests). 201 Created For a successful create action (POST request). 301 Moved Permanently Redirect when Satellite is restricted to use HTTPS and HTTP is attempted. 400 Bad Request A required parameter is missing or the search query has invalid syntax. 401 Unauthorized Failed to authorize the user (for example, incorrect credentials). 403 Forbidden The user has insufficient permissions to perform the action or read the resource, or the action is unsupported in general. 404 Not Found The record with the given ID does not exist. It can appear in show and delete actions when the requested record does not exist; or in create, update and delete actions when one of the associated records does not exist. 409 Conflict Could not delete the record due to existing dependencies (for example, host groups with hosts). 415 Unsupported Media Type The content type of the HTTP request is not JSON. 422 Unprocessable Entity Failed to create an entity due to some validation errors. Applies to create or update actions only. 500 Internal Server Error Unexpected internal server error. 503 Service Unavailable The server is not running. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/api_guide/appe-response_codes |
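The table above only lists the status codes; as an illustrative example (the hostname, credentials, and endpoint below are placeholders, not values from the appendix), a script can branch on the code returned by a Satellite API call:

```
# Capture just the HTTP status code of an API request against the Satellite server.
STATUS=$(curl --silent --output /dev/null --write-out '%{http_code}' \
  --user admin:changeme --header 'Accept: application/json' \
  https://satellite.example.com/api/v2/hosts)

case "$STATUS" in
  200) echo "Request succeeded" ;;
  401) echo "Authentication failed - check the supplied credentials" ;;
  403) echo "Insufficient permissions for this resource or action" ;;
  404) echo "The requested record does not exist" ;;
  *)   echo "Unexpected response code: $STATUS" ;;
esac
```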
DM Multipath | DM Multipath Red Hat Enterprise Linux 4 DM Multipath Configuration and Administration Edition 1.0 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/index |
Chapter 1. Support policy for Red Hat build of OpenJDK | Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these versions remain similar to Oracle JDK versions that are designated as long-term support (LTS). A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached end of life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.3/rn-openjdk-support-policy |
Chapter 1. OpenShift image registry overview | Chapter 1. OpenShift image registry overview Red Hat OpenShift Service on AWS can build images from your source code, deploy them, and manage their lifecycle. It provides an internal, integrated container image registry that can be deployed in your Red Hat OpenShift Service on AWS environment to locally manage images. This overview contains reference information and links for registries commonly used with Red Hat OpenShift Service on AWS, with a focus on the OpenShift image registry. 1.1. Glossary of common terms for OpenShift image registry This glossary defines the common terms that are used in the registry content. container Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers in a data center, a public or private cloud, or your local host. Image Registry Operator The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location. image repository An image repository is a collection of related container images and tags identifying images. mirror registry The mirror registry is a registry that holds the mirror of Red Hat OpenShift Service on AWS images. namespace A namespace isolates groups of resources within a single cluster. pod The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node. private registry A registry is a server that implements the container image registry API. A private registry is a registry that requires authentication to allow users access its contents. public registry A registry is a server that implements the container image registry API. A public registry is a registry that serves its contently publicly. Quay.io A public Red Hat Quay Container Registry instance provided and maintained by Red Hat, which serves most of the container images and Operators to Red Hat OpenShift Service on AWS clusters. OpenShift image registry OpenShift image registry is the registry provided by Red Hat OpenShift Service on AWS to manage images. registry authentication To push and pull images to and from private image repositories, the registry needs to authenticate its users with credentials. route Exposes a service to allow for network access to pods from users and applications outside the Red Hat OpenShift Service on AWS instance. scale down To decrease the number of replicas. scale up To increase the number of replicas. service A service exposes a running application on a set of pods. 1.2. Integrated OpenShift image registry Red Hat OpenShift Service on AWS provides a built-in container image registry that runs as a standard workload on the cluster. The registry is configured and managed by an infrastructure Operator. It provides an out-of-the-box solution for users to manage the images that run their workloads, and runs on top of the existing cluster infrastructure. This registry can be scaled up or down like any other cluster workload and does not require specific infrastructure provisioning. In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources. The registry is typically used as a publication target for images built on the cluster, as well as being a source of images for workloads running on the cluster. 
When a new image is pushed to the registry, the cluster is notified of the new image and other components can react to and consume the updated image. Image data is stored in two locations. The actual image data is stored in a configurable storage location, such as cloud storage or a filesystem volume. The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and image streams. Additional resources Image Registry Operator in Red Hat OpenShift Service on AWS 1.3. Third-party registries Red Hat OpenShift Service on AWS can create containers using images from third-party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift image registry. In this situation, Red Hat OpenShift Service on AWS will fetch tags from the remote registry upon image stream creation. To refresh the fetched tags, run oc import-image <stream> . When new images are detected, the previously described build and deployment reactions occur. 1.3.1. Authentication Red Hat OpenShift Service on AWS can communicate with registries to access private image repositories using credentials supplied by the user. This allows Red Hat OpenShift Service on AWS to push and pull images to and from private repositories. 1.3.1.1. Registry authentication with Podman Some container image registries require access authorization. Podman is an open source tool for managing containers and container images and interacting with image registries. You can use Podman to authenticate your credentials, pull the registry image, and store local images in a local file system. The following is a generic example of authenticating the registry with Podman. Procedure Use the Red Hat Ecosystem Catalog to search for specific container images from the Red Hat Repository and select the required image. Click Get this image to find the command for your container image. Log in by running the following command and entering your username and password to authenticate: USD podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password> Download the image and save it locally by running the following command: USD podman pull registry.redhat.io/<repository_name> 1.4. Red Hat Quay registries If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images. Visit the Quay.io site to set up your own hosted Quay registry account. After that, follow the Quay Tutorial to log in to the Quay registry and start managing your images. You can access your Red Hat Quay registry from Red Hat OpenShift Service on AWS like any remote container image registry. Additional resources Red Hat Quay product documentation 1.5. Authentication enabled Red Hat registry All container images available through the Container images section of the Red Hat Ecosystem Catalog are hosted on an image registry, registry.redhat.io . The registry, registry.redhat.io , requires authentication for access to images and hosted content on Red Hat OpenShift Service on AWS. Following the move to the new registry, the existing registry will be available for a period of time. 
Note Red Hat OpenShift Service on AWS pulls images from registry.redhat.io , so you must configure your cluster to use it. The new registry uses standard OAuth mechanisms for authentication, with the following methods: Authentication token. Tokens, which are generated by administrators, are service accounts that give systems the ability to authenticate against the container image registry. Service accounts are not affected by changes in user accounts, so the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters. Web username and password. This is the standard set of credentials you use to log in to resources such as access.redhat.com . While it is possible to use this authentication method with Red Hat OpenShift Service on AWS, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside Red Hat OpenShift Service on AWS. You can use podman login with your credentials, either username and password or authentication token, to access content on the new registry. All image streams point to the new registry, which uses the installation pull secret to authenticate. You must place your credentials in either of the following places: openshift namespace . Your credentials must exist in the openshift namespace so that the image streams in the openshift namespace can import. Your host . Your credentials must exist on your host because Kubernetes uses the credentials from your host when it goes to pull images. Additional resources Registry service accounts | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/registry/registry-overview-1 |
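Section 1.3 above mentions refreshing tags from a third-party registry with oc import-image. The short sequence below is an illustrative sketch rather than part of the original chapter; the image stream name mystream and the namespace myproject are placeholders:

```
# Re-import the tags of an image stream that tracks a remote, third-party registry.
oc import-image mystream -n myproject

# Review which tags were fetched and when each one was last updated.
oc describe imagestream mystream -n myproject
```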
2. Working with ISO Images | 2. Working with ISO Images This section will explain how to extract an ISO image provided by Red Hat, and how to create a new boot image containing changes you made following other procedures in this book. 2.1. Extracting Red Hat Enterprise Linux Boot Images Before you start customizing the installer, you must download Red Hat-provided boot images. These images will be required to perform all procedures described in this book. You can obtain Red Hat Enterprise Linux 7 boot media from the Red Hat Customer Portal after logging in to your account. Your account must have sufficient entitlements to download Red Hat Enterprise Linux 7 images. Download either the Binary DVD or Boot ISO image from the Customer Portal. Either of these can be modified using procedures in this guide; other available downloads, such as the KVM Guest Image or Supplementary DVD can not. The variant of the image (such as Server or ComputeNode ) does not matter in this case; any variant can be used. For detailed download instructions and description of the Binary DVD and Boot ISO downloads, see the Red Hat Enterprise Linux 7 Installation Guide . After your chosen iso image finishes downloading, follow the procedure below to extract its contents in order to prepare for their modification. Procedure 1. Extracting ISO Images Mount the downloaded image. Replace path/to/image.iso with the path to the downloaded ISO. Also make sure that the target directory ( /mnt/iso ) exists and nothing else is currently mounted there. Create a working directory - a directory where you want to place the contents of the ISO image. Copy all contents of the mounted image to your new working directory. Make sure to use the -p option to preserve file and directory permissions and ownership. Unmount the image. After you finish unpacking, the ISO image is extracted in your /tmp/ISO where you can modify its contents. Continue with Section 3, "Customizing the Boot Menu" or Section 5, "Developing Installer Add-ons" . Once you finish making changes, create a new, modified ISO image using the instructions in Section 2.3, "Creating Custom Boot Images" . 2.2. Creating a product.img File A product.img image file is an archive containing files which replace existing files or add new ones in the installer runtime. During boot, Anaconda loads this file from the images/ directory on the boot media. Then, it uses files present inside this file to replace identically named files in the installer's file system; this is necessary to customize the installer (for example, for replacing default images with custom ones). The product.img image must contain a directory structure identical to the installer. Specifically, two topics discussed in this guide require you to create a product image. The table below lists the correct locations inside the image file directory structure: Table 1. Locations of Add-ons and Anaconda Visuals Type of custom content File system location Pixmaps (logo, side bar, top bar, etc.) /usr/share/anaconda/pixmaps/ Banners for the installation progress screen /usr/share/anaconda/pixmaps/rnotes/en/ GUI stylesheet /usr/share/anaconda/anaconda-gtk.css Installclasses (for changing the product name) /run/install/product/pyanaconda/installclasses/ Anaconda add-ons /usr/share/anaconda/addons/ The procedure below explains how to create a valid product.img file. Procedure 2. 
Creating product.img Navigate to a working directory such as /tmp , and create a subdirectory named product/ : Create a directory structure which is identical to the location of the file you want to replace. For example, if you want to test an add-on, which belongs in the /usr/share/anaconda/addons directory on the installation system; create the same structure in your working directory: Note You can browse the installer's runtime file system by booting the installation, switching to virtual console 1 ( Ctrl + Alt + F1 ) and then switching to the second tmux window ( Ctrl + b 2 ). This opens a shell prompt which you can use to browse the file system. Place your customized files (in this example, custom add-on for Anaconda ) into the newly created directory: Repeat the two steps above (create a directory structure and move modified files into it) for every file you want to add to the installer. Create a .buildstamp file in the root of the directory which will become the product.img file. The .buildstamp file describes the system version and several other parameters. The following is an example of a .buildstamp file from Red Hat Enterprise Linux 7.4: Note the IsFinal parameter, which specifies whether the image is for a release (GA) version of the product ( True ), or a pre-release such as Alpha, Beta, or an internal milestone ( False ). Change into the product/ directory, and create the product.img archive: This creates a product.img file one level above the product/ directory. Move the product.img file to the images/ directory of the extracted ISO image. After finishing this procedure, your customizations are placed in the correct directory. You can continue with Section 2.3, "Creating Custom Boot Images" to create a new bootable ISO image with your changes included. The product.img file will be automatically loaded when starting the installer. Note Instead of adding the product.img file on the boot media, you can place this file into a different location and use the inst.updates= boot option at the boot menu to load it. In that case, the image file can have any name, and it can be placed in any location (USB flash drive, hard disk, HTTP, FTP or NFS server), as long as this location is reachable from the installation system. See the Red Hat Enterprise Linux 7 Installation Guide for more information about Anaconda boot options. 2.3. Creating Custom Boot Images When you finish customizing boot images provided by Red Hat, you must create a new image which includes changes you made. To do this, follow the procedure below. Procedure 3. Creating ISO Images Make sure that all of your changes are included in the working directory. For example, if you are testing an add-on, make sure to place the product.img in the images/ directory. Make sure your current working directory is the top-level directory of the extracted ISO image - e.g. /tmp/ISO/iso . Create the new ISO image using genisoimage : In the above example: Make sure that values for the -V , -volset , and -A options match the image's boot loader configuration, if you are using the LABEL= directive for options which require a location to load a file on the same disk. If your boot loader configuration ( isolinux/isolinux.cfg for BIOS and EFI/BOOT/grub.cfg for UEFI) uses the inst.stage2=LABEL= disk_label stanza to load the second stage of the installer from the same disk, then the disk labels must match. Important In boot loader configuration files, replace all spaces in disk labels with \x20 . 
For example, if you create an ISO image with a label of RHEL 7.1 , the boot loader configuration should use RHEL\x207.1 to refer to this label. Replace the value of the -o option ( -o ../NEWISO.iso ) with the file name of your new image. The value in the example will create the file NEWISO.iso in the directory above the current one. For more information about this command, see the genisoimage(1) man page. Implant an MD5 checksum into the image. Without performing this step, the image verification check (the rd.live.check option in the boot loader configuration) will fail and you will not be able to continue with the installation. In the above example, replace ../NEWISO.iso with the file name and location of the ISO image you created in the previous step. After finishing this procedure, you can write the new ISO image to physical media or a network server to boot it on physical hardware, or you can use it to start installing a virtual machine. See the Red Hat Enterprise Linux 7 Installation Guide for instructions on preparing boot media or a network server, and the Red Hat Enterprise Linux 7 Virtualization Getting Started Guide for instructions on creating virtual machines with ISO images. | [
"mount -t iso9660 -o loop path/to/image.iso /mnt/iso",
"mkdir /tmp/ISO",
"cp -pRf /mnt/iso /tmp/ISO",
"umount /mnt/iso",
"cd /tmp",
"mkdir product/",
"mkdir -p product/usr/share/anaconda/addons",
"cp -r ~/path/to/custom/addon/ product/usr/share/anaconda/addons/",
"[Main] Product=Red Hat Enterprise Linux Version=7.4 BugURL=https://bugzilla.redhat.com/ IsFinal=True UUID=201707110057.x86_64 [Compose] Lorax=19.6.92-1",
"cd product",
"find . | cpio -c -o | gzip -9cv > ../product.img",
"genisoimage -U -r -v -T -J -joliet-long -V \" RHEL-7.1 Server.x86_64 \" -volset \" RHEL-7.1 Server.x86_64 \" -A \" RHEL-7.1 Server.x86_64 \" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o ../NEWISO.iso .",
"implantisomd5 ../NEWISO.iso"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/anaconda_customization_guide/sect-iso-images |
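Section 2.3 notes that spaces in a disk label must be written as \x20 in the boot loader configuration. The fragment below is an illustrative isolinux/isolinux.cfg entry (not taken from the guide) matching the label used in the genisoimage example above; the equivalent inst.stage2=hd:LABEL=... stanza appears in EFI/BOOT/grub.cfg for UEFI boot:

```
# isolinux/isolinux.cfg (BIOS boot) - the label must match the -V value passed to
# genisoimage, with every space replaced by \x20.
label linux
  menu label ^Install Red Hat Enterprise Linux 7.1
  kernel vmlinuz
  append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-7.1\x20Server.x86_64 quiet
```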
Chapter 10. Integrating by using the syslog protocol | Chapter 10. Integrating by using the syslog protocol Syslog is an event logging protocol that applications use to send messages to a central location, such as a SIEM or a syslog collector, for data retention and security investigations. With Red Hat Advanced Cluster Security for Kubernetes, you can send alerts and audit events using the syslog protocol. Note Forwarding events by using the syslog protocol requires Red Hat Advanced Cluster Security for Kubernetes version 3.0.52 or newer. When you use the syslog integration, Red Hat Advanced Cluster Security for Kubernetes forwards both the violation alerts that you configure and all audit events. Currently, Red Hat Advanced Cluster Security for Kubernetes only supports CEF (Common Event Format). The following steps represent a high-level workflow for integrating Red Hat Advanced Cluster Security for Kubernetes with a syslog events receiver: Set up a syslog events receiver to receive alerts. Use the receiver's address and port number to set up notifications in Red Hat Advanced Cluster Security for Kubernetes. After the configuration, Red Hat Advanced Cluster Security for Kubernetes automatically sends all violations and audit events to the configured syslog receiver. 10.1. Configuring syslog integration with Red Hat Advanced Cluster Security for Kubernetes Create a new syslog integration in Red Hat Advanced Cluster Security for Kubernetes (RHACS). Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Syslog . Click New Integration (add icon). Enter a name for Integration Name . Select the Logging Facility value from local0 through local7 . Enter your Receiver Host address and Receiver Port number. If you are using TLS, turn on the Use TLS toggle. If your syslog receiver uses a certificate that is not trusted, turn on the Disable TLS Certificate Validation (Insecure) toggle. Otherwise, leave this toggle off. Click Add new extra field to add extra fields. For example, if your syslog receiver accepts objects from multiple sources, type source and rhacs in the Key and Value fields. You can filter using the custom values in your syslog receiver to identify all alerts from RHACS. Select Test (checkmark icon) to send a test message to verify that the integration with your syslog receiver is working. Select Create (save icon) to save the configuration. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/integrate-using-syslog-protocol |
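To complement the procedure above, the fragment below sketches a minimal rsyslog receiver that could be used to test the integration. It is not part of the RHACS documentation; the file name, the 514/TCP port, and the log path are assumptions and must match whatever you enter as Receiver Host, Receiver Port, and Logging Facility in the RHACS notifier:

```
# Minimal test receiver: accept syslog over TCP and write local0 (the facility
# selected in the RHACS integration) to a dedicated file.
cat <<'EOF' > /etc/rsyslog.d/rhacs.conf
module(load="imtcp")
input(type="imtcp" port="514")
local0.* /var/log/rhacs-events.log
EOF
systemctl restart rsyslog

# Watch incoming RHACS violation alerts and audit events (CEF-formatted messages).
tail -f /var/log/rhacs-events.log
```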
Chapter 3. View OpenShift Data Foundation Topology | Chapter 3. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/viewing-odf-topology_rhodf |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormous scope of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/bare_metal_provisioning/making-open-source-more-inclusive |
Chapter 54. project | Chapter 54. project This chapter describes the commands under the project command. 54.1. project cleanup Clean resources associated with a project Usage: Table 54.1. Command arguments Value Summary -h, --help Show this help message and exit --dry-run List a project's resources --auth-project Delete resources of the project used to authenticate --project <project> Project to clean (name or id) --created-before <YYYY-MM-DDTHH24:MI:SS> Drop resources created before the given time --updated-before <YYYY-MM-DDTHH24:MI:SS> Drop resources updated before the given time --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 54.2. project create Create new project Usage: Table 54.2. Positional arguments Value Summary <project-name> New project name Table 54.3. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning the project (name or id) --parent <project> Parent of the project (name or id) --description <description> Project description --enable Enable project --disable Disable project --property <key=value> Add a property to <name> (repeat option to set multiple properties) --or-show Return existing project --immutable Make resource immutable. an immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) --tag <tag> Tag to be added to the project (repeat option to set multiple tags) Table 54.4. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 54.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.6. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 54.7. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 54.3. project delete Delete project(s) Usage: Table 54.8. Positional arguments Value Summary <project> Project(s) to delete (name or id) Table 54.9. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <project> (name or id) 54.4. project list List projects Usage: Table 54.10. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Filter projects by <domain> (name or id) --parent <parent> Filter projects whose parent is <parent> (name or id) --user <user> Filter projects by <user> (name or id) --my-projects List projects for the authenticated user. supersedes other filters. --long List additional fields in output --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc), repeat this option to specify multiple keys and directions. --tags <tag>[,<tag>,... ] List projects which have all given tag(s) (comma- separated list of tags) --tags-any <tag>[,<tag>,... ] List projects which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... 
] Exclude projects which have all given tag(s) (comma- separated list of tags) --not-tags-any <tag>[,<tag>,... ] Exclude projects which have any given tag(s) (comma- separated list of tags) Table 54.11. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 54.12. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 54.13. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.14. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 54.5. project purge Clean resources associated with a project Usage: Table 54.15. Command arguments Value Summary -h, --help Show this help message and exit --dry-run List a project's resources --keep-project Clean project resources, but don't delete the project --auth-project Delete resources of the project used to authenticate --project <project> Project to clean (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 54.6. project set Set project properties Usage: Table 54.16. Positional arguments Value Summary <project> Project to modify (name or id) Table 54.17. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set project name --domain <domain> Domain owning <project> (name or id) --description <description> Set project description --enable Enable project --disable Disable project --property <key=value> Set a property on <project> (repeat option to set multiple properties) --immutable Make resource immutable. an immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) --tag <tag> Tag to be added to the project (repeat option to set multiple tags) --clear-tags Clear tags associated with the project. specify both --tag and --clear-tags to overwrite current tags --remove-tag <tag> Tag to be deleted from the project (repeat option to delete multiple tags) 54.7. project show Display project details Usage: Table 54.18. Positional arguments Value Summary <project> Project to display (name or id) Table 54.19. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <project> (name or id) --parents Show the project's parents as a list --children Show project's subtree (children) as a list Table 54.20. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 54.21. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 54.22. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 54.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack project cleanup [-h] [--dry-run] (--auth-project | --project <project>) [--created-before <YYYY-MM-DDTHH24:MI:SS>] [--updated-before <YYYY-MM-DDTHH24:MI:SS>] [--project-domain <project-domain>]",
"openstack project create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--parent <project>] [--description <description>] [--enable | --disable] [--property <key=value>] [--or-show] [--immutable | --no-immutable] [--tag <tag>] <project-name>",
"openstack project delete [-h] [--domain <domain>] <project> [<project> ...]",
"openstack project list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--domain <domain>] [--parent <parent>] [--user <user>] [--my-projects] [--long] [--sort <key>[:<direction>]] [--tags <tag>[,<tag>,...]] [--tags-any <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-tags-any <tag>[,<tag>,...]]",
"openstack project purge [-h] [--dry-run] [--keep-project] (--auth-project | --project <project>) [--project-domain <project-domain>]",
"openstack project set [-h] [--name <name>] [--domain <domain>] [--description <description>] [--enable | --disable] [--property <key=value>] [--immutable | --no-immutable] [--tag <tag>] [--clear-tags] [--remove-tag <tag>] <project>",
"openstack project show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--parents] [--children] <project>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/project |
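As a brief usage illustration that is not part of the command reference above, the following sequence combines several of the documented options; the project name demo, the Default domain, and the tag team-a are placeholder values:

```
# Create a tagged project in the Default domain, list projects carrying the tag,
# then display the new project together with its parents.
openstack project create --domain Default --description "Demo project" --tag team-a demo
openstack project list --tags team-a --long
openstack project show demo --parents
```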
Using the Streams for Apache Kafka Console | Using the Streams for Apache Kafka Console Red Hat Streams for Apache Kafka 2.9 The Streams for Apache Kafka Console supports your deployment of Streams for Apache Kafka. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_console/index |
Part V. Troubleshooting and tips | Part V. Troubleshooting and tips | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/troubleshooting_and_tips |
7.19. btrfs-progs | 7.19. btrfs-progs 7.19.1. RHBA-2013:0456 - btrfs-progs bug fix and enhancement update Updated btrfs-progs packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The btrfs-progs packages provide user-space programs to create, check, modify, and correct any inconsistencies in a Btrfs file system. Note The btrfs-progs packages have been upgraded to upstream version 0.2, which provides a number of bug fixes and enhancements over the previous version, including support for slashes in file system labels and new commands "btrfs-find-root", "btrfs-restore", and "btrfs-zero-log". This update also modifies the btrfs-progs utility so that it is now built with the -fno-strict-aliasing option. (BZ# 865600 ) All users of btrfs-progs are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/btrfs-progs |
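For orientation only (this is not part of the advisory text), the newly mentioned recovery commands are typically invoked as shown below; the device path and destination directory are placeholders, and exact options vary between btrfs-progs versions:

```
# Locate candidate tree roots on a damaged Btrfs file system.
btrfs-find-root /dev/sdb1

# Copy recoverable files from the unmountable file system to another location.
btrfs-restore /dev/sdb1 /mnt/recovered

# Clear the log tree when a corrupted log prevents the file system from mounting.
btrfs-zero-log /dev/sdb1
```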
Chapter 7. Troubleshooting conversions | Chapter 7. Troubleshooting conversions This chapter lists troubleshooting resources and tips. 7.1. Troubleshooting resources To help you troubleshoot issues that can occur during the conversion process, review the log messages that are printed to the console and log files. Console Output By default, only info, warning, error, and critical log level messages are printed to the console output by the Convert2RHEL utility. To also print debug messages, use the --debug option with the convert2rhel command. Logs The /var/log/convert2rhel/convert2rhel.log file lists debug, info, warning, error, and critical messages. The /var/log/convert2rhel/rpm_va.log file lists all package files on the unconverted system that a user has modified. This output is generated by the rpm -Va command, which is run automatically unless the --no-rpm-va option is used with the convert2rhel command. 7.2. Fixing dependency errors During a conversion from a different Linux distribution to RHEL, certain packages might be installed without some of their dependencies. Prerequisites You have successfully completed the conversion to RHEL. See Converting to a RHEL system for more information. Procedure Identify dependencies errors: If the command displays no output, no further actions are required. To fix dependency errors, reinstall the affected packages. During this operation, the yum utility automatically installs missing dependencies. If the required dependencies are not provided by repositories available on the system, install those packages manually. 7.3. Troubleshooting issues with Red Hat Insights conversions The following issues might occur when using Red Hat Insights to convert to RHEL. 7.3.1. Missing systems in pre-conversion analysis task When running the Pre-conversion analysis for converting to RHEL task in Red Hat Insights, CentOS Linux 7 systems that appeared correctly in RHEL Inventory might not appear in the list of available systems to run the pre-conversion analysis on. This issue occurs when the Remote Host Configuration (RHC) is disconnected. Procedure Log in to the Red Hat Hybrid Cloud Console and go to Red Hat Insights > RHEL > Inventory > Systems . Select the affected system from the table. In the General Information tab, go to the System Status card and verify the RHC status: If the RHC status is Connected , RHC is connected correctly. If the RHC status is Not available , RHC is disconnected. Proceed to the step to reconnect RHC. Unregister the system in your terminal: To help with troubleshooting, set the RHC systemd service ( rhcd ) logging to the highest level: Register your system with Red Hat Insights and re-enable RHC in your terminal: Replace activation_key and organization_ID with the activation key and organization ID from the Red Hat Customer Portal. Verification Verify that you can select the system in the Pre-conversion analysis for converting to RHEL task. If the system still does not appear correctly, review error messages from rhcd and the insights-client tool: 7.3.2. Pre-conversion analysis task fails to complete After running the Pre-conversion analysis for converting to RHEL task, one or more of the systems can fail to generate a report with the error message Task failed to complete for an unknown reason. Retry this task at a later time. If this issue occurs, complete the steps below to troubleshoot. Procedure Verify if the affected system is unavailable, for example because of a network accessibility issue or because the system is shut off. 
Review the RHC systemd service ( rhcd ) for errors: Stop rhcd in your terminal: Set rhcd logging to the highest level: Restart rhcd : Review error messages posted by rhcd : Review the rhc-worker-script log file for errors: 7.4. Known issues and limitations The following issues and limitations are known to occur during the conversion: Systems that connect to the Internet using an HTTP proxy server cannot convert using Red Hat CDN or Satellite through RHSM by using the command line. To work around this problem, enable HTTP proxy for yum and then configure the HTTP proxy for RHSM: Configure yum to use an HTTP proxy. For more information, see the Red Hat Knowledgebase solution How to enable proxy settings for yum command on RHEL? Install the subscription-manager package: Download the Red Hat GPG key: Install a repository file for the client-tools repository that contains the subscription-manager package: conversions to RHEL 7: For conversions to RHEL 8: For conversions to RHEL 9: If you are converting to an earlier version of RHEL 8, for example, RHEL 8.5, update the USDreleasever value in the client-tools repository: Replace release_version with the correct release version, for example 8.5 or 8.8 . Install the following subscription-manager packages: Configure HTTP proxy for RHSM. For more information, see the Red Hat Knowledgebase solution How to configure HTTP Proxy for Red Hat Subscription Management . Register the system with RHSM: Replace organization_id and activation_key with the organization ID and activation key from the Red Hat Customer Portal. Remove the organization ID and activation key from the /etc/convert2rhel.ini file. Perform the conversion to RHEL: ( RHELC-559 ) UEFI systems with Secure Boot enabled are not supported for conversion. To work around this issue, complete the following steps: Disable Secure Boot before the conversion. For more information, see the Red Hat Knowledgebase solution convert2rhel fails on UEFI systems with Secure Boot enabled . Perform the conversion to RHEL. If converting from Oracle Linux 7 or Alma Linux 8, install the shim-x64 package: Re-enable Secure Boot. ( RHELC-138 ) If you are converting by using Red Hat Insights, running two RHC daemon (rhcd) processes simultaneously prevents the pre-conversion analysis from running as expected. To prevent this issue, run only one rhcd process at a time. ( HMS-2629 ) 7.5. Obtaining support If you experience problems during the conversion, notify Red Hat so that these problems can be addressed. Important If you are experiencing problems during the conversion, raise a Support case of Severity 3 or Severity 4 level only. For more details, see Production Support Terms of Service . Prerequisites The sos package is installed. You must use this package to generate an sosreport that is required when opening a support case for the Red Hat Support team. Procedure To obtain support, perform either of the following steps: Open a support case: Select the appropriate version of RHEL as the product, and provide an sosreport from your system. Generate an sosreport on your system: Note that you can leave the case ID empty. Submit a bug report : Open a bug, select the appropriate version of RHEL as the product, and select convert2rhel as the component. For details on generating an sosreport , For more information, see the Red Hat Knowledgebase solution What is an sosreport and how to create one in Red Hat Enterprise Linux? . 
For more information about opening and managing a support case on the Customer Portal, see the article How do I open and manage a support case on the Customer Portal? For information about Red Hat's support policy for Linux distribution conversions, see Convert2RHEL Support Policy. | [
"yum check dependencies",
"rhc disconnect",
"sed -ie 's%error%trace%' /etc/rhc/config.toml",
"insights-client --register",
"rhc connect -a <activation_key> -o <organization_ID>",
"journalctl -u rhcd",
"less /var/log/insights-client/insights-client.log",
"systemctl stop rhcd",
"sed -ie 's%error%trace%' /etc/rhc/config.toml",
"systemctl start rhcd",
"journalctl -u rhcd",
"less /var/log/rhc-worker-script/rhc-worker-script.log",
"curl -o /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release https://security.access.redhat.com/data/fd431d51.txt",
"curl -o /etc/yum.repos.d/client-tools.repo https://cdn-public.redhat.com/content/public/repofiles/client-tools-for-rhel-7-server.repo",
"curl -o /etc/yum.repos.d/client-tools.repo https://cdn-public.redhat.com/content/public/repofiles/client-tools-for-rhel-8.repo",
"curl -o /etc/yum.repos.d/client-tools.repo https://ftp.redhat.com/redhat/client-tools/client-tools-for-rhel-9.repo",
"sed -i 's%\\USDreleasever% <release_version> %' /etc/yum.repos.d/client-tools.repo",
"yum -y install subscription-manager subscription-manager-rhsm-certificates",
"subscription-manager register --org <organization_id> --activationkey <activation_key>",
"convert2rhel",
"yum install -y shim-x64",
"sosreport"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/converting_from_a_linux_distribution_to_rhel_using_the_convert2rhel_utility_in_red_hat_insights/assembly_troubleshooting-rhel-conversions_converting-from-a-linux-distribution-to-rhel-in-insights |
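The log files listed in Section 7.1 are usually the quickest way to find out why a conversion or pre-conversion analysis failed. The Python sketch below is hypothetical triage tooling, not part of Convert2RHEL: it assumes the log path documented above and that the level name (WARNING, ERROR, CRITICAL) appears verbatim on each relevant line, and it simply counts and echoes those entries so you can quote the exact failure when opening a support case.

```python
#!/usr/bin/env python3
"""Summarize warning/error/critical entries from the Convert2RHEL log.

Minimal triage sketch: the log path comes from the documentation above;
the substring-based level matching is an assumption, not a documented format.
"""
from collections import Counter
from pathlib import Path

LOG_PATH = Path("/var/log/convert2rhel/convert2rhel.log")
LEVELS = ("WARNING", "ERROR", "CRITICAL")


def summarize(path: Path = LOG_PATH) -> None:
    if not path.exists():
        print(f"{path} not found - was convert2rhel run on this system?")
        return
    counts = Counter()
    flagged = []
    for line in path.read_text(errors="replace").splitlines():
        for level in LEVELS:
            if level in line:
                counts[level] += 1
                flagged.append(line)
                break
    print({level: counts.get(level, 0) for level in LEVELS})
    # The last flagged entries usually point at the step that failed.
    for line in flagged[-10:]:
        print(line)


if __name__ == "__main__":
    summarize()
```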
5.5. Diagnosing Problems | 5.5. Diagnosing Problems The Enterprise Security Client includes basic diagnostic tools and a simple interface to log errors and common events, such as inserting and removing a smart card or changing the card's password. The diagnostic tools can identify and notify users about problems with the Enterprise Security Client, smart cards, and TPS connections. To open the Diagnostics Information window: Open the Smart Card Manager UI. Select the smart card to check from the list. Click the Diagnostics button. This opens the Diagnostic Information window for the selected smart card. The Diagnostics Information screen displays the following information: The Enterprise Security Client version number (listed as the Smart Card Manager version). The version information for the XULRunner framework upon which the client is running. The number of cards detected by the Enterprise Security Client. For each card detected, the following information is displayed: The version of the applet running on the smart card. The alpha-numeric ID of the smart card. The card's status, which can be any of the three things: NO_APPLET No key was detected. UNINITIALIZED. The key was detected, but no certificates have been enrolled. ENROLLED. The detected card has been enrolled with certificate and card information. The card's Phone Home URL. This is the URL from which all Phone Home information is obtained. The card issuer name, such as Example Corp. The card's answer-to-reset (ATR) string. This is a unique value that can be used to identify different classes of smart cards. For example: The TPS Phone Home URL. The TPS server URL. This is retrieved through Phone Home. The TPS enrollment form URL. This is retrieved through Phone Home. Detailed information about each certificate contained on the card. A running log of the most recent Enterprise Security Client errors and common events. The Enterprise Security Client records two types of diagnostic information. It records errors that are returned by the smart card, and it records events that have occurred through the Enterprise Security Client. It also returns basic information about the smart card configuration. 5.5.1. Errors The Enterprise Security Client does not recognize a card. Problems occur during a smart card operation, such as a certificate enrollment, password reset, or format operation. The Enterprise Security Client loses the connection to the smart card. This can happen when problems occur communicating with the PCSC daemon. The connection between the Enterprise Security Client and TPS is lost. Smart cards can report certain error codes to the TPS; these are recorded in the TPS's tps-debug.log or tps-error.log files, depending on the cause for the message. Table 5.1. 
Smart Card Error Codes (Return Code: Description)

General Error Codes
6400: No specific diagnosis
6700: Wrong length in Lc
6982: Security status not satisfied
6985: Conditions of use not satisfied
6a86: Incorrect P1 P2
6d00: Invalid instruction
6e00: Invalid class

Install Load Errors
6581: Memory Failure
6a80: Incorrect parameters in data field
6a84: Not enough memory space
6a88: Referenced data not found

Delete Errors
6200: Application has been logically deleted
6581: Memory failure
6985: Referenced data cannot be deleted
6a88: Referenced data not found
6a82: Application not found
6a80: Incorrect values in command data

Get Data Errors
6a88: Referenced data not found

Get Status Errors
6310: More data available
6a88: Referenced data not found
6a80: Incorrect values in command data

Load Errors
6581: Memory failure
6a84: Not enough memory space
6a86: Incorrect P1/P2
6985: Conditions of use not satisfied

5.5.2. Events
Simple events such as card insertions and removals, successfully completed operations, card operations that result in an error, and similar events.
Errors are reported from the TPS to the Enterprise Security Client.
The NSS crypto library is initialized.
Other low-level smart card events are detected. | [
"3BEC00FF8131FE45A0000000563333304A330600A1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/using_the_enterprise_security_client-diagnosing_problems |
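The return codes in Table 5.1 show up as raw hex values in the TPS debug and error logs, so a small lookup table can speed up log review. The Python sketch below is hypothetical helper code, not part of the Enterprise Security Client; the codes and descriptions for the general group are copied from the table above, and the other groups (install/load, delete, get data, get status, load) can be added in the same way.

```python
# Lookup for the general smart card return codes from Table 5.1.
# The describe_return_code helper is a hypothetical convenience; only the
# code/description pairs themselves come from the documentation.
GENERAL_ERROR_CODES = {
    "6400": "No specific diagnosis",
    "6700": "Wrong length in Lc",
    "6982": "Security status not satisfied",
    "6985": "Conditions of use not satisfied",
    "6a86": "Incorrect P1 P2",
    "6d00": "Invalid instruction",
    "6e00": "Invalid class",
}


def describe_return_code(code: str) -> str:
    """Return a readable description for a general return code, if known."""
    return GENERAL_ERROR_CODES.get(code.strip().lower(), "Unknown return code")


print(describe_return_code("6A86"))  # -> Incorrect P1 P2
```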
OAuth APIs | OAuth APIs OpenShift Container Platform 4.16 Reference guide for Oauth APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/oauth_apis/index |
Chapter 3. Container Images Based on Red Hat Software Collections 3.8 | Chapter 3. Container Images Based on Red Hat Software Collections 3.8 Component Description Supported architectures Daemon Images rhscl/nginx-120-rhel7 nginx 1.20 server and a reverse proxy server x86_64, s390x, ppc64le Database Images rhscl/redis-6-rhel7 Redis 6 key-value store x86_64, s390x, ppc64le Red Hat Developer Toolset Images rhscl/devtoolset-12-toolchain-rhel7 (available since November 2022) Red Hat Developer Toolset toolchain x86_64, s390x, ppc64le rhscl/devtoolset-12-perftools-rhel7 (available since November 2022) Red Hat Developer Toolset perftools x86_64, s390x, ppc64le rhscl/devtoolset-11-toolchain-rhel7 Red Hat Developer Toolset toolchain (EOL) x86_64, s390x, ppc64le rhscl/devtoolset-11-perftools-rhel7 Red Hat Developer Toolset perftools (EOL) x86_64, s390x, ppc64le Legend: x86_64 - AMD64 and Intel 64 architectures s390x - 64-bit IBM Z ppc64le - IBM POWER, little endian All images are based on components from Red Hat Software Collections. The images are available for Red Hat Enterprise Linux 7 through the Red Hat Container Registry. For detailed information about components provided by Red Hat Software Collections 3.8 , see the Red Hat Software Collections 3.8 Release Notes . For more information about the Red Hat Developer Toolset 11 components, see the Red Hat Developer Toolset 11 User Guide . For information about the Red Hat Developer Toolset 12 components, see the Red Hat Developer Toolset 12 User Guide . EOL images are no longer supported. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/RHSCL_3.8_images |
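To try one of the listed images locally, you can pull it from the Red Hat Container Registry and run it with podman. The Python wrapper below is only a convenience sketch: the full pull spec (registry.access.redhat.com/rhscl/nginx-120-rhel7) and the container port 8080 are assumptions based on the component name in the table, so verify both against the image's registry entry before relying on them.

```python
import subprocess

# Assumed pull spec; confirm the exact path in the Red Hat Container Registry.
IMAGE = "registry.access.redhat.com/rhscl/nginx-120-rhel7"


def run_nginx_image(host_port: int = 8080) -> None:
    """Pull the image and start a detached container mapping the web port."""
    subprocess.run(["podman", "pull", IMAGE], check=True)
    subprocess.run(
        ["podman", "run", "-d", "--name", "rhscl-nginx",
         "-p", f"{host_port}:8080", IMAGE],  # container port 8080 is an assumption
        check=True,
    )


if __name__ == "__main__":
    run_nginx_image()
```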
Chapter 4. AMQ Interconnect deployment guidelines | Chapter 4. AMQ Interconnect deployment guidelines To plan your router network and design the network topology, you must first understand the different router modes and how you can use them to create different types of networks. 4.1. Router operating modes In AMQ Interconnect, each router can operate in standalone , interior , or edge mode. In a router network, you deploy multiple interior routers or a combination of interior and edge routers to create the desired network topology. Standalone The router operates as a single, standalone network node. A standalone router cannot be used in a router network - it does not establish connections with other routers, and only routes messages between directly-connected endpoints. Interior The router is part of the interior of the router network. Interior routers establish connections with each other and automatically compute the lowest cost paths across the network. Edge The router maintains a single uplink connection to one or more interior routers. Edge routers do not participate in the routing protocol or route computation, but they enable you to efficiently scale the routing network. Note Performance of your router network is determined by various factors: topology number of routers underlying infrastructure (host resources, network speed, etc) 4.2. Security guidelines In the router network, the interior routers should be secured with a strong authentication mechanism in which they identify themselves to each other. You should choose and plan this authentication mechanism before creating the router network. Warning If the interior routers are not properly secured, unauthorized routers (or endpoints pretending to be routers) could join the router network, compromising its integrity and availability. You can choose a security mechanism that best fits your requirements. However, you should consider the following recommendations: Create an X.509 Certificate Authority (CA) to oversee the interior portion of the router network. Generate an individual certificate for each interior router. Each interior router can be configured to use the CA to authenticate connections from any other interior routers. Note Connections from edge routers and clients can use different levels of security, depending on your requirements. By using these recommendations, a new interior router cannot join the network until the owner of the CA issues a new certificate for the new router. In addition, an intruder wishing to spoof an interior router cannot do so because it would not have a valid X.509 certificate issued by the network's CA. 4.3. Router connection guidelines Before creating a router network, you should understand how routers connect to each other, and the factors that affect the direction in which an inter-router connection should be established. Inter-router connections are bidirectional When a connection is established between routers, message traffic flows in both directions across that connection. Each connection has a client side (a connector ) and a server side (a listener ) for the purposes of connection establishment. Once the connection is established, the two sides become equal participants in a bidirectional connection. For the purposes of routing AMQP traffic across the network, the direction of connection establishment is not relevant. 
Factors that affect the direction of connection establishment When establishing inter-router connections, you must choose which router will be the "listener" and which will be the "connector". There should be only one connection between any pair of routers. When determining the direction of inter-router connections in the network topology, consider the following factors: IP network boundaries and firewalls Generally, inter-router connections should always be established from more private to more public. For example, to connect a router in a private IP network to another router in a public location (such as a public cloud provider), the router in the private network must have the connector and the router in the public location must have the listener. This is because the public location cannot reach the private location by TCP/IP without the use of VPNs or other firewall features designed to allow public-to-private access. Network topology The topology of the router network may affect the direction in which connections should be established between the routers. For example, a star-topology that has a series of routers connected to one or two central "hub" routers should have listeners on the hub and connectors on the spokes. That way, new spoke routers may be added without changing the configuration of the hub. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/router-deployment-guidelines-router-rhel |
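The listener/connector split described above maps directly onto each router's configuration file. The fragment below is a sketch only: the entity and attribute names follow the usual qdrouterd.conf layout for interior routers, but the router IDs, host names, port, and sslProfile name are placeholders, and the matching sslProfile block (which would reference the network CA and per-router certificates recommended in the security guidelines) is omitted.

```
# Hub router (public side): accepts inter-router connections from the spokes.
router {
    mode: interior
    id: Hub.A
}
listener {
    host: 0.0.0.0
    port: 5671
    role: inter-router
    sslProfile: inter-router-tls
}

# Spoke router (private side): dials out to the hub.
router {
    mode: interior
    id: Spoke.B
}
connector {
    host: hub.example.com
    port: 5671
    role: inter-router
    sslProfile: inter-router-tls
}
```

With this arrangement the private spoke always initiates the connection toward the public hub, which satisfies the private-to-public guideline without requiring any inbound access to the private network.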
Chapter 4. Streams for Apache Kafka Bridge API Reference | Chapter 4. Streams for Apache Kafka Bridge API Reference 4.1. Overview The Streams for Apache Kafka Bridge provides a REST API for integrating HTTP based client applications with a Kafka cluster. You can use the API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol. 4.1.1. Version information Version : 0.1.0 4.1.2. Tags Consumers : Consumer operations to create consumers in your Kafka cluster and perform common actions, such as subscribing to topics, retrieving processed records, and committing offsets. Producer : Producer operations to send records to a specified topic or topic partition. Seek : Seek operations that enable a consumer to begin receiving messages from a given offset position. Topics : Topic operations to send messages to a specified topic or topic partition, optionally including message keys in requests. You can also retrieve topics and topic metadata. 4.1.3. Consumes application/json 4.1.4. Produces application/json 4.2. Definitions 4.2.1. AssignedTopicPartitions Type : < string, < integer (int32) > array > map 4.2.2. BridgeInfo Information about Kafka Bridge instance. Name Schema bridge_version optional string 4.2.3. Consumer Name Description Schema auto.offset.reset optional Resets the offset position for the consumer. If set to latest (default), messages are read from the latest offset. If set to earliest , messages are read from the first offset. string consumer.request.timeout.ms optional Sets the maximum amount of time, in milliseconds, for the consumer to wait for messages for a request. If the timeout period is reached without a response, an error is returned. Default is 30000 (30 seconds). integer enable.auto.commit optional If set to true (default), message offsets are committed automatically for the consumer. If set to false , message offsets must be committed manually. boolean fetch.min.bytes optional Sets the minimum amount of data, in bytes, for the consumer to receive. The broker waits until the data to send exceeds this amount. Default is 1 byte. integer format optional The allowable message format for the consumer, which can be binary (default) or json . The messages are converted into a JSON format. string isolation.level optional If set to read_uncommitted (default), all transaction records are retrieved, indpendent of any transaction outcome. If set to read_committed , the records from committed transactions are retrieved. string name optional The unique name for the consumer instance. The name is unique within the scope of the consumer group. The name is used in URLs. If a name is not specified, a randomly generated name is assigned. string 4.2.4. ConsumerRecord Name Schema headers optional KafkaHeaderList offset optional integer (int64) partition optional integer (int32) topic optional string 4.2.5. ConsumerRecordList Type : < ConsumerRecord > array 4.2.6. CreatedConsumer Name Description Schema base_uri optional Base URI used to construct URIs for subsequent requests against this consumer instance. string instance_id optional Unique ID for the consumer instance in the group. string 4.2.7. Error Name Schema error_code optional integer (int32) message optional string 4.2.8. KafkaHeader Name Description Schema key required string value required The header value in binary format, base64-encoded Pattern : "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?USD" string (byte) 4.2.9. 
KafkaHeaderList Type : < KafkaHeader > array 4.2.10. OffsetCommitSeek Name Schema offset required integer (int64) partition required integer (int32) topic required string 4.2.11. OffsetCommitSeekList Name Schema offsets optional < OffsetCommitSeek > array 4.2.12. OffsetRecordSent Name Schema offset optional integer (int64) partition optional integer (int32) 4.2.13. OffsetRecordSentList Name Schema offsets optional < OffsetRecordSent > array 4.2.14. OffsetsSummary Name Schema beginning_offset optional integer (int64) end_offset optional integer (int64) 4.2.15. Partition Name Schema partition optional integer (int32) topic optional string 4.2.16. PartitionMetadata Name Schema leader optional integer (int32) partition optional integer (int32) replicas optional < Replica > array 4.2.17. Partitions Name Schema partitions optional < Partition > array 4.2.18. ProducerRecord Name Schema headers optional KafkaHeaderList partition optional integer (int32) 4.2.19. ProducerRecordList Name Schema records optional < ProducerRecord > array 4.2.20. ProducerRecordToPartition Name Schema headers optional KafkaHeaderList 4.2.21. ProducerRecordToPartitionList Name Schema records optional < ProducerRecordToPartition > array 4.2.22. Replica Name Schema broker optional integer (int32) in_sync optional boolean leader optional boolean 4.2.23. SubscribedTopicList Name Schema partitions optional < AssignedTopicPartitions > array topics optional Topics 4.2.24. TopicMetadata Name Description Schema configs optional Per-topic configuration overrides < string, string > map name optional Name of the topic string partitions optional < PartitionMetadata > array 4.2.25. Topics Name Description Schema topic_pattern optional A regex topic pattern for matching multiple topics string topics optional < string > array 4.3. Paths 4.3.1. GET / 4.3.1.1. Description Retrieves information about the Kafka Bridge instance, in JSON format. 4.3.1.2. Responses HTTP Code Description Schema 200 Information about Kafka Bridge instance. BridgeInfo 4.3.1.3. Produces application/json 4.3.1.4. Example HTTP response 4.3.1.4.1. Response 200 { "bridge_version" : "0.16.0" } 4.3.2. POST /consumers/{groupid} 4.3.2.1. Description Creates a consumer instance in the given consumer group. You can optionally specify a consumer name and supported configuration options. It returns a base URI which must be used to construct URLs for subsequent requests against this consumer instance. 4.3.2.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group in which to create the consumer. string Body body required Name and configuration of the consumer. The name is unique within the scope of the consumer group. If a name is not specified, a randomly generated name is assigned. All parameters are optional. The supported configuration options are shown in the following example. Consumer 4.3.2.3. Responses HTTP Code Description Schema 200 Consumer created successfully. CreatedConsumer 409 A consumer instance with the specified name already exists in the Kafka Bridge. Error 422 One or more consumer configuration options have invalid values. Error 4.3.2.4. Consumes application/vnd.kafka.v2+json 4.3.2.5. Produces application/vnd.kafka.v2+json 4.3.2.6. Tags Consumers 4.3.2.7. Example HTTP request 4.3.2.7.1. Request body { "name" : "consumer1", "format" : "binary", "auto.offset.reset" : "earliest", "enable.auto.commit" : false, "fetch.min.bytes" : 512, "consumer.request.timeout.ms" : 30000, "isolation.level" : "read_committed" } 4.3.2.8. 
Example HTTP response 4.3.2.8.1. Response 200 { "instance_id" : "consumer1", "base_uri" : "http://localhost:8080/consumers/my-group/instances/consumer1" } 4.3.2.8.2. Response 409 { "error_code" : 409, "message" : "A consumer instance with the specified name already exists in the Kafka Bridge." } 4.3.2.8.3. Response 422 { "error_code" : 422, "message" : "One or more consumer configuration options have invalid values." } 4.3.3. DELETE /consumers/{groupid}/instances/{name} 4.3.3.1. Description Deletes a specified consumer instance. The request for this operation MUST use the base URL (including the host and port) returned in the response from the POST request to /consumers/{groupid} that was used to create this consumer. 4.3.3.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the consumer belongs. string Path name required Name of the consumer to delete. string 4.3.3.3. Responses HTTP Code Description Schema 204 Consumer removed successfully. No Content 404 The specified consumer instance was not found. Error 4.3.3.4. Consumes application/vnd.kafka.v2+json 4.3.3.5. Produces application/vnd.kafka.v2+json 4.3.3.6. Tags Consumers 4.3.3.7. Example HTTP response 4.3.3.7.1. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.4. POST /consumers/{groupid}/instances/{name}/assignments 4.3.4.1. Description Assigns one or more topic partitions to a consumer. 4.3.4.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the consumer belongs. string Path name required Name of the consumer to assign topic partitions to. string Body body required List of topic partitions to assign to the consumer. Partitions 4.3.4.3. Responses HTTP Code Description Schema 204 Partitions assigned successfully. No Content 404 The specified consumer instance was not found. Error 409 Subscriptions to topics, partitions, and patterns are mutually exclusive. Error 4.3.4.4. Consumes application/vnd.kafka.v2+json 4.3.4.5. Produces application/vnd.kafka.v2+json 4.3.4.6. Tags Consumers 4.3.4.7. Example HTTP request 4.3.4.7.1. Request body { "partitions" : [ { "topic" : "topic", "partition" : 0 }, { "topic" : "topic", "partition" : 1 } ] } 4.3.4.8. Example HTTP response 4.3.4.8.1. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.4.8.2. Response 409 { "error_code" : 409, "message" : "Subscriptions to topics, partitions, and patterns are mutually exclusive." } 4.3.5. POST /consumers/{groupid}/instances/{name}/offsets 4.3.5.1. Description Commits a list of consumer offsets. To commit offsets for all records fetched by the consumer, leave the request body empty. 4.3.5.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the consumer belongs. string Path name required Name of the consumer. string Body body optional List of consumer offsets to commit to the consumer offsets commit log. You can specify one or more topic partitions to commit offsets for. OffsetCommitSeekList 4.3.5.3. Responses HTTP Code Description Schema 204 Commit made successfully. No Content 404 The specified consumer instance was not found. Error 4.3.5.4. Consumes application/vnd.kafka.v2+json 4.3.5.5. Produces application/vnd.kafka.v2+json 4.3.5.6. Tags Consumers 4.3.5.7. Example HTTP request 4.3.5.7.1. 
Request body { "offsets" : [ { "topic" : "topic", "partition" : 0, "offset" : 15 }, { "topic" : "topic", "partition" : 1, "offset" : 42 } ] } 4.3.5.8. Example HTTP response 4.3.5.8.1. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.6. POST /consumers/{groupid}/instances/{name}/positions 4.3.6.1. Description Configures a subscribed consumer to fetch offsets from a particular offset the time it fetches a set of records from a given topic partition. This overrides the default fetch behavior for consumers. You can specify one or more topic partitions. 4.3.6.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the consumer belongs. string Path name required Name of the subscribed consumer. string Body body required List of partition offsets from which the subscribed consumer will fetch records. OffsetCommitSeekList 4.3.6.3. Responses HTTP Code Description Schema 204 Seek performed successfully. No Content 404 The specified consumer instance was not found, or the specified consumer instance did not have one of the specified partitions assigned. Error 4.3.6.4. Consumes application/vnd.kafka.v2+json 4.3.6.5. Produces application/vnd.kafka.v2+json 4.3.6.6. Tags Consumers Seek 4.3.6.7. Example HTTP request 4.3.6.7.1. Request body { "offsets" : [ { "topic" : "topic", "partition" : 0, "offset" : 15 }, { "topic" : "topic", "partition" : 1, "offset" : 42 } ] } 4.3.6.8. Example HTTP response 4.3.6.8.1. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.7. POST /consumers/{groupid}/instances/{name}/positions/beginning 4.3.7.1. Description Configures a subscribed consumer to seek (and subsequently read from) the first offset in one or more given topic partitions. 4.3.7.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the subscribed consumer belongs. string Path name required Name of the subscribed consumer. string Body body required List of topic partitions to which the consumer is subscribed. The consumer will seek the first offset in the specified partitions. Partitions 4.3.7.3. Responses HTTP Code Description Schema 204 Seek to the beginning performed successfully. No Content 404 The specified consumer instance was not found, or the specified consumer instance did not have one of the specified partitions assigned. Error 4.3.7.4. Consumes application/vnd.kafka.v2+json 4.3.7.5. Produces application/vnd.kafka.v2+json 4.3.7.6. Tags Consumers Seek 4.3.7.7. Example HTTP request 4.3.7.7.1. Request body { "partitions" : [ { "topic" : "topic", "partition" : 0 }, { "topic" : "topic", "partition" : 1 } ] } 4.3.7.8. Example HTTP response 4.3.7.8.1. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.8. POST /consumers/{groupid}/instances/{name}/positions/end 4.3.8.1. Description Configures a subscribed consumer to seek (and subsequently read from) the offset at the end of one or more of the given topic partitions. 4.3.8.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the subscribed consumer belongs. string Path name required Name of the subscribed consumer. string Body body optional List of topic partitions to which the consumer is subscribed. The consumer will seek the last offset in the specified partitions. Partitions 4.3.8.3. Responses HTTP Code Description Schema 204 Seek to the end performed successfully. 
No Content 404 The specified consumer instance was not found, or the specified consumer instance did not have one of the specified partitions assigned. Error 4.3.8.4. Consumes application/vnd.kafka.v2+json 4.3.8.5. Produces application/vnd.kafka.v2+json 4.3.8.6. Tags Consumers Seek 4.3.8.7. Example HTTP request 4.3.8.7.1. Request body { "partitions" : [ { "topic" : "topic", "partition" : 0 }, { "topic" : "topic", "partition" : 1 } ] } 4.3.8.8. Example HTTP response 4.3.8.8.1. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.9. GET /consumers/{groupid}/instances/{name}/records 4.3.9.1. Description Retrieves records for a subscribed consumer, including message values, topics, and partitions. The request for this operation MUST use the base URL (including the host and port) returned in the response from the POST request to /consumers/{groupid} that was used to create this consumer. 4.3.9.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the subscribed consumer belongs. string Path name required Name of the subscribed consumer to retrieve records from. string Query max_bytes optional The maximum size, in bytes, of unencoded keys and values that can be included in the response. Otherwise, an error response with code 422 is returned. integer Query timeout optional The maximum amount of time, in milliseconds, that the HTTP Bridge spends retrieving records before timing out the request. integer 4.3.9.3. Responses HTTP Code Description Schema 200 Poll request executed successfully. ConsumerRecordList 404 The specified consumer instance was not found. Error 406 The format used in the consumer creation request does not match the embedded format in the Accept header of this request or the bridge got a message from the topic which is not JSON encoded. Error 422 Response exceeds the maximum number of bytes the consumer can receive Error 4.3.9.4. Produces application/vnd.kafka.json.v2+json application/vnd.kafka.binary.v2+json application/vnd.kafka.text.v2+json application/vnd.kafka.v2+json 4.3.9.5. Tags Consumers 4.3.9.6. Example HTTP response 4.3.9.6.1. Response 200 [ { "topic" : "topic", "key" : "key1", "value" : { "foo" : "bar" }, "partition" : 0, "offset" : 2 }, { "topic" : "topic", "key" : "key2", "value" : [ "foo2", "bar2" ], "partition" : 1, "offset" : 3 } ] [ { "topic": "test", "key": "a2V5", "value": "Y29uZmx1ZW50", "partition": 1, "offset": 100, }, { "topic": "test", "key": "a2V5", "value": "a2Fma2E=", "partition": 2, "offset": 101, } ] 4.3.9.6.2. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.9.6.3. Response 406 { "error_code" : 406, "message" : "The `format` used in the consumer creation request does not match the embedded format in the Accept header of this request." } 4.3.9.6.4. Response 422 { "error_code" : 422, "message" : "Response exceeds the maximum number of bytes the consumer can receive" } 4.3.10. POST /consumers/{groupid}/instances/{name}/subscription 4.3.10.1. Description Subscribes a consumer to one or more topics. You can describe the topics to which the consumer will subscribe in a list (of Topics type) or as a topic_pattern field. Each call replaces the subscriptions for the subscriber. 4.3.10.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the subscribed consumer belongs. string Path name required Name of the consumer to subscribe to topics. 
string Body body required List of topics to which the consumer will subscribe. Topics 4.3.10.3. Responses HTTP Code Description Schema 204 Consumer subscribed successfully. No Content 404 The specified consumer instance was not found. Error 409 Subscriptions to topics, partitions, and patterns are mutually exclusive. Error 422 A list (of Topics type) or a topic_pattern must be specified. Error 4.3.10.4. Consumes application/vnd.kafka.v2+json 4.3.10.5. Produces application/vnd.kafka.v2+json 4.3.10.6. Tags Consumers 4.3.10.7. Example HTTP request 4.3.10.7.1. Request body { "topics" : [ "topic1", "topic2" ] } 4.3.10.8. Example HTTP response 4.3.10.8.1. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.10.8.2. Response 409 { "error_code" : 409, "message" : "Subscriptions to topics, partitions, and patterns are mutually exclusive." } 4.3.10.8.3. Response 422 { "error_code" : 422, "message" : "A list (of Topics type) or a topic_pattern must be specified." } 4.3.11. GET /consumers/{groupid}/instances/{name}/subscription 4.3.11.1. Description Retrieves a list of the topics to which the consumer is subscribed. 4.3.11.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the subscribed consumer belongs. string Path name required Name of the subscribed consumer. string 4.3.11.3. Responses HTTP Code Description Schema 200 List of subscribed topics and partitions. SubscribedTopicList 404 The specified consumer instance was not found. Error 4.3.11.4. Produces application/vnd.kafka.v2+json 4.3.11.5. Tags Consumers 4.3.11.6. Example HTTP response 4.3.11.6.1. Response 200 { "topics" : [ "my-topic1", "my-topic2" ], "partitions" : [ { "my-topic1" : [ 1, 2, 3 ] }, { "my-topic2" : [ 1 ] } ] } 4.3.11.6.2. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.12. DELETE /consumers/{groupid}/instances/{name}/subscription 4.3.12.1. Description Unsubscribes a consumer from all topics. 4.3.12.2. Parameters Type Name Description Schema Path groupid required ID of the consumer group to which the subscribed consumer belongs. string Path name required Name of the consumer to unsubscribe from topics. string 4.3.12.3. Responses HTTP Code Description Schema 204 Consumer unsubscribed successfully. No Content 404 The specified consumer instance was not found. Error 4.3.12.4. Tags Consumers 4.3.12.5. Example HTTP response 4.3.12.5.1. Response 404 { "error_code" : 404, "message" : "The specified consumer instance was not found." } 4.3.13. GET /healthy 4.3.13.1. Description Check if the bridge is running. This does not necessarily imply that it is ready to accept requests. 4.3.13.2. Responses HTTP Code Description Schema 204 The bridge is healthy No Content 500 The bridge is not healthy No Content 4.3.14. GET /metrics 4.3.14.1. Description Retrieves the bridge metrics in Prometheus format. 4.3.14.2. Responses HTTP Code Description Schema 200 Metrics in Prometheus format retrieved successfully. string 4.3.14.3. Produces text/plain 4.3.15. GET /openapi 4.3.15.1. Description Retrieves the OpenAPI v2 specification in JSON format. 4.3.15.2. Responses HTTP Code Description Schema 204 OpenAPI v2 specification in JSON format retrieved successfully. string 4.3.15.3. Produces application/json 4.3.16. GET /ready 4.3.16.1. Description Check if the bridge is ready and can accept requests. 4.3.16.2. 
Responses HTTP Code Description Schema 204 The bridge is ready No Content 500 The bridge is not ready No Content 4.3.17. GET /topics 4.3.17.1. Description Retrieves a list of all topics. 4.3.17.2. Responses HTTP Code Description Schema 200 List of topics. < string > array 4.3.17.3. Produces application/vnd.kafka.v2+json 4.3.17.4. Tags Topics 4.3.17.5. Example HTTP response 4.3.17.5.1. Response 200 [ "topic1", "topic2" ] 4.3.18. POST /topics/{topicname} 4.3.18.1. Description Sends one or more records to a given topic, optionally specifying a partition, key, or both. 4.3.18.2. Parameters Type Name Description Schema Path topicname required Name of the topic to send records to or retrieve metadata from. string Query async optional Whether to return immediately upon sending records, instead of waiting for metadata. No offsets will be returned if specified. Defaults to false. boolean Body body required ProducerRecordList 4.3.18.3. Responses HTTP Code Description Schema 200 Records sent successfully. OffsetRecordSentList 404 The specified topic was not found. Error 422 The record list is not valid. Error 4.3.18.4. Consumes application/vnd.kafka.json.v2+json application/vnd.kafka.binary.v2+json application/vnd.kafka.text.v2+json 4.3.18.5. Produces application/vnd.kafka.v2+json 4.3.18.6. Tags Producer Topics 4.3.18.7. Example HTTP request 4.3.18.7.1. Request body { "records" : [ { "key" : "key1", "value" : "value1" }, { "value" : "value2", "partition" : 1 }, { "value" : "value3" } ] } 4.3.18.8. Example HTTP response 4.3.18.8.1. Response 200 { "offsets" : [ { "partition" : 2, "offset" : 0 }, { "partition" : 1, "offset" : 1 }, { "partition" : 2, "offset" : 2 } ] } 4.3.18.8.2. Response 404 { "error_code" : 404, "message" : "The specified topic was not found." } 4.3.18.8.3. Response 422 { "error_code" : 422, "message" : "The record list contains invalid records." } 4.3.19. GET /topics/{topicname} 4.3.19.1. Description Retrieves the metadata about a given topic. 4.3.19.2. Parameters Type Name Description Schema Path topicname required Name of the topic to send records to or retrieve metadata from. string 4.3.19.3. Responses HTTP Code Description Schema 200 Topic metadata TopicMetadata 4.3.19.4. Produces application/vnd.kafka.v2+json 4.3.19.5. Tags Topics 4.3.19.6. Example HTTP response 4.3.19.6.1. Response 200 { "name" : "topic", "offset" : 2, "configs" : { "cleanup.policy" : "compact" }, "partitions" : [ { "partition" : 1, "leader" : 1, "replicas" : [ { "broker" : 1, "leader" : true, "in_sync" : true }, { "broker" : 2, "leader" : false, "in_sync" : true } ] }, { "partition" : 2, "leader" : 2, "replicas" : [ { "broker" : 1, "leader" : false, "in_sync" : true }, { "broker" : 2, "leader" : true, "in_sync" : true } ] } ] } 4.3.20. GET /topics/{topicname}/partitions 4.3.20.1. Description Retrieves a list of partitions for the topic. 4.3.20.2. Parameters Type Name Description Schema Path topicname required Name of the topic to send records to or retrieve metadata from. string 4.3.20.3. Responses HTTP Code Description Schema 200 List of partitions < PartitionMetadata > array 404 The specified topic was not found. Error 4.3.20.4. Produces application/vnd.kafka.v2+json 4.3.20.5. Tags Topics 4.3.20.6. Example HTTP response 4.3.20.6.1. 
Response 200 [ { "partition" : 1, "leader" : 1, "replicas" : [ { "broker" : 1, "leader" : true, "in_sync" : true }, { "broker" : 2, "leader" : false, "in_sync" : true } ] }, { "partition" : 2, "leader" : 2, "replicas" : [ { "broker" : 1, "leader" : false, "in_sync" : true }, { "broker" : 2, "leader" : true, "in_sync" : true } ] } ] 4.3.20.6.2. Response 404 { "error_code" : 404, "message" : "The specified topic was not found." } 4.3.21. POST /topics/{topicname}/partitions/{partitionid} 4.3.21.1. Description Sends one or more records to a given topic partition, optionally specifying a key. 4.3.21.2. Parameters Type Name Description Schema Path partitionid required ID of the partition to send records to or retrieve metadata from. integer Path topicname required Name of the topic to send records to or retrieve metadata from. string Query async optional Whether to return immediately upon sending records, instead of waiting for metadata. No offsets will be returned if specified. Defaults to false. boolean Body body required List of records to send to a given topic partition, including a value (required) and a key (optional). ProducerRecordToPartitionList 4.3.21.3. Responses HTTP Code Description Schema 200 Records sent successfully. OffsetRecordSentList 404 The specified topic partition was not found. Error 422 The record is not valid. Error 4.3.21.4. Consumes application/vnd.kafka.json.v2+json application/vnd.kafka.binary.v2+json application/vnd.kafka.text.v2+json 4.3.21.5. Produces application/vnd.kafka.v2+json 4.3.21.6. Tags Producer Topics 4.3.21.7. Example HTTP request 4.3.21.7.1. Request body { "records" : [ { "key" : "key1", "value" : "value1" }, { "value" : "value2" } ] } 4.3.21.8. Example HTTP response 4.3.21.8.1. Response 200 { "offsets" : [ { "partition" : 2, "offset" : 0 }, { "partition" : 1, "offset" : 1 }, { "partition" : 2, "offset" : 2 } ] } 4.3.21.8.2. Response 404 { "error_code" : 404, "message" : "The specified topic partition was not found." } 4.3.21.8.3. Response 422 { "error_code" : 422, "message" : "The record is not valid." } 4.3.22. GET /topics/{topicname}/partitions/{partitionid} 4.3.22.1. Description Retrieves partition metadata for the topic partition. 4.3.22.2. Parameters Type Name Description Schema Path partitionid required ID of the partition to send records to or retrieve metadata from. integer Path topicname required Name of the topic to send records to or retrieve metadata from. string 4.3.22.3. Responses HTTP Code Description Schema 200 Partition metadata PartitionMetadata 404 The specified topic partition was not found. Error 4.3.22.4. Produces application/vnd.kafka.v2+json 4.3.22.5. Tags Topics 4.3.22.6. Example HTTP response 4.3.22.6.1. Response 200 { "partition" : 1, "leader" : 1, "replicas" : [ { "broker" : 1, "leader" : true, "in_sync" : true }, { "broker" : 2, "leader" : false, "in_sync" : true } ] } 4.3.22.6.2. Response 404 { "error_code" : 404, "message" : "The specified topic partition was not found." } 4.3.23. GET /topics/{topicname}/partitions/{partitionid}/offsets 4.3.23.1. Description Retrieves a summary of the offsets for the topic partition. 4.3.23.2. Parameters Type Name Description Schema Path partitionid required ID of the partition. integer Path topicname required Name of the topic containing the partition. string 4.3.23.3. Responses HTTP Code Description Schema 200 A summary of the offsets for the topic partition. OffsetsSummary 404 The specified topic partition was not found. Error 4.3.23.4. 
Produces application/vnd.kafka.v2+json 4.3.23.5. Tags Topics 4.3.23.6. Example HTTP response 4.3.23.6.1. Response 200 { "beginning_offset" : 10, "end_offset" : 50 } 4.3.23.6.2. Response 404 { "error_code" : 404, "message" : "The specified topic partition was not found." } | [
"{ \"bridge_version\" : \"0.16.0\" }",
"{ \"name\" : \"consumer1\", \"format\" : \"binary\", \"auto.offset.reset\" : \"earliest\", \"enable.auto.commit\" : false, \"fetch.min.bytes\" : 512, \"consumer.request.timeout.ms\" : 30000, \"isolation.level\" : \"read_committed\" }",
"{ \"instance_id\" : \"consumer1\", \"base_uri\" : \"http://localhost:8080/consumers/my-group/instances/consumer1\" }",
"{ \"error_code\" : 409, \"message\" : \"A consumer instance with the specified name already exists in the Kafka Bridge.\" }",
"{ \"error_code\" : 422, \"message\" : \"One or more consumer configuration options have invalid values.\" }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"partitions\" : [ { \"topic\" : \"topic\", \"partition\" : 0 }, { \"topic\" : \"topic\", \"partition\" : 1 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"error_code\" : 409, \"message\" : \"Subscriptions to topics, partitions, and patterns are mutually exclusive.\" }",
"{ \"offsets\" : [ { \"topic\" : \"topic\", \"partition\" : 0, \"offset\" : 15 }, { \"topic\" : \"topic\", \"partition\" : 1, \"offset\" : 42 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"offsets\" : [ { \"topic\" : \"topic\", \"partition\" : 0, \"offset\" : 15 }, { \"topic\" : \"topic\", \"partition\" : 1, \"offset\" : 42 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"partitions\" : [ { \"topic\" : \"topic\", \"partition\" : 0 }, { \"topic\" : \"topic\", \"partition\" : 1 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"partitions\" : [ { \"topic\" : \"topic\", \"partition\" : 0 }, { \"topic\" : \"topic\", \"partition\" : 1 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"[ { \"topic\" : \"topic\", \"key\" : \"key1\", \"value\" : { \"foo\" : \"bar\" }, \"partition\" : 0, \"offset\" : 2 }, { \"topic\" : \"topic\", \"key\" : \"key2\", \"value\" : [ \"foo2\", \"bar2\" ], \"partition\" : 1, \"offset\" : 3 } ]",
"[ { \"topic\": \"test\", \"key\": \"a2V5\", \"value\": \"Y29uZmx1ZW50\", \"partition\": 1, \"offset\": 100, }, { \"topic\": \"test\", \"key\": \"a2V5\", \"value\": \"a2Fma2E=\", \"partition\": 2, \"offset\": 101, } ]",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"error_code\" : 406, \"message\" : \"The `format` used in the consumer creation request does not match the embedded format in the Accept header of this request.\" }",
"{ \"error_code\" : 422, \"message\" : \"Response exceeds the maximum number of bytes the consumer can receive\" }",
"{ \"topics\" : [ \"topic1\", \"topic2\" ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"error_code\" : 409, \"message\" : \"Subscriptions to topics, partitions, and patterns are mutually exclusive.\" }",
"{ \"error_code\" : 422, \"message\" : \"A list (of Topics type) or a topic_pattern must be specified.\" }",
"{ \"topics\" : [ \"my-topic1\", \"my-topic2\" ], \"partitions\" : [ { \"my-topic1\" : [ 1, 2, 3 ] }, { \"my-topic2\" : [ 1 ] } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"{ \"error_code\" : 404, \"message\" : \"The specified consumer instance was not found.\" }",
"[ \"topic1\", \"topic2\" ]",
"{ \"records\" : [ { \"key\" : \"key1\", \"value\" : \"value1\" }, { \"value\" : \"value2\", \"partition\" : 1 }, { \"value\" : \"value3\" } ] }",
"{ \"offsets\" : [ { \"partition\" : 2, \"offset\" : 0 }, { \"partition\" : 1, \"offset\" : 1 }, { \"partition\" : 2, \"offset\" : 2 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified topic was not found.\" }",
"{ \"error_code\" : 422, \"message\" : \"The record list contains invalid records.\" }",
"{ \"name\" : \"topic\", \"offset\" : 2, \"configs\" : { \"cleanup.policy\" : \"compact\" }, \"partitions\" : [ { \"partition\" : 1, \"leader\" : 1, \"replicas\" : [ { \"broker\" : 1, \"leader\" : true, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : false, \"in_sync\" : true } ] }, { \"partition\" : 2, \"leader\" : 2, \"replicas\" : [ { \"broker\" : 1, \"leader\" : false, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : true, \"in_sync\" : true } ] } ] }",
"[ { \"partition\" : 1, \"leader\" : 1, \"replicas\" : [ { \"broker\" : 1, \"leader\" : true, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : false, \"in_sync\" : true } ] }, { \"partition\" : 2, \"leader\" : 2, \"replicas\" : [ { \"broker\" : 1, \"leader\" : false, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : true, \"in_sync\" : true } ] } ]",
"{ \"error_code\" : 404, \"message\" : \"The specified topic was not found.\" }",
"{ \"records\" : [ { \"key\" : \"key1\", \"value\" : \"value1\" }, { \"value\" : \"value2\" } ] }",
"{ \"offsets\" : [ { \"partition\" : 2, \"offset\" : 0 }, { \"partition\" : 1, \"offset\" : 1 }, { \"partition\" : 2, \"offset\" : 2 } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified topic partition was not found.\" }",
"{ \"error_code\" : 422, \"message\" : \"The record is not valid.\" }",
"{ \"partition\" : 1, \"leader\" : 1, \"replicas\" : [ { \"broker\" : 1, \"leader\" : true, \"in_sync\" : true }, { \"broker\" : 2, \"leader\" : false, \"in_sync\" : true } ] }",
"{ \"error_code\" : 404, \"message\" : \"The specified topic partition was not found.\" }",
"{ \"beginning_offset\" : 10, \"end_offset\" : 50 }",
"{ \"error_code\" : 404, \"message\" : \"The specified topic partition was not found.\" }"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_the_streams_for_apache_kafka_bridge/api_reference-bridge |
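The Python sketch below strings several of the operations in this reference together: it creates a JSON-format consumer, subscribes it to a topic, produces a couple of records, polls them back, and deletes the consumer. The bridge address, consumer group, topic, and consumer name are illustrative assumptions; the paths, content types, and payload shapes are the ones documented above, and the requests library is used only for convenience.

```python
import requests

BRIDGE = "http://localhost:8080"        # assumed bridge address
GROUP, TOPIC = "my-group", "my-topic"   # assumed group and topic names
V2 = "application/vnd.kafka.v2+json"
JSON_V2 = "application/vnd.kafka.json.v2+json"

# 1. Create a JSON-format consumer in the group (POST /consumers/{groupid}).
resp = requests.post(
    f"{BRIDGE}/consumers/{GROUP}",
    json={"name": "consumer1", "format": "json", "auto.offset.reset": "earliest"},
    headers={"Content-Type": V2},
)
resp.raise_for_status()
base_uri = resp.json()["base_uri"]      # use this URI for all follow-up consumer calls

# 2. Subscribe the consumer to the topic (POST .../subscription).
requests.post(
    f"{base_uri}/subscription",
    json={"topics": [TOPIC]},
    headers={"Content-Type": V2},
).raise_for_status()

# 3. Produce two JSON records (POST /topics/{topicname}).
produced = requests.post(
    f"{BRIDGE}/topics/{TOPIC}",
    json={"records": [{"key": "key1", "value": {"foo": "bar"}}, {"value": "value2"}]},
    headers={"Content-Type": JSON_V2},
)
produced.raise_for_status()
print(produced.json())                  # partitions and offsets of the sent records

# 4. Poll records (GET .../records). The first poll can return an empty list
#    while the consumer joins the group, so real code should retry.
records = requests.get(
    f"{base_uri}/records",
    params={"timeout": 3000},
    headers={"Accept": JSON_V2},
)
records.raise_for_status()
print(records.json())

# 5. Clean up: delete the consumer instance (DELETE .../instances/{name}).
requests.delete(base_uri, headers={"Content-Type": V2}).raise_for_status()
```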
Chapter 2. AMQP | Chapter 2. AMQP Since Camel 1.2 Both producer and consumer are supported The AMQP component supports the AMQP 1.0 protocol using the JMS Client API of the Qpid project. 2.1. Dependencies When using camel-amqp with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-amqp-starter</artifactId> </dependency> 2.2. URI format amqp:[queue:|topic:]destinationName[?options] 2.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 2.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 2.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 2.4. Component Options The AMQP component supports 100 options, which are listed below. Name Description Default Type clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String includeAmqpAnnotations (common) Whether to include AMQP annotations when mapping from AMQP to Camel Message. Setting this to true maps AMQP message annotations that contain a JMS_AMQP_MA_ prefix to message headers. Due to limitations in Apache Qpid JMS API, currently delivery annotations are ignored. 
false boolean jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 
1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. 
Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToConsumerType (consumer (advanced)) The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. 
Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. 
See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). 
Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowAutoWiredConnectionFactory (advanced) Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true boolean allowAutoWiredDestinationResolver (advanced) Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean configuration (advanced) To use a shared JMS configuration. JmsConfiguration destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. 
true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean queueBrowseStrategy (advanced) To use a custom QueueBrowseStrategy when browsing queues. QueueBrowseStrategy receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. 
false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 2.5. Endpoint Options The AMQP endpoint is configured using URI syntax: with the following path and query parameters: 2.5.1. Path Parameters (2 parameters) Name Description Default Type destinationType (common) The kind of destination to use. Enum values: queue topic temp-queue temp-topic queue String destinationName (common) Required Name of the queue or topic to use as destination. String 2.5.2. Query Parameters (96 parameters) Name Description Default Type clientId (common) Sets the JMS client ID to use. 
Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. 
Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. 
false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToConsumerType (consumer (advanced)) The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 
1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. 
So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. 
For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. 
If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. 
The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. 
String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int
2.6. Usage
As the AMQP component is inherited from the JMS component, the usage of the former is almost identical to the latter:
Using AMQP component
// Consuming from AMQP queue
from("amqp:queue:incoming").
    to(...);

// Sending message to the AMQP topic
from(...).
    to("amqp:topic:notify");
2.7. Configuring AMQP component
Creating AMQP 1.0 component
AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672");

AMQPComponent authorizedAmqp = AMQPComponent.amqpComponent("amqp://localhost:5672", "user", "password");
You can also add an instance of org.apache.camel.component.amqp.AMQPConnectionDetails to the registry in order to automatically configure the AMQP component. For example, for Spring Boot you just have to define a bean:
AMQP connection details auto-configuration
@Bean
AMQPConnectionDetails amqpConnection() {
    return new AMQPConnectionDetails("amqp://localhost:5672");
}

@Bean
AMQPConnectionDetails securedAmqpConnection() {
    return new AMQPConnectionDetails("amqp://localhost:5672", "username", "password");
}
Likewise, you can also use CDI producer methods when using Camel-CDI.
AMQP connection details auto-configuration for CDI
@Produces
AMQPConnectionDetails amqpConnection() {
    return new AMQPConnectionDetails("amqp://localhost:5672");
}
You can also rely on the Camel properties to read the AMQP connection details. The factory method AMQPConnectionDetails.discoverAMQP() attempts to read Camel properties in a Kubernetes-like convention, as demonstrated in the snippet below:
AMQP connection details auto-configuration
export AMQP_SERVICE_HOST="mybroker.com"
export AMQP_SERVICE_PORT="6666"
export AMQP_SERVICE_USERNAME="username"
export AMQP_SERVICE_PASSWORD="password"

...

@Bean
AMQPConnectionDetails amqpConnection() {
    return AMQPConnectionDetails.discoverAMQP();
}
Enabling AMQP specific options
If you, for example, need to enable amqp.traceFrames you can do that by appending the option to your URI, like the following example:
AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672?amqp.traceFrames=true");
For reference, refer to the QPID JMS client configuration.
2.8. Using topics
To have topics working with camel-amqp you need to configure the component to use topic:// as the topic prefix, as shown below:
<bean id="amqp" class="org.apache.camel.component.amqp.AMQPComponent">
  <property name="connectionFactory">
    <bean class="org.apache.qpid.jms.JmsConnectionFactory" factory-method="createFromURL">
      <property name="remoteURI" value="amqp://localhost:5672" />
      <property name="topicPrefix" value="topic://" /> <!-- only necessary when connecting to ActiveMQ over AMQP 1.0 -->
    </bean>
  </property>
</bean>
Keep in mind that both AMQPComponent#amqpComponent() methods and AMQPConnectionDetails pre-configure the component with the topic prefix, so you don't have to configure it explicitly.
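The same topic prefix can also be set programmatically instead of via XML. The following is a minimal sketch, assuming a broker running at amqp://localhost:5672 and an existing CamelContext held in a camelContext variable; both are illustrative assumptions, not values mandated by the component:
import org.apache.camel.CamelContext;
import org.apache.camel.component.amqp.AMQPComponent;
import org.apache.qpid.jms.JmsConnectionFactory;

// Qpid JMS connection factory pointing at the broker; setting the topic prefix
// is only necessary when connecting to ActiveMQ over AMQP 1.0.
JmsConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://localhost:5672");
connectionFactory.setTopicPrefix("topic://");

// Register the component under the name "amqp" so that endpoints such as
// amqp:topic:notify use this connection factory.
AMQPComponent amqp = new AMQPComponent();
amqp.setConnectionFactory(connectionFactory);
camelContext.addComponent("amqp", amqp);
If you create the component through the AMQPComponent.amqpComponent(...) factory methods or AMQPConnectionDetails shown earlier, this explicit setTopicPrefix call is not needed.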
If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false Boolean camel.component.amqp.acknowledgement-mode-name The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. AUTO_ACKNOWLEDGE String camel.component.amqp.allow-additional-headers This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String camel.component.amqp.allow-auto-wired-connection-factory Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true Boolean camel.component.amqp.allow-auto-wired-destination-resolver Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true Boolean camel.component.amqp.allow-null-body Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true Boolean camel.component.amqp.allow-reply-manager-quick-stop Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false Boolean camel.component.amqp.allow-serialized-headers Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.amqp.always-copy-message If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false Boolean camel.component.amqp.artemis-consumer-priority Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. 
Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). Integer camel.component.amqp.artemis-streaming-enabled Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false Boolean camel.component.amqp.async-consumer Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false Boolean camel.component.amqp.async-start-listener Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false Boolean camel.component.amqp.async-stop-listener Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false Boolean camel.component.amqp.auto-startup Specifies whether the consumer container should auto-startup. true Boolean camel.component.amqp.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.amqp.cache-level Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. Integer camel.component.amqp.cache-level-name Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. CACHE_AUTO String camel.component.amqp.client-id Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String camel.component.amqp.concurrent-consumers Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). 
See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 Integer camel.component.amqp.configuration To use a shared JMS configuration. The option is a org.apache.camel.component.jms.JmsConfiguration type. JmsConfiguration camel.component.amqp.connection-factory The connection factory to be use. A connection factory must be configured either on the component or endpoint. The option is a javax.jms.ConnectionFactory type. ConnectionFactory camel.component.amqp.consumer-type The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. ConsumerType camel.component.amqp.correlation-property When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String camel.component.amqp.default-task-executor-type Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. DefaultTaskExecutorType camel.component.amqp.delivery-delay Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 Long camel.component.amqp.delivery-mode Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Integer camel.component.amqp.delivery-persistent Specifies whether persistent delivery is used by default. true Boolean camel.component.amqp.destination-resolver A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). The option is a org.springframework.jms.support.destination.DestinationResolver type. DestinationResolver camel.component.amqp.disable-reply-to Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. 
false Boolean camel.component.amqp.disable-time-to-live Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false Boolean camel.component.amqp.durable-subscription-name The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String camel.component.amqp.eager-loading-of-properties Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false Boolean camel.component.amqp.eager-poison-body If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String camel.component.amqp.enabled Whether to enable auto configuration of the amqp component. This is enabled by default. Boolean camel.component.amqp.error-handler Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. The option is a org.springframework.util.ErrorHandler type. ErrorHandler camel.component.amqp.error-handler-log-stack-trace Allows to control whether stacktraces should be logged or not, by the default errorHandler. true Boolean camel.component.amqp.error-handler-logging-level Allows to configure the default errorHandler logging level for logging uncaught exceptions. LoggingLevel camel.component.amqp.exception-listener Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. The option is a javax.jms.ExceptionListener type. ExceptionListener camel.component.amqp.explicit-qos-enabled Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean camel.component.amqp.expose-listener-session Specifies whether the listener session should be exposed when consuming messages. 
false Boolean camel.component.amqp.force-send-original-message When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false Boolean camel.component.amqp.format-date-headers-to-iso8601 Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false Boolean camel.component.amqp.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.amqp.idle-consumer-limit Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 Integer camel.component.amqp.idle-task-execution-limit Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 Integer camel.component.amqp.include-all-jmsx-properties Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false Boolean camel.component.amqp.include-amqp-annotations Whether to include AMQP annotations when mapping from AMQP to Camel Message. Setting this to true maps AMQP message annotations that contain a JMS_AMQP_MA_ prefix to message headers. Due to limitations in Apache Qpid JMS API, currently delivery annotations are ignored. false Boolean camel.component.amqp.include-sent-jms-message-id Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false Boolean camel.component.amqp.jms-key-format-strategy Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. JmsKeyFormatStrategy camel.component.amqp.jms-message-type Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. JmsMessageType camel.component.amqp.lazy-create-transaction-manager If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true Boolean camel.component.amqp.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.amqp.map-jms-message Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true Boolean camel.component.amqp.max-concurrent-consumers Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. Integer camel.component.amqp.max-messages-per-task The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 Integer camel.component.amqp.message-converter To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. The option is a org.springframework.jms.support.converter.MessageConverter type. MessageConverter camel.component.amqp.message-created-strategy To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.jms.MessageCreatedStrategy type. MessageCreatedStrategy camel.component.amqp.message-id-enabled When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true Boolean camel.component.amqp.message-listener-container-factory Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. The option is a org.apache.camel.component.jms.MessageListenerContainerFactory type. MessageListenerContainerFactory camel.component.amqp.message-timestamp-enabled Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true Boolean camel.component.amqp.password Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.amqp.preserve-message-qos Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. 
If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false Boolean camel.component.amqp.priority Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. 4 Integer camel.component.amqp.pub-sub-no-local Specifies whether to inhibit the delivery of messages published by its own connection. false Boolean camel.component.amqp.queue-browse-strategy To use a custom QueueBrowseStrategy when browsing queues. The option is a org.apache.camel.component.jms.QueueBrowseStrategy type. QueueBrowseStrategy camel.component.amqp.receive-timeout The timeout for receiving messages (in milliseconds). The option is a long type. 1000 Long camel.component.amqp.recovery-interval Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. The option is a long type. 5000 Long camel.component.amqp.reply-to Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String camel.component.amqp.reply-to-cache-level-name Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. String camel.component.amqp.reply-to-concurrent-consumers Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 Integer camel.component.amqp.reply-to-consumer-type The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. ConsumerType camel.component.amqp.reply-to-delivery-persistent Specifies whether to use persistent delivery by default for replies. true Boolean camel.component.amqp.reply-to-destination-selector-name Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String camel.component.amqp.reply-to-max-concurrent-consumers Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 
Integer camel.component.amqp.reply-to-on-timeout-max-concurrent-consumers Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 Integer camel.component.amqp.reply-to-override Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String camel.component.amqp.reply-to-same-destination-allowed Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false Boolean camel.component.amqp.reply-to-type Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. ReplyToType camel.component.amqp.request-timeout The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. The option is a long type. 20000 Long camel.component.amqp.request-timeout-checker-interval Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. The option is a long type. 1000 Long camel.component.amqp.selector Sets the JMS selector to use. String camel.component.amqp.stream-message-type-enabled Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false Boolean camel.component.amqp.subscription-durable Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false Boolean camel.component.amqp.subscription-name Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. 
The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String camel.component.amqp.subscription-shared Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false Boolean camel.component.amqp.synchronous Sets whether synchronous processing should be strictly used. false Boolean camel.component.amqp.task-executor Allows you to specify a custom task executor for consuming messages. The option is a org.springframework.core.task.TaskExecutor type. TaskExecutor camel.component.amqp.test-connection-on-startup Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false Boolean camel.component.amqp.time-to-live When sending messages, specifies the time-to-live of the message (in milliseconds). -1 Long camel.component.amqp.transacted Specifies whether to use transacted mode. false Boolean camel.component.amqp.transacted-in-out Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false Boolean camel.component.amqp.transaction-manager The Spring transaction manager to use. The option is a org.springframework.transaction.PlatformTransactionManager type. PlatformTransactionManager camel.component.amqp.transaction-name The name of the transaction to use. String camel.component.amqp.transaction-timeout The timeout value of the transaction (in seconds), if using transacted mode. 
-1 Integer camel.component.amqp.transfer-exception If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false Boolean camel.component.amqp.transfer-exchange You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false Boolean camel.component.amqp.use-message-id-as-correlation-id Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false Boolean camel.component.amqp.username Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.amqp.wait-for-provision-correlation-to-be-updated-counter Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 Integer camel.component.amqp.wait-for-provision-correlation-to-be-updated-thread-sleeping-time Interval in millis to sleep each time while waiting for provisional correlation id to be updated. The option is a long type. 100 Long | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-amqp-starter</artifactId> </dependency>",
"amqp:[queue:|topic:]destinationName[?options]",
"amqp:destinationType:destinationName",
"// Consuming from AMQP queue from(\"amqp:queue:incoming\"). to(...); // Sending message to the AMQP topic from(...). to(\"amqp:topic:notify\");",
"AMQPComponent amqp = AMQPComponent.amqpComponent(\"amqp://localhost:5672\"); AMQPComponent authorizedAmqp = AMQPComponent.amqpComponent(\"amqp://localhost:5672\", \"user\", \"password\");",
"@Bean AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails(\"amqp://localhost:5672\"); } @Bean AMQPConnectionDetails securedAmqpConnection() { return new AMQPConnectionDetails(\"amqp://localhost:5672\", \"username\", \"password\"); }",
"@Produces AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails(\"amqp://localhost:5672\"); }",
"export AMQP_SERVICE_HOST = \"mybroker.com\" export AMQP_SERVICE_PORT = \"6666\" export AMQP_SERVICE_USERNAME = \"username\" export AMQP_SERVICE_PASSWORD = \"password\" @Bean AMQPConnectionDetails amqpConnection() { return AMQPConnectionDetails.discoverAMQP(); }",
"AMQPComponent amqp = AMQPComponent.amqpComponent(\"amqp://localhost:5672?amqp.traceFrames=true\");",
"<bean id=\"amqp\" class=\"org.apache.camel.component.amqp.AmqpComponent\"> <property name=\"connectionFactory\"> <bean class=\"org.apache.qpid.jms.JmsConnectionFactory\" factory-method=\"createFromURL\"> <property name=\"remoteURI\" value=\"amqp://localhost:5672\" /> <property name=\"topicPrefix\" value=\"topic://\" /> <!-- only necessary when connecting to ActiveMQ over AMQP 1.0 --> </bean> </property> </bean>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-amqp-component-starter |
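Because the options in the table above map directly to Spring Boot configuration properties, you can also tune the component from an application.properties file instead of Java code. The following snippet is only a sketch: the values are illustrative placeholders, and the broker address and connection factory are still supplied separately, for example through the AMQPConnectionDetails bean shown earlier in this chapter. # application.properties - illustrative values only camel.component.amqp.transacted=false camel.component.amqp.concurrent-consumers=5 camel.component.amqp.max-concurrent-consumers=10 camel.component.amqp.request-timeout=20000 camel.component.amqp.include-amqp-annotations=true camel.component.amqp.username=camel-user camel.component.amqp.password=camel-password All of the property names above come from the auto-configuration table in this chapter; only the values are assumptions that you should adjust for your broker and workload.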
Chapter 2. Understanding ephemeral storage | Chapter 2. Understanding ephemeral storage 2.1. Overview In addition to persistent storage, pods and containers can require ephemeral or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Pods use ephemeral local storage for scratch space, caching, and logs. Issues related to the lack of local storage accounting and isolation include the following: Pods cannot detect how much local storage is available to them. Pods cannot request guaranteed local storage. Local storage is a best-effort resource. Pods can be evicted due to other pods filling the local storage, after which new pods are not admitted until sufficient storage is reclaimed. Unlike persistent volumes, ephemeral storage is unstructured and the space is shared between all pods running on a node, in addition to other uses by the system, the container runtime, and OpenShift Container Platform. The ephemeral storage framework allows pods to specify their transient local storage needs. It also allows OpenShift Container Platform to schedule pods where appropriate, and to protect the node against excessive use of local storage. While the ephemeral storage framework allows administrators and developers to better manage local storage, I/O throughput and latency are not directly effected. 2.2. Types of ephemeral storage Ephemeral local storage is always made available in the primary partition. There are two basic ways of creating the primary partition: root and runtime. Root This partition holds the kubelet root directory, /var/lib/kubelet/ by default, and /var/log/ directory. This partition can be shared between user pods, the OS, and Kubernetes system daemons. This partition can be consumed by pods through EmptyDir volumes, container logs, image layers, and container-writable layers. Kubelet manages shared access and isolation of this partition. This partition is ephemeral, and applications cannot expect any performance SLAs, such as disk IOPS, from this partition. Runtime This is an optional partition that runtimes can use for overlay file systems. OpenShift Container Platform attempts to identify and provide shared access along with isolation to this partition. Container image layers and writable layers are stored here. If the runtime partition exists, the root partition does not hold any image layer or other writable storage. 2.3. Ephemeral storage management Cluster administrators can manage ephemeral storage within a project by setting quotas that define the limit ranges and number of requests for ephemeral storage across all pods in a non-terminal state. Developers can also set requests and limits on this compute resource at the pod and container level. You can manage local ephemeral storage by specifying requests and limits. Each container in a pod can specify the following: spec.containers[].resources.limits.ephemeral-storage spec.containers[].resources.requests.ephemeral-storage Limits and requests for ephemeral storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following quantities all represent approximately the same value: 128974848, 129e6, 129M, and 123Mi. The case of the suffixes is significant. 
If you specify 400m of ephemeral storage, this requests 0.4 bytes, rather than 400 mebibytes (400Mi) or 400 megabytes (400M), which was probably what was intended. The following example shows a pod with two containers. Each container requests 2GiB of local ephemeral storage. The first container also sets a limit of 4GiB of local ephemeral storage. Therefore, the pod has a request of 4GiB of local ephemeral storage, and an overall limit of 4GiB of local ephemeral storage. apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: "2Gi" 1 limits: ephemeral-storage: "4Gi" 2 volumeMounts: - name: ephemeral mountPath: "/tmp" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: "2Gi" 3 volumeMounts: - name: ephemeral mountPath: "/tmp" volumes: - name: ephemeral emptyDir: {} 1 3 Request for local ephemeral storage. 2 Limit for local ephemeral storage. This setting in the pod spec affects how the scheduler makes a decision on scheduling pods, and also how the kubelet evicts pods. First of all, the scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node. In this case, the pod can be assigned to a node only if its available ephemeral storage (allocatable resource) is more than 4GiB. Secondly, at the container level, because the first container sets a resource limit, the kubelet eviction manager measures the disk usage of this container and evicts the pod if the storage usage of this container exceeds its limit (4GiB). At the pod level, the kubelet works out an overall pod storage limit by adding up the limits of all the containers in that pod. In this case, the total storage usage at the pod level is the sum of the disk usage from all containers plus the pod's emptyDir volumes. If this total usage exceeds the overall pod storage limit (4GiB), then the kubelet also marks the pod for eviction. For information about defining quotas for projects, see Quota setting per project . 2.4. Monitoring ephemeral storage You can use /bin/df as a tool to monitor ephemeral storage usage on the volume where ephemeral container data is located, which is /var/lib/kubelet and /var/lib/containers . The available space for only /var/lib/kubelet is shown when you use the df command if /var/lib/containers is placed on a separate disk by the cluster administrator. To show the human-readable values of used and available space in /var/lib , enter the following command: USD df -h /var/lib The output shows the ephemeral storage usage in /var/lib : Example output Filesystem Size Used Avail Use% Mounted on /dev/disk/by-partuuid/4cd1448a-01 69G 32G 34G 49% / | [
"apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: \"2Gi\" 1 limits: ephemeral-storage: \"4Gi\" 2 volumeMounts: - name: ephemeral mountPath: \"/tmp\" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: \"2Gi\" 3 volumeMounts: - name: ephemeral mountPath: \"/tmp\" volumes: - name: ephemeral emptyDir: {}",
"df -h /var/lib",
"Filesystem Size Used Avail Use% Mounted on /dev/disk/by-partuuid/4cd1448a-01 69G 32G 34G 49% /"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage/understanding-ephemeral-storage |
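In addition to per-pod requests and limits, a cluster administrator can cap ephemeral storage for a whole project with a resource quota. The following object is a minimal sketch: the namespace name and the quantities are placeholder assumptions, not recommendations. apiVersion: v1 kind: ResourceQuota metadata: name: ephemeral-storage-quota namespace: example-project spec: hard: requests.ephemeral-storage: 20Gi limits.ephemeral-storage: 40Gi With such a quota in place, the sum of ephemeral storage requests and limits across all pods in the project cannot exceed the stated values, which complements the per-container settings shown in the example pod above.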
Chapter 6. DNS (designate) parameters | Chapter 6. DNS (designate) parameters You can modify the designate service with DNS parameters. Parameter Description DesignateBindQueryLogging Set to true to enable logging of queries on BIND. The default value is false . DesignateManagedResourceEmail Configure email address to be set in zone SOAs. Leaving unset results in service defaults being used. DesignateMdnsProxyBasePort Configure the base port for the MiniDNS proxy endpoints on the external/public access network. The default value is 16000 . DesignateMinTTL Configure the minimum allowable TTL in seconds. The default value is 0 which leaves the parameter unset. The default value is 0 . DesignateWorkers Number of workers for Designate services. The default value is 0 . UnboundAllowedCIDRs A list of CIDRs allowed to make queries through Unbound. Example, [ 192.0.2.0/24 , 198.51.100.0/24 ]. UnboundAllowRecursion When false, Unbound will not attempt to recursively resolve the request. It will only answer for queries using local information. The default value is true . UnboundDesignateIntegration Set to false to disable configuring neutron using the deployed unbound server as the default resolver. The default value is true . UnboundForwardFallback When true, if the forwarded query receives a SERVFAIL, Unbound will process the request as a standard recursive resolution. The default value is true . UnboundForwardResolvers A list of DNS resolver IP addresses, with optional port, that Unbound will forward resolution requests to if Unbound does not have the answer. Example, [ 192.0.2.10 , 192.0.2.20@53 ]. UnboundLogQueries If true, Unbound will log the query requests. The default value is false . UnboundSecurityHarden When true, Unbound will block certain queries that could have security implications to the Unbound service. The default value is true . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/overcloud_parameters/ref_dns-designate-parameters_overcloud_parameters |
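These parameters are normally supplied through a custom environment file that you include when deploying or updating the overcloud. The file name and all values below are illustrative assumptions only; check each parameter against the table above before use. parameter_defaults: DesignateWorkers: 4 DesignateMinTTL: 300 DesignateManagedResourceEmail: hostmaster@example.com UnboundAllowedCIDRs: ['192.0.2.0/24', '198.51.100.0/24'] UnboundLogQueries: false Include the environment file with the -e option of the openstack overcloud deploy command so that the values override the service defaults.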
Chapter 11. JGroups subsystem tuning | Chapter 11. JGroups subsystem tuning For optimal network performance it is recommended that you use UDP multicast for JGroups in environments that support it. Note TCP has more overhead and is often considered slower than UDP since it handles error checking, packet ordering, and congestion control itself. JGroups handles these features for UDP, whereas TCP guarantees them itself. TCP is a good choice when using JGroups on unreliable or high congestion networks, or when multicast is not available. This chapter assumes that you have chosen your JGroups stack transport protocol (UDP or TCP) and communications protocols that JGroups cluster communications will use. 11.1. Monitoring JGroups statistics You can enable statistics for the jgroups subsystem to monitor JBoss EAP clustering using the management CLI or through JMX. Note Enabling statistics adversely affects performance. Only enable statistics when necessary. Procedure Use the following command to enable statistics for a JGroups channel. Note In a managed domain, precede these commands with /profile=PROFILE_NAME . For example, use the following command to enable statistics for the default ee channel. Reload the JBoss EAP server. You can now see JGroups statistics using either the management CLI, or through JMX with a JVM monitoring tool: To use the management CLI, use the :read-resource(include-runtime=true) command on the JGroups channel or protocol that you want to see the statistics for. Note In a managed domain, precede these commands with /host=HOST_NAME/server=SERVER_NAME . For example: To see the statistics for the ee channel, use the following command: To see the statistics for the FD_ALL protocol in the ee channel, use the following command: To connect to JBoss EAP using a JVM monitoring tool, see the Monitoring Performance chapter. You can see the statistics on JGroups MBeans through the JMX connection. 11.2. Networking and jumbo frames Where possible, it is recommended that the network interface for JGroups traffic should be part of a dedicated Virtual Local Area Network (VLAN). This allows you to separate cluster communications from other JBoss EAP network traffic to more easily control cluster network performance, throughput, and security. Another network configuration to consider to improve cluster performance is to enable jumbo frames. If your network environment supports it, enabling jumbo frames by increasing the Maximum Transmission Unit (MTU) can help boost network performance, especially in high throughput environments. To use jumbo frames, all NICs and switches in your network must support it. Additional resources See the Red Hat Customer Portal for instructions on enabling jumbo frames for Red Hat Enterprise Linux . 11.3. Message bundling Message bundling in JGroups improves network performance by assembling multiple small messages into larger bundles. Rather than sending out many small messages over the network to cluster nodes, instead messages are queued until the maximum bundle size is reached or there are no more messages to send. The queued messages are assembled into a larger message bundle and then sent. This bundling reduces communications overhead, especially in TCP environments where there is a higher overhead for network communications. 11.3.1. Configuring message bundling JGroups message bundling is configured using the max_bundle_size property. The default max_bundle_size is 64KB. 
The performance improvements of tuning the bundle size depend on your environment, and whether more efficient network traffic is balanced against a possible delay of communications while the bundle is assembled. Procedure Use the following management CLI command to configure max_bundle_size . For example, to set max_bundle_size to 60K for the default udp stack: 11.4. JGroups thread pools The jgroups subsystem uses its own thread pools for processing cluster communication. JGroups contains thread pools for default , internal , oob , and timer functions which you can configure individually. Each JGroups thread pool includes configurable attributes for keepalive-time , max-threads , min-threads , and queue-length . Appropriate values for each thread pool attribute depend on your environment, but for most situations the default values should suffice. 11.5. JGroups send and receive buffers The jgroups subsystem has configurable send and receive buffers for both UDP and TCP stacks. Appropriate values for JGroups buffers depend on your environment, but for most situations the default values should suffice. It is recommended that you test your cluster under load in a development environment to tune appropriate values for the buffer sizes. Note Your operating system may limit the available buffer sizes and JBoss EAP may not be able to use its configured buffer values. | [
"/subsystem=jgroups/channel=CHANNEL_NAME:write-attribute(name=statistics-enabled,value=true)",
"/subsystem=jgroups/channel=ee:write-attribute(name=statistics-enabled,value=true)",
"reload",
"/subsystem=jgroups/channel=ee:read-resource(include-runtime=true)",
"/subsystem=jgroups/channel=ee/protocol=FD_ALL:read-resource(include-runtime=true)",
"/subsystem=jgroups/stack=STACK_NAME/transport=TRANSPORT_TYPE/property=max_bundle_size:add(value=BUNDLE_SIZE)",
"/subsystem=jgroups/stack=udp/transport=UDP/property=max_bundle_size:add(value=60K)"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/performance_tuning_for_red_hat_jboss_enterprise_application_platform/assembly-jgroups-tuning_performance-tuning-guide |
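The thread pool and buffer settings described above are adjusted with the same management CLI patterns used elsewhere in this chapter. The commands below are a sketch only: they assume the default udp stack, use arbitrary sizing values, and the exact resource and property names should be verified against your JBoss EAP version before applying them in production. /subsystem=jgroups/stack=udp/transport=UDP/thread-pool=default:write-attribute(name=max-threads,value=200) /subsystem=jgroups/stack=udp/transport=UDP/thread-pool=default:write-attribute(name=queue-length,value=500) /subsystem=jgroups/stack=udp/transport=UDP/property=ucast_recv_buf_size:add(value=20M) /subsystem=jgroups/stack=udp/transport=UDP/property=mcast_recv_buf_size:add(value=25M) reload As with message bundling, test the cluster under a representative load before settling on values, because oversized pools and buffers can waste memory without improving throughput.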
Chapter 10. Using cups-browsed to locally integrate printers from a remote print server | Chapter 10. Using cups-browsed to locally integrate printers from a remote print server The cups-browsed service uses DNS service discovery (DNS-SD) and CUPS browsing to make all or a filtered subset of shared remote printers automatically available in a local CUPS service. For example, administrators can use this feature on workstations to make only printers from a trusted print server available in a print dialog of applications. It is also possible to configure cups-browsed to filter the browsed printers by certain criteria to reduce the number of listed printers if a print server shares a large number of printers. Note If the print dialog in an application uses a mechanism other than DNS-SD, for example, to list remote printers, cups-browsed has no influence. The cups-browsed service also does not prevent users from manually accessing non-listed printers. Prerequisites The CUPS service is configured on the local host . A remote CUPS print server exists, and the following conditions apply to this server: The server listens on an interface that is accessible from the client. The Allow from parameter in the server's <Location /> directive in the /etc/cups/cupsd.conf file allows access from the client's IP address. The server shares printers. Firewall rules allow access from the client to the CUPS port on the server. Procedure Edit the /etc/cups/cups-browsed.conf file, and make the following changes: Add BrowsePoll parameters for each remote CUPS server you want to poll: Append : <port> to the hostname or IP address if the remote CUPS server listens on a port different from 631. Optional: Configure a filter to limit which printers are shown in the local CUPS service. For example, to filter for queues whose name contains sales_ , add: You can filter by different field names, negate the filter, and match the exact values. For further details, see the parameter description and examples in the cups-browsed.conf(5) man page on your system. Optional: Change the polling interval and timeout to limit the number of browsing cycles: Increase both BrowseInterval and BrowseTimeout in the same ratio to avoid situations in which printers disappear from the browsing list. This means: multiply the value of BrowseInterval by 5 or a higher integer, and use the result for BrowseTimeout . By default, cups-browsed polls remote servers every 60 seconds and the timeout is 300 seconds. However, on print servers with many queues, these default values can consume a lot of resources. Enable and start the cups-browsed service: Verification List the available printers: If the output for a printer contains implicitclass , cups-browsed manages the printer in CUPS. Additional resources cups-browsed.conf(5) man page on your system | [
"BrowsePoll remote_cups_server.example.com BrowsePoll 192.0.2.100:1631",
"BrowseFilter name sales_",
"BrowseInterval 1200 BrowseTimeout 6000",
"systemctl enable --now cups-browsed",
"lpstat -v device for Demo-printer : implicitclass:// Demo-printer /"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_a_cups_printing_server/using-cups-browsed-to-locally-integrate-printers-from-a-remote-print-server_configuring-printing |
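Putting the individual steps together, a complete /etc/cups/cups-browsed.conf for the scenario in this chapter could look like the following. The server name, filter value, and intervals are placeholders for your environment. BrowsePoll print-server.example.com:631 BrowseFilter name sales_ BrowseInterval 1200 BrowseTimeout 6000 After editing the file, restart the service so the new settings take effect: systemctl restart cups-browsed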
function::warn | function::warn Name function::warn - Send a line to the warning stream Synopsis Arguments msg The formatted message string Description This function sends a warning message immediately to staprun. The message is also sent over the bulk transport (relayfs) if it is being used. If the last character is not a newline, one is added. | [
"warn(msg:string)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-warn |
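The following short SystemTap script is one way warn might be used in practice; the probe points and the threshold are arbitrary examples and are not part of the tapset itself. global reads # Count vfs.read calls and warn if the rate over a 10 second window looks unusually high. probe vfs.read { reads++ } probe timer.s(10) { if (reads > 100000) warn(sprintf("high read rate: %d vfs.read calls in the last 10 seconds", reads)) reads = 0 } Because warn appends a newline when the message does not end with one, the sprintf call above does not need to add its own.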
Chapter 1. About Observability | Chapter 1. About Observability Red Hat OpenShift Observability provides real-time visibility, monitoring, and analysis of various system metrics, logs, traces, and events to help users quickly diagnose and troubleshoot issues before they impact systems or applications. To help ensure the reliability, performance, and security of your applications and infrastructure, OpenShift Container Platform offers the following Observability components: Monitoring Logging Distributed tracing Red Hat build of OpenTelemetry Network Observability Power monitoring Red Hat OpenShift Observability connects open-source observability tools and technologies to create a unified Observability solution. The components of Red Hat OpenShift Observability work together to help you collect, store, deliver, analyze, and visualize data. Note With the exception of monitoring, Red Hat OpenShift Observability components have distinct release cycles separate from the core OpenShift Container Platform release cycles. See the Red Hat OpenShift Operator Life Cycles page for their release compatibility. 1.1. Monitoring Monitor the in-cluster health and performance of your applications running on OpenShift Container Platform with metrics and customized alerts for CPU and memory usage, network connectivity, and other resource usage. Monitoring stack components are deployed and managed by the Cluster Monitoring Operator. Monitoring stack components are deployed by default in every OpenShift Container Platform installation and are managed by the Cluster Monitoring Operator (CMO). These components include Prometheus, Alertmanager, Thanos Querier, and others. The CMO also deploys the Telemeter Client, which sends a subset of data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. For more information, see About OpenShift Container Platform monitoring and About remote health monitoring . 1.2. Logging Collect, visualize, forward, and store log data to troubleshoot issues, identify performance bottlenecks, and detect security threats. In logging 5.7 and later versions, users can configure the LokiStack deployment to produce customized alerts and recorded metrics. 1.3. Distributed tracing Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use it for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications. For more information, see Distributed tracing architecture . 1.4. Red Hat build of OpenTelemetry Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software's performance and behavior. Use open-source back ends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate. For more information, see Red Hat build of OpenTelemetry . 1.5. Network Observability Observe the network traffic for OpenShift Container Platform clusters and create network flows with the Network Observability Operator. View and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. For more information, see Network Observability overview . 1.6. 
Power monitoring Monitor the power usage of workloads and identify the most power-consuming namespaces running in a cluster with key power consumption metrics, such as CPU or DRAM measured at the container level. Visualize energy-related system statistics with the Power monitoring Operator. For more information, see Power monitoring overview . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/observability_overview/observability-overview |
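A quick way to confirm that the default monitoring stack described above is healthy is to list the components that the Cluster Monitoring Operator manages and to check the monitoring ClusterOperator status. This is only a minimal sketch; the exact pod names and counts vary with the cluster version and any custom configuration:
oc -n openshift-monitoring get pods
oc get clusteroperator monitoring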
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/proc_providing-feedback-on-red-hat-documentation_managing-monitoring-and-updating-the-kernel |
7.2. Configure Bonding Using the Text User Interface, nmtui | 7.2. Configure Bonding Using the Text User Interface, nmtui The text user interface tool nmtui can be used to configure bonding in a terminal window. Issue the following command to start the tool: The text user interface appears. Any invalid command prints a usage message. To navigate, use the arrow keys or press Tab to step forwards and press Shift + Tab to step back through the options. Press Enter to select an option. The Space bar toggles the status of a check box. From the starting menu, select Edit a connection . Select Add ; the New Connection screen opens. Figure 7.1. The NetworkManager Text User Interface Add a Bond Connection menu Select Bond and then Create ; the Edit connection screen for the bond will open. Figure 7.2. The NetworkManager Text User Interface Configuring a Bond Connection menu At this point, port interfaces need to be added to the bond; to add one, select Add and the New Connection screen opens. Once the type of Connection has been chosen, select the Create button. Figure 7.3. The NetworkManager Text User Interface Configuring a New Bond Slave Connection menu The port's Edit Connection display appears; enter the required port's device name or MAC address in the Device section. If required, enter a clone MAC address to be used as the bond's MAC address by selecting Show to the right of the Ethernet label. Select the OK button to save the port. Note If the device is specified without a MAC address, the Device section will be automatically populated once the Edit Connection window is reloaded, but only if it successfully finds the device. Figure 7.4. The NetworkManager Text User Interface Configuring a Bond Slave Connection menu The name of the bond port appears in the Slaves section. Repeat the above steps to add further port connections. Review and confirm the settings before selecting the OK button. Figure 7.5. The NetworkManager Text User Interface Completed Bond See Section 7.8.1.1, "Configuring the Bond Tab" for definitions of the bond terms. See Section 3.2, "Configuring IP Networking with nmtui" for information on installing nmtui . | [
"~]USD nmtui"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configure_Bonding_Using_the_Text_User_Interface_nmtui |
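For reference, an equivalent bond can be created non-interactively with nmcli instead of nmtui. The following is a minimal sketch that assumes two Ethernet ports named eth1 and eth2 and the active-backup mode; substitute your own device names and bonding options:
~]# nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
~]# nmcli con add type bond-slave con-name bond0-port1 ifname eth1 master bond0
~]# nmcli con add type bond-slave con-name bond0-port2 ifname eth2 master bond0
~]# nmcli con up bond0-port1
~]# nmcli con up bond0-port2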
1.7. Security Cannot be an Afterthought | 1.7. Security Cannot be an Afterthought No matter what you might think about the environment in which your systems are running, you cannot take security for granted. Even standalone systems not connected to the Internet may be at risk (although obviously the risks will be different from a system that has connections to the outside world). Therefore, it is extremely important to consider the security implications of everything you do. The following list illustrates the different kinds of issues you should consider: The nature of possible threats to each of the systems under your care The location, type, and value of the data on those systems The type and frequency of authorized access to the systems While you are thinking about security, do not make the mistake of assuming that possible intruders will only attack your systems from outside of your company. Many times the perpetrator is someone within the company. So the next time you walk around the office, look at the people around you and ask yourself this question: What would happen if that person were to attempt to subvert our security? Note This does not mean that you should treat your coworkers as if they are criminals. It just means that you should look at the type of work that each person performs and determine what types of security breaches a person in that position could perpetrate, if they were so inclined. 1.7.1. The Risks of Social Engineering While most system administrators' first reaction when they think about security is to concentrate on the technological aspects, it is important to maintain perspective. Quite often, security breaches do not have their origins in technology, but in human nature. People interested in breaching security often use human nature to entirely bypass technological access controls. This is known as social engineering . Here is an example: The second shift operator receives an outside phone call. The caller claims to be your organization's CFO (the CFO's name and background information were obtained from your organization's website, on the "Management Team" page). The caller claims to be calling from some place halfway around the world (maybe this part of the story is a complete fabrication, or perhaps your organization's website has a recent press release that makes mention of the CFO attending a tradeshow). The caller tells a tale of woe; his laptop was stolen at the airport, and he is with an important customer and needs access to the corporate intranet to check on the customer's account status. Would the operator be so kind as to give him the necessary access information? Do you know what your operator would do? Unless your operator has guidance (in the form of policies and procedures), you very likely do not know for sure. Like traffic lights, the goal of policies and procedures is to provide unambiguous guidance as to what is and is not appropriate behavior. However, just as with traffic lights, policies and procedures only work if everyone follows them. And there is the crux of the problem -- it is unlikely that everyone will adhere to your policies and procedures. In fact, depending on the nature of your organization, it is possible that you do not even have sufficient authority to define policies, much less enforce them. What then? Unfortunately, there are no easy answers. User education can help; do everything you can to help make your user community aware of security and social engineering. Give lunchtime presentations about security.
Post pointers to security-related news articles on your organization's mailing lists. Make yourself available as a sounding board for users' questions about things that do not seem quite right. In short, get the message out to your users any way you can. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-philosophy-security |
Chapter 8. API configuration examples | Chapter 8. API configuration examples 8.1. external_registry_config object reference { "is_enabled": True, "external_reference": "quay.io/redhat/quay", "sync_interval": 5000, "sync_start_date": datetime(2020, 0o1, 0o2, 6, 30, 0), "external_registry_username": "fakeUsername", "external_registry_password": "fakePassword", "external_registry_config": { "verify_tls": True, "unsigned_images": False, "proxy": { "http_proxy": "http://insecure.proxy.corp", "https_proxy": "https://secure.proxy.corp", "no_proxy": "mylocalhost", }, }, } 8.2. rule_rule object reference { "root_rule": {"rule_kind": "tag_glob_csv", "rule_value": ["latest", "foo", "bar"]}, } | [
"{ \"is_enabled\": True, \"external_reference\": \"quay.io/redhat/quay\", \"sync_interval\": 5000, \"sync_start_date\": datetime(2020, 0o1, 0o2, 6, 30, 0), \"external_registry_username\": \"fakeUsername\", \"external_registry_password\": \"fakePassword\", \"external_registry_config\": { \"verify_tls\": True, \"unsigned_images\": False, \"proxy\": { \"http_proxy\": \"http://insecure.proxy.corp\", \"https_proxy\": \"https://secure.proxy.corp\", \"no_proxy\": \"mylocalhost\", }, }, }",
"{ \"root_rule\": {\"rule_kind\": \"tag_glob_csv\", \"rule_value\": [\"latest\", \"foo\", \"bar\"]}, }"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_api_guide/api-config-examples |
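The object references above are written in Python literal syntax (for example, datetime(2020, 0o1, 0o2, 6, 30, 0) ). When the same configuration is submitted to the Red Hat Quay API it must be sent as plain JSON, with the timestamp serialized as a string. The following sketch writes such a payload to a file and pretty-prints it for validation; the ISO 8601 timestamp format and the file name are assumptions for illustration, not values taken from the reference above:
cat > mirror-config.json <<'EOF'
{
  "is_enabled": true,
  "external_reference": "quay.io/redhat/quay",
  "sync_interval": 5000,
  "sync_start_date": "2020-01-02T06:30:00Z",
  "external_registry_username": "fakeUsername",
  "external_registry_password": "fakePassword",
  "external_registry_config": {
    "verify_tls": true,
    "unsigned_images": false,
    "proxy": {
      "http_proxy": "http://insecure.proxy.corp",
      "https_proxy": "https://secure.proxy.corp",
      "no_proxy": "mylocalhost"
    }
  }
}
EOF
python3 -m json.tool mirror-config.json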
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes included in the latest OpenJDK 11.0.17 release of Eclipse Temurin, see OpenJDK 11.0.17 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 11.0.17 release: Disabled cpu.shares parameter Before the OpenJDK 11.0.17 release, OpenJDK used an incorrect interpretation of the cpu.shares parameter, which belongs to Linux control groups, also known as cgroups . The parameter might cause a Java Virtual machine (JVM) to use fewer CPUs than available, which can impact the JVM's CPU resources and performance when it operates inside a container. The OpenJDK 11.0.17 release configures a JVM to no longer use the cpu.shares parameter when determining the number of threads for a thread pool. If you want to revert this configuration, pass the -XX:+UseContainerCpuShares argument on JVM startup. Note The -XX:+UseContainerCpuShares argument is a deprecated feature and might be removed in a future OpenJDK release. See JDK-8281181 (JDK Bug System). jdk.httpserver.maxConnections system property OpenJDK 11.0.17 adds a new system property, jdk.httpserver.maxConnections , that fixes a security issue where no connection limits were specified for the HttpServer service, which can cause accepted connections and established connections to remain open indefinitely. You can use the jdk.httpserver.maxConnections system property to change the HttpServer service, behavior in the following ways: Set a value of 0 or a negative value, such as -1 , to specify no connection limit for the service. Set a positive value, such as 1 , to cause the service to check any accepted connection against the current count of established connections. If the established connection for the service is reached, the service immediately closes the accepted connection. See JDK-8286918 (JDK Bug System). Monitor deserialization of objects with JFR You can now monitor deserialization of objects with the JDK Flight Recorder (JFR). By default, OpenJDK 11.0.17 disables the jdk.deserialization event setting for JFR. You can enable this feature by updating the event-name element in your JFR configuration. For example: <?xml version="1.0" encoding="UTF-8"?> <configuration version="2.0" description="test"> <event name="jdk.Deserialization"> <setting name="enabled">true</setting> <setting name="stackTrace">false</setting> </event> </configuration> After you enable JFR and you configure JFR to monitor deserialization events, JFR creates an event whenever a monitored application attempts to deserialize an object. The serialization filter mechanism of JFR can then determine whether to accept or reject a deserialized object from the monitored application. See JDK-8261160 (JDK Bug System). SHA-1 Signed JARs With the OpenJDK 11.0.17 release, JARs signed with SHA-1 algorithms are restricted by default and treated as if they were unsigned. These restrictions apply to the following algorithms: Algorithms used to digest, sign, and optionally timestamp the JAR. Signature and digest algorithms of the certificates in the certificate chain of the code signer and the Timestamp Authority, and any Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) responses that are used to verify if those certificates have been revoked. 
Additionally, the restrictions apply to signed Java Cryptography Extension (JCE) providers. To reduce the compatibility risk for JARs that have been previously timestamped, the restriction does not apply to any JAR signed with SHA-1 algorithms and timestamped prior to January 01, 2019 . This exception might be removed in a future OpenJDK release. To determine if your JAR file is impacted by the restriction, you can issue the following command in your CLI: From the output of the command, search for instances of SHA1 , SHA-1 , or disabled . Additionally, search for any warning messages that indicate that the JAR will be treated as unsigned. For example: Consider replacing or re-signing any JARs affected by the new restrictions with stronger algorithms. If your JAR file is impacted by this restriction, you can remove the algorithm and re-sign the file with a stronger algorithm, such as SHA-256 . If you want to remove the restriction on SHA-1 signed JARs for OpenJDK 11.0.17, and you accept the security risks, you can complete the following actions: Modify the java.security configuration file. Alternatively, you can preserve this file and instead create another file with the required configurations. Remove the SHA1 usage SignedJAR & denyAfter 2019-01-01 entry from the jdk.certpath.disabledAlgorithms security property. Remove the SHA1 denyAfter 2019-01-01 entry from the jdk.jar.disabledAlgorithms security property. Note The value of jdk.certpath.disabledAlgorithms in the java.security file might be overridden by the system security policy on RHEL 8 and 9. The values used by the system security policy can be seen in the file /etc/crypto-policies/back-ends/java.config and disabled by either setting security.useSystemPropertiesFile to false in the java.security file or passing -Djava.security.disableSystemPropertiesFile=true to the JVM. These values are not modified by this release, so the values remain the same for releases of OpenJDK. For an example of configuring the java.security file, see Overriding java.security properties for JBoss EAP for OpenShift (Red Hat Customer Portal). See JDK-8269039 (JDK Bug System). System properties for controlling the keep-alive behavior of HTTPURLConnection The OpenJDK 11.0.17 release includes the following new system properties that you can use to control the keep-alive behavior of HTTPURLConnection : http.keepAlive.time.server , which controls connections to servers. http.keepAlive.time.proxy , which controls connections to proxies. Before the OpenJDK 11.0.17 release, a server or a proxy with an unspecified keep-alive time might cause an idle connection to remain open for a period defined by a hard-coded default value. With OpenJDK 11.0.17, you can use system properties to change the default value for the keep-alive time. The keep-alive properties control this behavior by changing the HTTP keep-alive time of either a server or proxy, so that OpenJDK's HTTP protocol handler closes idle connections after a specified number of seconds. Before the OpenJDK 11.0.17 release, the following use cases would lead to specific keep-alive behaviors for HTTPURLConnection : If the server specifies the Connection:keep-alive header and the server's response contains Keep-alive:timeout=N then the OpenJDK keep-alive cache on the client uses a timeout of N seconds, where N is an integer value.
If the server specifies the Connection:keep-alive header, but the server's response does not contain an entry for Keep-alive:timeout=N then the OpenJDK keep-alive cache on the client uses a timeout of 60 seconds for a proxy and 5 seconds for a server. If the server does not specify the Connection:keep-alive header, the OpenJDK keep-alive cache on the client uses a timeout of 5 seconds for all connections. The OpenJDK 11.0.17 release maintains the previously described behavior, but you can now specify the timeouts in the second and third listed use cases by using the http.keepAlive.time.server and http.keepAlive.time.proxy properties, rather than having to rely on the default settings. Note If you set the keep-alive property and the server specifies a keep-alive time for the Keep-Alive response header, the HTTP protocol handler uses the time specified by the server. This situation is identical for a proxy. See JDK-8278067 (JDK Bug System). Updated the default PKCS #12 MAC algorithm The OpenJDK 11.0.17 release updates the default Message Authentication Code (MAC) algorithm for the PKCS #12 keystore to use the SHA-256 cryptographic hash function rather than the SHA-1 function. The SHA-256 function provides a stronger way to secure data. You can view this update in the keystore.pkcs12.macAlgorithm and the keystore.pkcs12.macIterationCount system properties. If you create a keystore with this updated MAC algorithm, and you attempt to use the keystore with an OpenJDK version earlier than OpenJDK 11.0.12, you would receive a java.security.NoSuchAlgorithmException message. To use the keystore with an OpenJDK version that is earlier than OpenJDK 11.0.12, set the keystore.pkcs12.legacy system property to true to revert the MAC algorithm. See JDK-8267880 (JDK Bug System). Deprecated and removed features Review the following release notes to understand pre-existing features that have been either deprecated or removed in the OpenJDK 11.0.17 release: Deprecated Kerberos encryption types OpenJDK 11.0.17 deprecates des3-hmac-sha1 and rc4-hmac Kerberos encryption types. By default, OpenJDK 11.0.17 disables these encryption types, but you can enable them by completing the following action: In the krb5.conf configuration file, set the allow_weak_crypto setting to true . This configuration also enables other encryption types, such as des-cbc-crc and des-cbc-md5 . Warning Before you apply this configuration, consider the risks of enabling all of these weak Kerberos encryption types, such as introducing weak encryption algorithms to your Kerberos authentication mechanism. You can disable a subset of weak encryption types by explicitly listing an encryption type in one of the following settings in the krb5.conf configuration file: default_tkt_enctypes default_tgs_enctypes permitted_enctypes See JDK-8139348 (JDK Bug System). Revised on 2024-05-09 16:45:57 UTC | [
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration version=\"2.0\" description=\"test\"> <event name=\"jdk.Deserialization\"> <setting name=\"enabled\">true</setting> <setting name=\"stackTrace\">false</setting> </event> </configuration>",
"jarsigner -verify -verbose -certs",
"Signed by \"CN=\"Signer\"\" Digest algorithm: SHA-1 (disabled) Signature algorithm: SHA1withRSA (disabled), 2048-bit key WARNING: The jar will be treated as unsigned, because it is signed with a weak algorithm that is now disabled by the security property: jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, DSA keySize < 1024, SHA1 denyAfter 2019-01-01"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.17/openjdk-temurin-features-11-0-17_openjdk |
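Because the container CPU, HttpServer connection limit, and keep-alive changes described above are all driven by command-line flags and system properties, they can be combined on JVM startup. The following invocation is only a sketch with assumed values and an assumed application JAR name: it re-enables the deprecated cpu.shares behavior, caps the built-in HTTP server at 500 connections, and closes idle client connections after 30 seconds for servers and 60 seconds for proxies:
java -XX:+UseContainerCpuShares -Djdk.httpserver.maxConnections=500 -Dhttp.keepAlive.time.server=30 -Dhttp.keepAlive.time.proxy=60 -jar application.jar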
Chapter 2. System integration with Maven | Chapter 2. System integration with Maven Red Hat Process Automation Manager is designed to be used with Red Hat JBoss Middleware Maven Repository and Maven Central repository as dependency sources. Ensure that both dependency sources are available for project builds. Ensure that your project depends on specific versions of an artifact. LATEST or RELEASE are commonly used to specify and manage dependency versions in your application. LATEST refers to the latest deployed (snapshot) version of an artifact. RELEASE refers to the last non-snapshot version release in the repository. By using LATEST or RELEASE , you do not have to update version numbers when a new release of a third-party library is released; however, you lose control over your build being affected by a software release. 2.1. Preemptive authentication for local projects If your environment does not have access to the internet, set up an in-house Nexus and use it instead of Maven Central or other public repositories. To import JARs from the remote Maven repository of Red Hat Process Automation Manager server to a local Maven project, turn on pre-emptive authentication for the repository server. You can do this by configuring authentication for guvnor-m2-repo in the pom.xml file as shown below: <server> <id>guvnor-m2-repo</id> <username>admin</username> <password>admin</password> <configuration> <wagonProvider>httpclient</wagonProvider> <httpConfiguration> <all> <usePreemptive>true</usePreemptive> </all> </httpConfiguration> </configuration> </server> Alternatively, you can set the Authorization HTTP header with Base64-encoded credentials: <server> <id>guvnor-m2-repo</id> <configuration> <httpHeaders> <property> <name>Authorization</name> <!-- Base64-encoded "admin:admin" --> <value>Basic YWRtaW46YWRtaW4=</value> </property> </httpHeaders> </configuration> </server> 2.2. Duplicate GAV detection in Business Central In Business Central, all Maven repositories are checked for any duplicated GroupId , ArtifactId , and Version (GAV) values in a project. If a GAV duplicate exists, the performed operation is canceled. Note Duplicate GAV detection is disabled for projects in Development Mode . To enable duplicate GAV detection in Business Central, go to project Settings General Settings Version and toggle the Development Mode option to OFF (if applicable). Duplicate GAV detection is executed every time you perform the following operations: Save a project definition for the project. Save the pom.xml file. Install, build, or deploy a project. The following Maven repositories are checked for duplicate GAVs: Repositories specified in the <repositories> and <distributionManagement> elements of the pom.xml file. Repositories specified in the Maven settings.xml configuration file. 2.3. Managing duplicate GAV detection settings in Business Central Business Central users with the admin role can modify the list of repositories that are checked for duplicate GroupId , ArtifactId , and Version (GAV) values for a project. Note Duplicate GAV detection is disabled for projects in Development Mode . To enable duplicate GAV detection in Business Central, go to project Settings General Settings Version and toggle the Development Mode option to OFF (if applicable). Procedure In Business Central, go to Menu Design Projects and click the project name. Click the project Settings tab and then click Validation to open the list of repositories.
Select or clear any of the listed repository options to enable or disable duplicate GAV detection. In the future, duplicate GAVs will be reported for only the repositories you have enabled for validation. Note To disable this feature, set the org.guvnor.project.gav.check.disabled system property to true for Business Central at system startup: | [
"<server> <id>guvnor-m2-repo</id> <username>admin</username> <password>admin</password> <configuration> <wagonProvider>httpclient</wagonProvider> <httpConfiguration> <all> <usePreemptive>true</usePreemptive> </all> </httpConfiguration> </configuration> </server>",
"<server> <id>guvnor-m2-repo</id> <configuration> <httpHeaders> <property> <name>Authorization</name> <!-- Base64-encoded \"admin:admin\" --> <value>Basic YWRtaW46YWRtaW4=</value> </property> </httpHeaders> </configuration> </server>",
"~/EAP_HOME/bin/standalone.sh -c standalone-full.xml -Dorg.guvnor.project.gav.check.disabled=true"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/maven-integration-ref_execution-server |
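The Basic YWRtaW46YWRtaW4= value in the second <server> example above is simply the Base64 encoding of the user:password pair. You can generate the value for your own credentials from a shell before pasting it into the configuration; the admin:admin pair below matches the example and is not a recommendation:
echo -n 'admin:admin' | base64
YWRtaW46YWRtaW4=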
Backup and restore | Backup and restore OpenShift Container Platform 4.17 Backing up and restoring your OpenShift Container Platform cluster Red Hat OpenShift Documentation Team | [
"oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\\.openshift\\.io/certificate-not-after}'",
"2022-08-05T14:37:50Zuser@user:~ USD 1",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm cordon USD{node} ; done",
"ci-ln-mgdnf4b-72292-n547t-master-0 node/ci-ln-mgdnf4b-72292-n547t-master-0 cordoned ci-ln-mgdnf4b-72292-n547t-master-1 node/ci-ln-mgdnf4b-72292-n547t-master-1 cordoned ci-ln-mgdnf4b-72292-n547t-master-2 node/ci-ln-mgdnf4b-72292-n547t-master-2 cordoned ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl node/ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl cordoned ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k node/ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k cordoned ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn node/ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn cordoned",
"for node in USD(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm drain USD{node} --delete-emptydir-data --ignore-daemonsets=true --timeout=15s --force ; done",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1",
"Starting pod/ip-10-0-130-169us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:17 UTC, use 'shutdown -c' to cancel. Removing debug pod Starting pod/ip-10-0-150-116us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:29 UTC, use 'shutdown -c' to cancel.",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready control-plane,master 75m v1.30.3 ip-10-0-170-223.ec2.internal Ready control-plane,master 75m v1.30.3 ip-10-0-211-16.ec2.internal Ready control-plane,master 75m v1.30.3",
"oc get csr",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION ip-10-0-179-95.ec2.internal Ready worker 64m v1.30.3 ip-10-0-182-134.ec2.internal Ready worker 64m v1.30.3 ip-10-0-250-100.ec2.internal Ready worker 64m v1.30.3",
"oc get csr",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm uncordon USD{node} ; done",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 59m cloud-credential 4.17.0 True False False 85m cluster-autoscaler 4.17.0 True False False 73m config-operator 4.17.0 True False False 73m console 4.17.0 True False False 62m csi-snapshot-controller 4.17.0 True False False 66m dns 4.17.0 True False False 76m etcd 4.17.0 True False False 76m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready control-plane,master 82m v1.30.3 ip-10-0-170-223.ec2.internal Ready control-plane.master 82m v1.30.3 ip-10-0-179-95.ec2.internal Ready worker 70m v1.30.3 ip-10-0-182-134.ec2.internal Ready worker 70m v1.30.3 ip-10-0-211-16.ec2.internal Ready control-plane,master 82m v1.30.3 ip-10-0-250-100.ec2.internal Ready worker 69m v1.30.3",
"Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key.",
"found a podvolumebackup with status \"InProgress\" during the server starting, mark it as \"Failed\".",
"data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found",
"The generated label name is too long.",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name> 1",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"oc get route s3 -n openshift-storage",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3",
"oc apply -f <restore_cr_filename>",
"oc describe restores.velero.io <restore_name> -n openshift-adp",
"oc project test-restore-application",
"oc get pvc,svc,deployment,secret,configmap",
"NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name>",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\\.crt}' | base64 -w0; echo",
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0 ....gpwOHMwaG9CRmk5a3....FLS0tLS0K",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"false\" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - legacy-aws 1 - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backups.velero.io test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"resources: mds: limits: cpu: \"3\" memory: 128Gi requests: cpu: \"3\" memory: 8Gi",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: \"true\" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: \"50..c-4da1-419f-a16e-ei...49f\" 12 customerKeyEncryptionFile: \"/credentials/customer-key\" 13 signatureVersion: \"1\" 14 profile: \"default\" 15 insecureSkipTLSVerify: \"true\" 16 enableSharedConfig: \"true\" 17 tagging: \"\" 18 checksumAlgorithm: \"CRC32\" 19",
"snapshotLocations: - velero: config: profile: default region: <region> provider: aws",
"dd if=/dev/urandom bs=1 count=32 > sse.key",
"cat sse.key | base64 > sse_encoded.key",
"ln -s sse_encoded.key customer-key",
"oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key",
"apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret",
"spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default",
"echo \"encrypt me please\" > test.txt",
"aws s3api put-object --bucket <bucket> --key test.txt --body test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256",
"s3cmd get s3://<bucket>/test.txt test.txt",
"aws s3api get-object --bucket <bucket> --key test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 downloaded.txt",
"cat downloaded.txt",
"encrypt me please",
"aws s3api get-object --bucket <bucket> --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 --debug velero_download.tar.gz",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: \"default\" s3ForcePathStyle: \"true\" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: \"default\" credential: key: cloud name: cloud-credentials 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: \"\" 1 insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"ibmcloud plugin install cos -f",
"BUCKET=<bucket_name>",
"REGION=<bucket_region> 1",
"ibmcloud resource group-create <resource_group_name>",
"ibmcloud target -g <resource_group_name>",
"ibmcloud target",
"API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default",
"RESOURCE_GROUP=<resource_group> 1",
"ibmcloud resource service-instance-create <service_instance_name> \\ 1 <service_name> \\ 2 <service_plan> \\ 3 <region_name> 4",
"ibmcloud resource service-instance-create test-service-instance cloud-object-storage \\ 1 standard global -d premium-global-deployment 2",
"SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')",
"ibmcloud cos bucket-create \\// --bucket USDBUCKET \\// --ibm-service-instance-id USDSERVICE_INSTANCE_ID \\// --region USDREGION",
"ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\\\"HMAC\\\":true}",
"cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" provider: azure",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"mkdir -p oadp-credrequest",
"echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml",
"ccoctl gcp create-service-accounts --name=<name> --project=<gcp_project_id> --credentials-requests-dir=oadp-credrequest --workload-identity-pool=<pool_id> --workload-identity-provider=<provider_id>",
"oc create namespace <OPERATOR_INSTALL_NS>",
"oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: profile: \"default\" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: \"default\" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3",
"oc apply -f <backup_cr_file_name> 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true",
"oc apply -f <restore_cr_file_name> 1",
"oc label vm <vm_name> app=<vm_name> -n openshift-adp",
"apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2",
"oc apply -f <restore_cr_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1",
"oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: \"true\" profile: noobaa region: <region_name> 3 s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"oc get bsl",
"NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s labelSelector: 5 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 6 - matchLabels: app: <label_1> app: <label_2> app: <label_3>",
"oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11",
"oc get backupStorageLocations -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s EOF",
"schedule: \"*/10 * * * *\"",
"oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'",
"apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1",
"oc apply -f <deletebackuprequest_cr_filename>",
"velero backup delete <backup_name> -n openshift-adp 1",
"pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m",
"not due for full maintenance cycle until 2024-00-00 18:29:4",
"oc get backuprepositories.velero.io -n openshift-adp",
"oc delete backuprepository <backup_repository_name> -n openshift-adp 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3",
"oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'",
"oc get all -n <namespace> 1",
"bash dc-restic-post-restore.sh -> dc-post-restore.sh",
"#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo \"restore: USD1\" label=USD(label_name USD1) echo \"label: USDlabel\" echo Deleting disconnected restore pods delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\" export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH} echo \"Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}\" --output text) 1",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name \"RosaOadpVer1\" --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp --output text) fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc get sub -o yaml redhat-oadp-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: \"2025-01-15T07:18:31Z\" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: \"\" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: \"77363\" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2",
"oc patch subscription redhat-oadp-operator -p '{\"spec\": {\"config\": {\"env\": [{\"name\": \"ROLEARN\", \"value\": \"<role_arn>\"}]}}}' --type='merge'",
"oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift",
"oc create -f <dpa_manifest_file>",
"oc get dpa -n openshift-adp -o yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication status: conditions: - lastTransitionTime: \"2023-07-31T04:48:12Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"export CLUSTER_NAME= <AWS_cluster_name> 1",
"export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{\"\\n\"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\"",
"export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH}",
"echo \"Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"export POLICY_NAME=\"OadpVer1\" 1",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}\" --output text)",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --output text) 1 fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa_sample namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift - aws - csi resourceTimeout: 10m nodeAgent: enable: true uploaderType: kopia backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 1 prefix: <prefix> 2 config: region: <region> 3 profile: \"default\" s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials",
"oc create -f dpa.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-install-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale 1 includedResources: - operatorgroups - subscriptions - namespaces itemOperationTimeout: 1h0m0s snapshotMoveData: false ttl: 720h0m0s",
"oc create -f backup.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-secrets namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - secrets itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc create -f backup-secret.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-apim namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - apimanagers itemOperationTimeout: 1h0m0s snapshotMoveData: false snapshotVolumes: false storageLocation: ts-dpa-1 ttl: 720h0m0s volumeSnapshotLocations: - ts-dpa-1",
"oc create -f backup-apimanager.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-claim namespace: threescale spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3-csi volumeMode: Filesystem",
"oc create -f ts_pvc.yml",
"oc edit deployment system-mysql -n threescale",
"volumeMounts: - name: example-claim mountPath: /var/lib/mysqldump/data - name: mysql-storage mountPath: /var/lib/mysql/data - name: mysql-extra-conf mountPath: /etc/my-extra.d - name: mysql-main-conf mountPath: /etc/my-extra serviceAccount: amp volumes: - name: example-claim persistentVolumeClaim: claimName: example-claim 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: mysql-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true hooks: resources: - name: dumpdb pre: - exec: command: - /bin/sh - -c - mysqldump -u USDMYSQL_USER --password=USDMYSQL_PASSWORD system --no-tablespaces > /var/lib/mysqldump/data/dump.sql 1 container: system-mysql onError: Fail timeout: 5m includedNamespaces: 2 - threescale includedResources: - deployment - pods - replicationControllers - persistentvolumeclaims - persistentvolumes itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component_element: mysql snapshotMoveData: false ttl: 720h0m0s",
"oc create -f mysql.yaml",
"oc get backups.velero.io mysql-backup",
"NAME STATUS CREATED NAMESPACE POD VOLUME UPLOADER TYPE STORAGE LOCATION AGE mysql-backup-4g7qn Completed 30s threescale system-mysql-2-9pr44 example-claim kopia ts-dpa-1 30s mysql-backup-smh85 Completed 23s threescale system-mysql-2-9pr44 mysql-storage kopia ts-dpa-1 30s",
"oc edit deployment backend-redis -n threescale",
"annotations: post.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 100\"] pre.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 0\"]",
"apiVersion: velero.io/v1 kind: Backup metadata: name: redis-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true includedNamespaces: - threescale includedResources: - deployment - pods - replicationcontrollers - persistentvolumes - persistentvolumeclaims itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component: backend threescale_component_element: redis snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc get backups.velero.io redis-backup -o yaml",
"oc get backups.velero.io",
"oc delete project threescale",
"\"threescale\" project deleted successfully",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-installation-restore namespace: openshift-adp spec: backupName: operator-install-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore.yaml",
"oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: s3-credentials namespace: threescale stringData: AWS_ACCESS_KEY_ID: <ID_123456> 1 AWS_SECRET_ACCESS_KEY: <ID_98765544> 2 AWS_BUCKET: <mybucket.example.com> 3 AWS_REGION: <us-east-1> 4 type: Opaque EOF",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-secrets namespace: openshift-adp spec: backupName: operator-resources-secrets excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-secrets.yaml",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-apim namespace: openshift-adp spec: backupName: operator-resources-apim excludedResources: 1 - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-apimanager.yaml",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"deployment.apps/threescale-operator-controller-manager-v2 scaled",
"vi ./scaledowndeployment.sh",
"for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do oc scale deployment/USDdeployment --replicas=0 -n threescale done",
"./scaledowndeployment.sh",
"deployment.apps.openshift.io/apicast-production scaled deployment.apps.openshift.io/apicast-staging scaled deployment.apps.openshift.io/backend-cron scaled deployment.apps.openshift.io/backend-listener scaled deployment.apps.openshift.io/backend-redis scaled deployment.apps.openshift.io/backend-worker scaled deployment.apps.openshift.io/system-app scaled deployment.apps.openshift.io/system-memcache scaled deployment.apps.openshift.io/system-mysql scaled deployment.apps.openshift.io/system-redis scaled deployment.apps.openshift.io/system-searchd scaled deployment.apps.openshift.io/system-sidekiq scaled deployment.apps.openshift.io/zync scaled deployment.apps.openshift.io/zync-database scaled deployment.apps.openshift.io/zync-que scaled",
"oc delete deployment system-mysql -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"system-mysql\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-mysql namespace: openshift-adp spec: backupName: mysql-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io - resticrepositories.velero.io hooks: resources: - name: restoreDB postHooks: - exec: command: - /bin/sh - '-c' - > sleep 30 mysql -h 127.0.0.1 -D system -u root --password=USDMYSQL_ROOT_PASSWORD < /var/lib/mysqldump/data/dump.sql 1 container: system-mysql execTimeout: 80s onError: Fail waitTimeout: 5m itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-mysql.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m",
"oc get pvc -n threescale",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m",
"oc delete deployment backend-redis -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"backend-redis\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-backend namespace: openshift-adp spec: backupName: redis-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-backend.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc get deployment -n threescale",
"./scaledeployment.sh",
"oc get routes -n threescale",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD backend backend-3scale.apps.custom-cluster-name.openshift.com backend-listener http edge/Allow None zync-3scale-api-b4l4d api-3scale-apicast-production.apps.custom-cluster-name.openshift.com apicast-production gateway edge/Redirect None zync-3scale-api-b6sns api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com apicast-staging gateway edge/Redirect None zync-3scale-master-7sc4j master.apps.custom-cluster-name.openshift.com system-master http edge/Redirect None zync-3scale-provider-7r2nm 3scale-admin.apps.custom-cluster-name.openshift.com system-provider http edge/Redirect None zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com system-developer http edge/Redirect None",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI",
"kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s volumeSnapshotLocations: - dpa-sample-1",
"Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: no space left on device",
"oc create -f backup.yaml",
"oc get datauploads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal",
"oc get datauploads <dataupload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: \"\" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: \"2023-11-02T16:57:02Z\" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: \"2023-11-02T16:56:22Z\"",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup>",
"oc create -f restore.yaml",
"oc get datadownloads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal",
"oc get datadownloads <datadownload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: \"\" pvc: mysql status: completionTimestamp: \"2023-11-02T17:01:24Z\" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: \"2023-11-02T17:00:52Z\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<aws_s3_access_key>\" \\ 4 --secret-access-key=\"<aws_s3_secret_access_key>\" \\ 5",
"kopia repository status",
"Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> Storage type: s3 Storage capacity: unbounded Storage config: { \"bucket\": <bucket_name>, \"prefix\": \"velero/kopia/<application_namespace>/\", \"endpoint\": \"s3.amazonaws.com\", \"accessKeyID\": <access_key>, \"secretAccessKey\": \"****************************************\", \"sessionToken\": \"\" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3",
"apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: [\"sleep\"] args: [\"infinity\"]",
"oc apply -f <pod_config_file_name> 1",
"oc describe pod/oadp-mustgather-pod | grep scc",
"openshift.io/scc: anyuid",
"oc -n openshift-adp rsh pod/oadp-mustgather-pod",
"sh-5.1# kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<access_key>\" \\ 4 --secret-access-key=\"<secret_access_key>\" \\ 5 --endpoint=<bucket_endpoint> \\ 6",
"sh-5.1# kopia benchmark hashing",
"Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256",
"sh-5.1# kopia benchmark encryption",
"Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256",
"sh-5.1# kopia benchmark splitter",
"splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"oc describe <velero_cr> <cr_name>",
"oc logs pod/<velero>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi",
"requests: cpu: 500m memory: 128Mi",
"Velero: pod volume restore failed: data path restore failed: Failed to run kopia restore: Failed to copy snapshot data to the target: restore error: copy file: error creating file: open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: no such file or directory",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: \"USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}\" 1 onDelete: delete",
"velero restore <restore_name> --from-backup=<backup_name> --include-resources service.serving.knavtive.dev",
"oc get mutatingwebhookconfigurations",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"oc get backupstoragelocations.velero.io -A",
"velero backup-location get -n <OADP_Operator_namespace>",
"oc get backupstoragelocations.velero.io -n <namespace> -o yaml",
"apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: \"2023-11-03T19:49:04Z\" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: \"24273698\" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: \"true\" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: \"2023-11-10T22:06:46Z\" message: \"BackupStorageLocation \\\"example-dpa-1\\\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54\" phase: Unavailable kind: List metadata: resourceVersion: \"\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>",
"oc delete backups.velero.io <backup> -n openshift-adp",
"velero backup describe <backup-name> --details",
"time=\"2023-02-17T16:33:13Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/user1-backup-check5 error=\"error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label\" logSource=\"/remote-source/velero/app/pkg/backup/backup.go:417\" name=busybox-79799557b5-vprq",
"oc delete backups.velero.io <backup> -n openshift-adp",
"oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1",
"oc delete resticrepository openshift-adp <name_of_the_restic_repository>",
"time=\"2021-12-29T18:29:14Z\" level=info msg=\"1 errors encountered backup up item\" backup=velero/backup65 logSource=\"pkg/backup/backup.go:431\" name=mysql-7d99fc949-qbkds time=\"2021-12-29T18:29:14Z\" level=error msg=\"Error backing up item\" backup=velero/backup65 error=\"pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\\nIs there a repository at the following location?\\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \\n: exit status 1\" error.file=\"/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184\" error.function=\"github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes\" logSource=\"pkg/backup/backup.go:435\" name=mysql-7d99fc949-qbkds",
"\\\"level=error\\\" in line#2273: time=\\\"2023-06-12T06:50:04Z\\\" level=error msg=\\\"error restoring mysql-869f9f44f6-tp5lv: pods\\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity\\\\ \"restricted:v1.24\\\\\\\": privil eged (container \\\\\\\"mysql\\\\ \" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/restore/restore.go:1388\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n velero container contains \\\"level=error\\\" in line#2447: time=\\\"2023-06-12T06:50:05Z\\\" level=error msg=\\\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity \\\\\\\"restricted:v1.24\\\\\\\": privileged (container \\\\ \"mysql\\\\\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\",\\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n]\",",
"oc get dpa -o yaml",
"configuration: restic: enable: true velero: args: restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' 1 defaultPlugins: - gcp - openshift",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls true",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata:",
"oc get pods -n openshift-user-workload-monitoring",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s",
"oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring",
"Error from server (NotFound): configmaps \"user-workload-monitoring-config\" not found",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |",
"oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created",
"oc get svc -n openshift-adp -l app.kubernetes.io/name=velero",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: \"velero\"",
"oc apply -f 3_create_oadp_service_monitor.yaml",
"servicemonitor.monitoring.coreos.com/oadp-service-monitor created",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job=\"openshift-adp-velero-metrics-svc\"}[2h]) > 0 for: 5m labels: severity: warning",
"oc apply -f 4_create_oadp_alert_rule.yaml",
"prometheusrule.monitoring.coreos.com/sample-oadp-alert created",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"oc api-resources",
"apiVersion: oadp.openshift.io/vialpha1 kind: DataProtectionApplication spec: configuration: velero: featureFlags: - EnableAPIGroupVersions",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>",
"cat change-storageclass.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: \"\" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi",
"oc create -f change-storage-class-config",
"oc debug --as-root node/<node_name>",
"sh-4.4# chroot /host",
"export HTTP_PROXY=http://<your_proxy.example.com>:8080",
"export HTTPS_PROXY=https://<your_proxy.example.com>:8080",
"export NO_PROXY=<example.com>",
"sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup",
"found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade",
"oc apply -f enable-tech-preview-no-upgrade.yaml",
"oc get crd | grep backup",
"backups.config.openshift.io 2023-10-25T13:32:43Z etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: etcd-backup-local-storage local: path: /mnt/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get nodes",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 10Gi 1 storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: config.openshift.io/v1alpha1 kind: Backup metadata: name: etcd-recurring-backup spec: etcd: schedule: \"20 4 * * *\" 1 timeZone: \"UTC\" pvcName: etcd-backup-pvc",
"spec: etcd: retentionPolicy: retentionType: RetentionNumber 1 retentionNumber: maxNumberOfBackups: 5 2",
"spec: etcd: retentionPolicy: retentionType: RetentionSize retentionSize: maxSizeOfBackupsGb: 20 1",
"oc create -f etcd-recurring-backup.yaml",
"oc get cronjob -n openshift-etcd",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"EtcdMembersAvailable\")]}{.message}{\"\\n\"}'",
"2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy",
"oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{\"\\t\"}{@.status.providerStatus.instanceState}{\"\\n\"}' | grep -v running",
"ip-10-0-131-183.ec2.internal stopped 1",
"oc get nodes -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\"\\t\"}{range .spec.taints[*]}{.key}{\" \"}' | grep unreachable",
"ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1",
"oc get nodes -l node-role.kubernetes.io/master | grep \"NotReady\"",
"ip-10-0-131-183.ec2.internal NotReady master 122m v1.30.3 1",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.30.3 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.30.3 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.30.3",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 6fc1e7c9db35841d",
"Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc delete node <node_name>",
"oc delete node ip-10-0-131-183.ec2.internal",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc debug node/ip-10-0-131-183.ec2.internal 1",
"sh-4.2# chroot /host",
"sh-4.2# mkdir /var/lib/etcd-backup",
"sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/",
"sh-4.2# mv /var/lib/etcd/ /tmp",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 62bcf33650a7170a",
"Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"single-master-recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl endpoint health",
"https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none>",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+",
"sh-4.2# etcdctl member remove 7a8197040a5126c8",
"Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep openshift-control-plane-2",
"etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m",
"oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret \"etcd-peer-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-metrics-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-openshift-control-plane-2\" deleted",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get clusteroperator baremetal",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.17.0 True False False 3d15h",
"oc delete bmh openshift-control-plane-2 -n openshift-machine-api",
"baremetalhost.metal3.io \"openshift-control-plane-2\" deleted",
"oc delete machine -n openshift-machine-api examplecluster-control-plane-2",
"oc edit machine -n openshift-machine-api examplecluster-control-plane-2",
"finalizers: - machine.machine.openshift.io",
"machine.machine.openshift.io/examplecluster-control-plane-2 edited",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.30.3 openshift-control-plane-1 Ready master 3h24m v1.30.3 openshift-compute-0 Ready worker 176m v1.30.3 openshift-compute-1 Ready worker 176m v1.30.3",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get bmh -n openshift-machine-api",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get nodes",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.30.3 openshift-control-plane-1 Ready master 4h26m v1.30.3 openshift-control-plane-2 Ready master 12m v1.30.3 openshift-compute-0 Ready worker 3h58m v1.30.3 openshift-compute-1 Ready worker 3h58m v1.30.3",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+",
"etcdctl endpoint health --cluster",
"https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms",
"oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision",
"sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp",
"sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp",
"sudo crictl ps | grep kube-controller-manager | egrep -v \"operator|guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp",
"sudo crictl ps | grep kube-scheduler | egrep -v \"operator|guard\"",
"sudo mv -v /var/lib/etcd/ /tmp",
"sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp",
"sudo crictl ps --name keepalived",
"ip -o address | egrep '<api_vip>|<ingress_vip>'",
"sudo ip address del <reported_vip> dev <reported_vip_device>",
"ip -o address | grep <api_vip>",
"sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup",
"...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml",
"oc get nodes -w",
"NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.30.3 host-172-25-75-38 Ready infra,worker 3d20h v1.30.3 host-172-25-75-40 Ready master 3d20h v1.30.3 host-172-25-75-65 Ready master 3d20h v1.30.3 host-172-25-75-74 Ready infra,worker 3d20h v1.30.3 host-172-25-75-79 Ready worker 3d20h v1.30.3 host-172-25-75-86 Ready worker 3d20h v1.30.3 host-172-25-75-98 Ready infra,worker 3d20h v1.30.3",
"ssh -i <ssh-key-path> core@<master-hostname>",
"sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s",
"oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane",
"oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane",
"sudo rm -f /var/lib/ovn-ic/etc/*.db",
"sudo systemctl restart ovs-vswitchd ovsdb-server",
"oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>",
"oc get po -n openshift-ovn-kubernetes",
"oc delete node <node>",
"ssh -i <ssh-key-path> core@<node>",
"sudo mv /var/lib/kubelet/pki/* /tmp",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending",
"adm certificate approve csr-<uuid>",
"oc get nodes",
"oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc adm wait-for-stable-cluster",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig",
"oc whoami",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 2 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/backup_and_restore/index |
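The recovery commands above approve pending certificate signing requests one at a time. As a convenience, every pending CSR can be approved in a single pass. This is a minimal sketch rather than part of the documented procedure; it assumes the oc client is already logged in with cluster-admin privileges: oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve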
Chapter 22. Configuring Postfix MTA by using RHEL system roles | Chapter 22. Configuring Postfix MTA by using RHEL system roles You can use the postfix RHEL system role to consistently manage configurations of the Postfix mail transfer agent (MTA) in an automated fashion. Deploying such configurations is helpful when you need, for example: Stable mail server: enables system administrators to configure a fast and scalable server for sending and receiving emails. Secure communication: supports features such as TLS encryption, authentication, domain blacklisting, and more, to ensure safe email transmission. Improved email management and routing: implements filters and rules so that you have control over your email traffic. Important The postfix_conf dictionary holds key-value pairs of the supported Postfix configuration parameters. Those keys that Postfix does not recognize as supported are ignored. The postfix RHEL system role directly passes the key-value pairs that you provide to the postfix_conf dictionary without verifying their syntax or limiting them. Therefore, the role is especially useful to those familiar with Postfix, and who know how to configure it. 22.1. Configuring Postfix as a null client for only sending outgoing emails A null client is a special configuration, where the Postfix server is set up only to send outgoing emails, but not receive any incoming emails. Such a setup is widely used in scenarios where you need to send notifications, alerts, or logs; but receiving or managing emails is not needed. By using Ansible and the postfix RHEL system role, you can automate this process and remotely configure the Postfix server as a null client for only sending outgoing emails. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage Postfix hosts: managed-node-01.example.com tasks: - name: Install postfix ansible.builtin.package: name: postfix state: present - name: Configure null client for only sending outgoing emails ansible.builtin.include_role: name: rhel-system-roles.postfix vars: postfix_conf: myhostname: server.example.com myorigin: "USDmydomain" relayhost: smtp.example.com inet_interfaces: loopback-only mydestination: "" relay_domains: "{{ lookup('ansible.builtin.pipe', 'postconf -h default_database_type') }}:/etc/postfix/relay_domains" postfix_files: - name: relay_domains postmap: true content: | example.com OK example.net OK The settings specified in the example playbook include the following: myhostname: <server.example.com> The internet hostname of this mail system. Defaults to the fully-qualified domain name (FQDN). myorigin: USDmydomain The domain name that locally-posted mail appears to come from and that locally posted mail is delivered to. Defaults to USDmyhostname . relayhost: <smtp.example.com> The next-hop destination(s) for non-local mail, overrides non-local domains in recipient addresses. Defaults to an empty field. inet_interfaces: loopback-only Defines which network interfaces the Postfix server listens on for incoming email connections. It controls whether and how the Postfix server accepts email from the network. mydestination Defines which domains and hostnames are considered local. 
relay_domains: "hash:/etc/postfix/relay_domains" Specifies the domains that Postfix can forward emails to when it is acting as a relay server (SMTP relay). In this case the domains will be generated by the postfix_files variable. On RHEL 10, you have to use relay_domains: "lmdb:/etc/postfix/relay_domains" . postfix_files Defines a list of files that will be placed in the /etc/postfix/ directory. Those files can be converted into Postfix Lookup Tables if needed. In this case postfix_files generates domain names for the SMTP relay. For details about the role variables and the Postfix configuration parameters used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.postfix/README.md file and the postconf(5) manual page on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.postfix/README.md file /usr/share/doc/rhel-system-roles/postfix/ directory postconf(5) manual page on your system | [
"--- - name: Manage Postfix hosts: managed-node-01.example.com tasks: - name: Install postfix ansible.builtin.package: name: postfix state: present - name: Configure null client for only sending outgoing emails ansible.builtin.include_role: name: rhel-system-roles.postfix vars: postfix_conf: myhostname: server.example.com myorigin: \"USDmydomain\" relayhost: smtp.example.com inet_interfaces: loopback-only mydestination: \"\" relay_domains: \"{{ lookup('ansible.builtin.pipe', 'postconf -h default_database_type') }}:/etc/postfix/relay_domains\" postfix_files: - name: relay_domains postmap: true content: | example.com OK example.net OK",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/configuring-postfix-mta-by-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles |
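To confirm that the role applied the null-client settings, the effective Postfix configuration can be inspected on the managed node. A minimal sketch, assuming SSH access to managed-node-01.example.com; postconf -n prints only the parameters that differ from the compiled-in defaults: ssh managed-node-01.example.com 'postconf -n | grep -E "relayhost|inet_interfaces|mydestination|relay_domains"'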
Chapter 125. KafkaUserTemplate schema reference | Chapter 125. KafkaUserTemplate schema reference Used in: KafkaUserSpec Full list of KafkaUserTemplate schema properties Specify additional labels and annotations for the secret created by the User Operator. An example showing the KafkaUserTemplate apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 # ... 125.1. KafkaUserTemplate schema properties Property Property type Description secret ResourceTemplate Template for KafkaUser resources. The template allows users to specify how the Secret with password or TLS certificates is generated. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaUserTemplate-reference |
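Once the KafkaUser above is reconciled, the User Operator creates a Secret with the same name as the user, and the template labels and annotations should appear in its metadata. A minimal check, assuming the resources live in a namespace named kafka (the namespace name is illustrative): oc get secret my-user -n kafka -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'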
Chapter 10. Configuring trusted certificates for outgoing requests | Chapter 10. Configuring trusted certificates for outgoing requests When Red Hat build of Keycloak communicates with external services through TLS, it has to validate the remote server's certificate in order to ensure it is connecting to a trusted server. This is necessary in order to prevent man-in-the-middle attacks. The certificates of these remote servers or the CA that signed these certificates must be put in a truststore. This truststore is managed by the Keycloak server. The truststore is used when connecting securely to identity brokers, LDAP identity providers, when sending emails, and for backchannel communication with client applications. It is also useful when you want to change the policy on how host names are verified and trusted by the server. By default, a truststore provider is not configured, and any TLS/HTTPS connections fall back to standard Java Truststore configuration. If there is no trust established, then these outgoing requests will fail. 10.1. Configuring the Red Hat build of Keycloak Truststore You can add your truststore configuration by entering this command: bin/kc.[sh|bat] start --spi-truststore-file-file=myTrustStore.jks --spi-truststore-file-password=password --spi-truststore-file-hostname-verification-policy=ANY The following are possible configuration options for this setting: file The path to a Java keystore file. TLS requests need a way to verify the host of the server to which they are talking. This is what the truststore does. The keystore contains one or more trusted host certificates or certificate authorities. This truststore file should only contain public certificates of your secured hosts. This is REQUIRED if any of these properties are defined. password Password of the keystore. This option is REQUIRED if any of these properties are defined. hostname-verification-policy For HTTPS requests, this option verifies the hostname of the server's certificate. Default: WILDCARD ANY means that the hostname is not verified. WILDCARD allows wildcards in subdomain names, such as *.foo.com. When using STRICT , the Common Name (CN) must match the hostname exactly. Please note that this setting does not apply to LDAP secure connections, which require strict hostname checking. type The type of truststore, such as jks , pkcs12 or bcfks . If not provided, the type would be detected based on the truststore file extension or platform default type. 10.1.1. Example of a truststore configuration The following is an example configuration for a truststore that allows you to create trustful connections to all mycompany.org domains and its subdomains: bin/kc.[sh|bat] start --spi-truststore-file-file=path/to/truststore.jks --spi-truststore-file-password=change_me --spi-truststore-file-hostname-verification-policy=WILDCARD | [
"bin/kc.[sh|bat] start --spi-truststore-file-file=myTrustStore.jks --spi-truststore-file-password=password --spi-truststore-file-hostname-verification-policy=ANY",
"bin/kc.[sh|bat] start --spi-truststore-file-file=path/to/truststore.jks --spi-truststore-file-password=change_me --spi-truststore-file-hostname-verification-policy=WILDCARD"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/keycloak-truststore- |
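The myTrustStore.jks file referenced in the commands above must exist before the server starts. A minimal sketch of creating it with the JDK keytool, assuming the CA certificate of the remote service has been exported locally as ca.crt (the file name and alias are illustrative): keytool -importcert -trustcacerts -alias example-ca -file ca.crt -keystore myTrustStore.jks -storepass password -noprompt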
Chapter 1. Introduction to director | Chapter 1. Introduction to director The Red Hat OpenStack Platform (RHOSP) director is a toolset for installing and managing a complete OpenStack environment. Director is based primarily on the OpenStack project TripleO. With director you can install a fully-operational, lean, and robust RHOSP environment that can provision and control bare metal systems to use as OpenStack nodes. Director uses two main concepts: an undercloud and an overcloud. First you install the undercloud, and then use the undercloud as a tool to install and configure the overcloud. 1.1. Undercloud The undercloud is the main management node that contains the Red Hat OpenStack Platform director toolset. It is a single-system OpenStack installation that includes components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the overcloud). The components that form the undercloud have multiple functions: Environment planning The undercloud includes planning functions that you can use to create and assign certain node roles. The undercloud includes a default set of nodes: Compute, Controller, and various Storage roles. You can also design custom roles. Additionally, you can select which OpenStack Platform services to include on each node role, which provides a method to model new node types or isolate certain components on their own host. Bare metal system control The undercloud uses the out-of-band management interface, usually Intelligent Platform Management Interface (IPMI), of each node for power management control and a PXE-based service to discover hardware attributes and install OpenStack on each node. You can use this feature to provision bare metal systems as OpenStack nodes. For a full list of power management drivers, see Appendix A, Power management drivers . Orchestration The undercloud contains a set of YAML templates that represent a set of plans for your environment. The undercloud imports these plans and follows their instructions to create the resulting OpenStack environment. The plans also include hooks that you can use to incorporate your own customizations at certain points in the environment creation process. Undercloud components The undercloud uses OpenStack components as its base tool set. Each component operates within a separate container on the undercloud: OpenStack Identity (keystone) - Provides authentication and authorization for the director components. OpenStack Bare Metal (ironic) and OpenStack Compute (nova) - Manages bare metal nodes. OpenStack Networking (neutron) and Open vSwitch - Control networking for bare metal nodes. OpenStack Image Service (glance) - Stores images that director writes to bare metal machines. OpenStack Orchestration (heat) and Puppet - Provides orchestration of nodes and configuration of nodes after director writes the overcloud image to disk. OpenStack Telemetry (ceilometer) - Performs monitoring and data collection. Telemetry also includes the following components: OpenStack Telemetry Metrics (gnocchi) - Provides a time series database for metrics. OpenStack Telemetry Alarming (aodh) - Provides an alarming component for monitoring. OpenStack Telemetry Event Storage (panko) - Provides event storage for monitoring. OpenStack Workflow Service (mistral) - Provides a set of workflows for certain director-specific actions, such as importing and deploying plans. OpenStack Messaging Service (zaqar) - Provides a messaging service for the OpenStack Workflow Service. 
OpenStack Object Storage (swift) - Provides object storage for various OpenStack Platform components, including: Image storage for OpenStack Image Service Introspection data for OpenStack Bare Metal Deployment plans for OpenStack Workflow Service 1.2. Understanding the overcloud The overcloud is the resulting Red Hat OpenStack Platform (RHOSP) environment that the undercloud creates. The overcloud consists of multiple nodes with different roles that you define based on the OpenStack Platform environment that you want to create. The undercloud includes a default set of overcloud node roles: Controller Controller nodes provide administration, networking, and high availability for the OpenStack environment. A recommended OpenStack environment contains three Controller nodes together in a high availability cluster. A default Controller node role supports the following components. Not all of these services are enabled by default. Some of these components require custom or pre-packaged environment files to enable: OpenStack Dashboard (horizon) OpenStack Identity (keystone) OpenStack Compute (nova) API OpenStack Networking (neutron) OpenStack Image Service (glance) OpenStack Block Storage (cinder) OpenStack Object Storage (swift) OpenStack Orchestration (heat) OpenStack Telemetry Metrics (gnocchi) OpenStack Telemetry Alarming (aodh) OpenStack Telemetry Event Storage (panko) OpenStack Shared File Systems (manila) OpenStack Bare Metal (ironic) MariaDB Open vSwitch Pacemaker and Galera for high availability services. Compute Compute nodes provide computing resources for the OpenStack environment. You can add more Compute nodes to scale out your environment over time. A default Compute node contains the following components: OpenStack Compute (nova) KVM/QEMU OpenStack Telemetry (ceilometer) agent Open vSwitch Storage Storage nodes provide storage for the OpenStack environment. The following list contains information about the various types of Storage node in RHOSP: Ceph Storage nodes - Used to form storage clusters. Each node contains a Ceph Object Storage Daemon (OSD). Additionally, director installs Ceph Monitor onto the Controller nodes in situations where you deploy Ceph Storage nodes as part of your environment. Block storage (cinder) - Used as external block storage for highly available Controller nodes. This node contains the following components: OpenStack Block Storage (cinder) volume OpenStack Telemetry agents Open vSwitch. Object storage (swift) - These nodes provide an external storage layer for OpenStack Swift. The Controller nodes access object storage nodes through the Swift proxy. Object storage nodes contain the following components: OpenStack Object Storage (swift) storage OpenStack Telemetry agents Open vSwitch. 1.3. Understanding high availability in Red Hat OpenStack Platform The Red Hat OpenStack Platform (RHOSP) director uses a Controller node cluster to provide highly available services to your OpenStack Platform environment. For each service, director installs the same components on all Controller nodes and manages the Controller nodes together as a single service. This type of cluster configuration provides a fallback in the event of operational failures on a single Controller node. This provides OpenStack users with a certain degree of continuous operation. The OpenStack Platform director uses some key pieces of software to manage components on the Controller node: Pacemaker - Pacemaker is a cluster resource manager. 
Pacemaker manages and monitors the availability of OpenStack components across all nodes in the cluster. HAProxy - Provides load balancing and proxy services to the cluster. Galera - Replicates the RHOSP database across the cluster. Memcached - Provides database caching. Note From version 13 and later, you can use director to deploy High Availability for Compute Instances (Instance HA). With Instance HA you can automate evacuating instances from a Compute node when the Compute node fails. 1.4. Understanding containerization in Red Hat OpenStack Platform Each OpenStack Platform service on the undercloud and overcloud runs inside an individual Linux container on their respective node. This containerization provides a method to isolate services, maintain the environment, and upgrade Red Hat OpenStack Platform (RHOSP). Red Hat OpenStack Platform 16.0 supports installation on the Red Hat Enterprise Linux 8.1 operating system. Red Hat Enterprise Linux 8.1 no longer includes Docker and provides a new set of tools to replace the Docker ecosystem. This means OpenStack Platform 16.0 replaces Docker with these new tools for OpenStack Platform deployment and upgrades. Podman Pod Manager (Podman) is a container management tool. It implements almost all Docker CLI commands, not including commands related to Docker Swarm. Podman manages pods, containers, and container images. One of the major differences between Podman and Docker is that Podman can manage resources without a daemon running in the background. For more information about Podman, see the Podman website . Buildah Buildah specializes in building Open Containers Initiative (OCI) images, which you use in conjunction with Podman. Buildah commands replicate the contents of a Dockerfile. Buildah also provides a lower-level coreutils interface to build container images, so that you do not require a Dockerfile to build containers. Buildah also uses other scripting languages to build container images without requiring a daemon. For more information about Buildah, see the Buildah website . Skopeo Skopeo provides operators with a method to inspect remote container images, which helps director collect data when it pulls images. Additional features include copying container images from one registry to another and deleting images from registries. Red Hat supports the following methods for managing container images for your overcloud: Pulling container images from the Red Hat Container Catalog to the image-serve registry on the undercloud and then pulling the images from the image-serve registry. When you pull images to the undercloud first, you avoid multiple overcloud nodes simultaneously pulling container images over an external connection. Pulling container images from your Satellite 6 server. You can pull these images directly from the Satellite because the network traffic is internal. This guide contains information about configuring your container image registry details and performing basic container operations. 1.5. Working with Ceph Storage in Red Hat OpenStack Platform It is common for large organizations that use Red Hat OpenStack Platform (RHOSP) to serve thousands of clients or more. Each OpenStack client is likely to have their own unique needs when consuming block storage resources. Deploying glance (images), cinder (volumes), and nova (Compute) on a single node can become impossible to manage in large deployments with thousands of clients. Scaling OpenStack externally resolves this challenge. 
However, there is also a practical requirement to virtualize the storage layer with a solution like Red Hat Ceph Storage so that you can scale the RHOSP storage layer from tens of terabytes to petabytes, or even exabytes of storage. Red Hat Ceph Storage provides this storage virtualization layer with high availability and high performance while running on commodity hardware. While virtualization might seem like it comes with a performance penalty, Ceph stripes block device images as objects across the cluster, meaning that large Ceph Block Device images have better performance than a standalone disk. Ceph Block devices also support caching, copy-on-write cloning, and copy-on-read cloning for enhanced performance. For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage . Note For multi-architecture clouds, Red Hat supports only pre-installed or external Ceph implementation. For more information, see Integrating an Overcloud with an Existing Red Hat Ceph Cluster and Appendix B, Red Hat OpenStack Platform for POWER . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/chap-introduction |
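Because every undercloud and overcloud service runs in its own container, day-to-day inspection relies on the Podman and Skopeo tools described in section 1.4. A brief, hedged illustration run as root on an undercloud node; the image reference is a placeholder rather than a required value: sudo podman ps --format "{{.Names}}: {{.Status}}" ; skopeo inspect docker://registry.redhat.io/rhosp-rhel8/openstack-keystone:16.0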
20.4. OpenSSH Configuration Files | 20.4. OpenSSH Configuration Files OpenSSH has two different sets of configuration files: one for client programs ( ssh , scp , and sftp ) and one for the server daemon ( sshd ). System-wide SSH configuration information is stored in the /etc/ssh/ directory: moduli - Contains Diffie-Hellman groups used for the Diffie-Hellman key exchange which is critical for constructing a secure transport layer. When keys are exchanged at the beginning of an SSH session, a shared, secret value is created which cannot be determined by either party alone. This value is then used to provide host authentication. ssh_config - The system-wide default SSH client configuration file. It is overridden if one is also present in the user's home directory ( ~/.ssh/config ). sshd_config - The configuration file for the sshd daemon. ssh_host_dsa_key - The DSA private key used by the sshd daemon. ssh_host_dsa_key.pub - The DSA public key used by the sshd daemon. ssh_host_key - The RSA private key used by the sshd daemon for version 1 of the SSH protocol. ssh_host_key.pub - The RSA public key used by the sshd daemon for version 1 of the SSH protocol. ssh_host_rsa_key - The RSA private key used by the sshd daemon for version 2 of the SSH protocol. ssh_host_rsa_key.pub - The RSA public key used by the sshd daemon for version 2 of the SSH protocol. User-specific SSH configuration information is stored in the user's home directory within the ~/.ssh/ directory: authorized_keys - This file holds a list of authorized public keys for servers. When the client connects to a server, the server authenticates the client by checking its signed public key stored within this file. id_dsa - Contains the DSA private key of the user. id_dsa.pub - The DSA public key of the user. id_rsa - The RSA private key used by ssh for version 2 of the SSH protocol. id_rsa.pub - The RSA public key used by ssh for version 2 of the SSH protocol. identity - The RSA private key used by ssh for version 1 of the SSH protocol. identity.pub - The RSA public key used by ssh for version 1 of the SSH protocol. known_hosts - This file contains DSA host keys of SSH servers accessed by the user. This file is very important for ensuring that the SSH client is connecting to the correct SSH server. Important If an SSH server's host key has changed, the client notifies the user that the connection cannot proceed until the server's host key is deleted from the known_hosts file using a text editor. Before doing this, however, contact the system administrator of the SSH server to verify the server is not compromised. Refer to the ssh_config and sshd_config man pages for information concerning the various directives available in the SSH configuration files. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-ssh-configfiles |
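The authorized_keys file described above is normally populated by generating a key pair on the client and appending the public half on the server. A minimal sketch; the user and host names are placeholders: ssh-keygen -t rsa ; cat ~/.ssh/id_rsa.pub | ssh user@server.example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'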
function::ansi_cursor_show | function::ansi_cursor_show Name function::ansi_cursor_show - Shows the cursor. Synopsis Arguments None Description Sends ansi code for showing the cursor. | [
"ansi_cursor_show()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ansi-cursor-show |
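A brief, hedged illustration of calling this function from a one-line SystemTap script; it assumes the companion ansi_cursor_hide function from the same tapset and a terminal that honours ANSI escape sequences: stap -e 'probe begin { ansi_cursor_hide(); printf("working...\n"); ansi_cursor_show(); exit() }'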
Chapter 2. Support Requirements | Chapter 2. Support Requirements This chapter outlines the requirements for creating a supported integration of Red Hat Gluster Storage and Red Hat Virtualization. 2.1. Prerequisites Integrating Red Hat Gluster Storage with Red Hat Virtualization has the following requirements: All installations of Red Hat Virtualization and Red Hat Gluster Storage must have valid subscriptions to Red Hat Network channels and Subscription Management repositories. Red Hat Virtualization installations must adhere to the requirements laid out in the Red Hat Virtualization Installation Guide : https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_cockpit_web_interface/index#RHV_requirements . Red Hat Gluster Storage installations must adhere to the requirements laid out in the Red Hat Gluster Storage Installation Guide : https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/installation_guide/chap-planning_red_hat_storage_installation . Red Hat Gluster Storage installations must be completely up to date with the latest patches and upgrades. Refer to the Red Hat Gluster Storage 3.5 Installation Guide to upgrade to the latest version: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/installation_guide/ . The versions of Red Hat Virtualization and Red Hat Gluster Storage integrated must be compatible, according to the table in Section 2.2, "Compatible Versions" . A fully-qualified domain name must be set for each hypervisor and Red Hat Gluster Storage server node. Ensure that correct DNS records exist, and that the fully-qualified domain name is resolvable via both forward and reverse DNS lookup. Red Hat Gluster Storage volumes must either use three-way replication or arbitrated replication. This reduces the risk of split-brain condition developing in the cluster. The following volume types are supported: three-way replicated and distributed replicated volumes ( replica count 3 ) arbitrated replicated or distributed arbitrated replicated volumes ( replica 3 arbiter 1 ) Server-side quorum, client-side quorum, and sharding are all required for a supported configuration. These are enabled by default in the virt tuning profile covered in Chapter 4, Hosting Virtual Machine Images on Red Hat Gluster Storage volumes . See Preventing Split-brain for information about how quorum settings help prevent split brain. See Creating Sharded Volumes for information about why sharding reduces heal and geo-replication time. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/chap-support_requirements |
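A volume that meets the requirements above would be a three-way replicated volume with the virt tuning profile applied, as covered in Chapter 4. A minimal sketch with placeholder host and brick names: gluster volume create vmstore replica 3 server1:/rhgs/brick1/vmstore server2:/rhgs/brick1/vmstore server3:/rhgs/brick1/vmstore ; gluster volume set vmstore group virt ; gluster volume start vmstore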
Chapter 7. EAP Operator for Automating Application Deployment on OpenShift | Chapter 7. EAP Operator for Automating Application Deployment on OpenShift EAP operator is a JBoss EAP-specific controller that extends the OpenShift API. You can use the EAP operator to create, configure, manage, and seamlessly upgrade instances of complex stateful applications. The EAP operator manages multiple JBoss EAP Java application instances across the cluster. It also ensures safe transaction recovery in your application cluster by verifying all transactions are completed before scaling down the replicas and marking a pod as clean for termination. The EAP operator uses StatefulSet for the appropriate handling of Jakarta Enterprise Beans remoting and transaction recovery processing. The StatefulSet ensures persistent storage and network hostname stability even after pods are restarted. You must install the EAP operator using OperatorHub, which can be used by OpenShift cluster administrators to discover, install, and upgrade operators. In OpenShift Container Platform 4, you can use the Operator Lifecycle Manager (OLM) to install, update, and manage the lifecycle of all operators and their associated services running across multiple clusters. The OLM runs by default in OpenShift Container Platform 4. It aids cluster administrators in installing, upgrading, and granting access to operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install operators, as well as grant specific projects access to use the catalog of operators available on the cluster. For more information about operators and the OLM, see the OpenShift documentation . 7.1. Installing EAP Operator Using the Web Console As a JBoss EAP cluster administrator, you can install an EAP operator from Red Hat OperatorHub using the OpenShift Container Platform web console. You can then subscribe the EAP operator to one or more namespaces to make it available for developers on your cluster. Here are a few points you must be aware of before installing the EAP operator using the web console: Installation Mode: Choose All namespaces on the cluster (default) to have the operator installed on all namespaces or choose individual namespaces, if available, to install the operator only on selected namespaces. Update Channel: If the EAP operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy: You can choose automatic or manual updates. If you choose automatic updates for the EAP operator, when a new version of the operator is available, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of EAP operator. If you choose manual updates, when a newer version of the operator is available, the OLM creates an update request. You must then manually approve the update request to have the operator updated to the new version. Note The following procedure might change in accordance with the modifications in the OpenShift Container Platform web console. For the latest and most accurate procedure, see the Installing from the OperatorHub using the web console section in the latest version of the Working with Operators in OpenShift Container Platform guide. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. 
Procedure In the OpenShift Container Platform web console, navigate to Operators -> OperatorHub . Scroll down or type EAP into the Filter by keyword box to find the EAP operator. Select JBoss EAP operator and click Install . On the Create Operator Subscription page: Select one of the following: All namespaces on the cluster (default) installs the operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster installs the operator in a specific, single namespace that you choose. The operator is made available for use only in this single namespace. Select an Update Channel . Select Automatic or Manual approval strategy, as described earlier. Click Subscribe to make the EAP operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a manual approval strategy, the subscription's upgrade status remains Upgrading until you review and approve its install plan. After you approve the install plan on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an automatic approval strategy, the upgrade status moves to Up to date without intervention. After the subscription's upgrade status is Up to date , select Operators Installed Operators to verify that the EAP ClusterServiceVersion (CSV) shows up and its Status changes to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status displayed is InstallSucceeded in the openshift-operators namespace. In other namespaces the status displayed is Copied . If the Status field does not change to InstallSucceeded , check the logs in any pod in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 7.2. Installing EAP Operator Using the CLI As a JBoss EAP cluster administrator, you can install an EAP operator from Red Hat OperatorHub using the OpenShift Container Platform CLI. You can then subscribe the EAP operator to one or more namespaces to make it available for developers on your cluster. When installing the EAP operator from the OperatorHub using the CLI, use the oc command to create a Subscription object. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc tool in your local system. Procedure View the list of operators available to the cluster from the OperatorHub: Create a Subscription object YAML file (for example, eap-operator-sub.yaml ) to subscribe a namespace to your EAP operator. The following is an example Subscription object YAML file: 1 Name of the operator to subscribe to. 2 The EAP operator is provided by the redhat-operators CatalogSource. For information about channels and approval strategy, see the web console version of this procedure. Create the Subscription object from the YAML file: The EAP operator is successfully installed. At this point, the OLM is aware of the EAP operator. A ClusterServiceVersion (CSV) for the operator appears in the target namespace, and APIs provided by the EAP operator is available for creation. 7.3. The eap-s2i-build template for creating application images Use the eap-s2i-build template to create your application images. 
The eap-s2i-build template adds several parameters to configure the location of the application source repository and the EAP S2I images to use to build your application. The APPLICATION_IMAGE parameter in the eap-s2i-build template specifies the name of the imagestream corresponding to the application image. For example, if you created an application image named my-app from the eap-s2i-build template, you can use the my-app:latest imagestreamtag from the my-app imagestream to deploy your application. For more information about the parameters used in the eap-s2i-build template, see Building an application image using eap-s2i-build template . With this template, the EAP operator can seamlessly upgrade your applications deployed on OpenShift. To enable seamless upgrades, you must configure a webhook in your GitHub repository and specify the webhook in the build configuration. The webhook notifies OpenShift when your repository is updated and a new build is triggered. You can use this template to build an application image using an imagestream for any JBoss EAP version, such as JBoss EAP 7.4, JBoss EAP XP, or JBoss EAP CD. Additional resources Building an application image using eap-s2i-build template . 7.4. Building an application image using eap-s2i-build template The eap-s2i-build template adds several parameters to configure the location of your application source repository and the EAP S2I images to use to build the application. With this template, you can use an imagestream for any JBoss EAP version, such as JBoss EAP 7.4, JBoss EAP XP, or JBoss EAP CD. Procedure Import EAP images in OpenShift. For more information, see Importing the OpenShift image streams and templates for JBoss EAP XP . Configure the imagestream to receive updates about the changes in the application imagestream and to trigger new builds. For more information, see Configuring periodic importing of imagestreamtags . Create the eap-s2i-build template for building the application image using EAP S2I images: This eap-s2i-build template creates two build configurations and two imagestreams corresponding to the intermediate build artifacts and the final application image. Process the eap-s2i-build template with parameters to create the resources for the final application image. The following example creates an application image, my-app : 1 The name for the application imagestream. The application image is tagged with the latest tag. 2 The imagestreamtag for EAP builder image. 3 The imagestreamtag for EAP runtime image. 4 The namespace in which the imagestreams for Red Hat Middleware images are installed. If omitted, the openshift namespace is used. Modify this only if you have installed the imagestreams in a namespace other than openshift . 5 The Git source URL of your application. 6 The Git branch or tag reference 7 The path within the Git repository that contains the application to build. Prepare the application image for deployment using the EAP operator. Configure the WildFlyServer resource: Apply the settings and let the EAP operator create a new WildFlyServer resource that references this application image: View the WildFlyServer resource with the following command: Additional resources For more information about importing an application imagestream, see Importing the latest OpenShift image streams and templates for JBoss EAP XP . For more information about periodic importing of imagestreams, see Configuring periodic importing of imagestreamtags . 7.5. 
Deploying a Java Application on OpenShift Using the EAP Operator The EAP operator helps automate Java application deployment on OpenShift. For information about the EAP operator APIs, see EAP Operator: API Information . Prerequisites You have installed EAP operator. For more information about installing the EAP operator, see Installing EAP Operator Using the Webconsole and Installing EAP Operator Using the CLI . You have built a Docker image of the user application using JBoss EAP for OpenShift Source-to-Image (S2I) build image. The APPLICATION_IMAGE parameter in your eap-s2i-build template has an imagestream, if you want to enable automatic upgrade of your application after it is deployed on OpenShift. For more information about building your application image using the eap-s2i-build template, see Building an application image using eap-s2i-build template . You have created a Secret object, if your application's CustomResourceDefinition (CRD) file references one. For more information about creating a new Secret object, see Creating a Secret . You have created a ConfigMap , if your application's CRD file references one. For information about creating a ConfigMap , see Creating a ConfigMap . You have created a ConfigMap from the standalone.xml file, if you choose to do so. For information about creating a ConfigMap from the standalone.xml file, see Creating a ConfigMap from a standalone.xml File . Note Providing a standalone.xml file from the ConfigMap is not supported in JBoss EAP 7. Procedure Open your web browser and log on to OperatorHub. Select the Project or namespace you want to use for your Java application. Navigate to Installed Operator and select JBoss EAP operator . On the Overview tab, click the Create Instance link. Specify the application image details. The application image specifies the Docker image that contains the Java application. The image must be built using the JBoss EAP for OpenShift Source-to-Image (S2I) build image. If the applicationImage field corresponds to an imagestreamtag, any change to the image triggers an automatic upgrade of the application. You can provide any of the following references of the JBoss EAP for OpenShift application image: The name of the image: mycomp/myapp A tag: mycomp/myapp:1.0 A digest: mycomp/myapp:@sha256:0af38bc38be93116b6a1d86a9c78bd14cd527121970899d719baf78e5dc7bfd2 An imagestreamtag: my-app:latest Specify the size of the application. For example: Configure the application environment using the env spec . The environment variables can come directly from values, such as POSTGRESQL_SERVICE_HOST or from Secret objects, such as POSTGRESQL_USER. For example: Complete the following optional configurations that are relevant to your application deployment: Specify the storage requirements for the server data directory. For more information, see Configuring Persistent Storage for Applications . Specify the name of the Secret you created in WildFlyServerSpec to mount it as a volume in the pods running the application. For example: The Secret is mounted at /etc/secrets/<secret name> and each key/value is stored as a file. The name of the file is the key and the content is the value. The Secret is mounted as a volume inside the pod. The following example demonstrates commands that you can use to find key values: Note Modifying a Secret object might lead to project inconsistencies. Instead of modifying an existing Secret object, Red Hat recommends creating a new object with the same content as that of the old one. 
You can then update the content as required and change the reference in operator custom resource (CR) from old to new. This is considered a new CR update and the pods are reloaded. Specify the name of the ConfigMap you created in WildFlyServerSpec to mount it as a volume in the pods running the application. For example: The ConfigMap is mounted at /etc/configmaps/<configmap name> and each key/value is stored as a file. The name of the file is the key and the content is the value. The ConfigMap is mounted as a volume inside the pod. To find the key values: Note Modifying a ConfigMap might lead to project inconsistencies. Instead of modifying an existing ConfigMap , Red Hat recommends creating a new ConfigMap with the same content as that of the old one. You can then update the content as required and change the reference in operator custom resource (CR) from old to new. This is considered a new CR update and the pods are reloaded. If you choose to have your own standalone ConfigMap , provide the name of the ConfigMap as well as the key for the standalone.xml file: Note Creating a ConfigMap from the standalone.xml file is not supported in JBoss EAP 7. If you want to disable the default HTTP route creation in OpenShift, set disableHTTPRoute to true : 7.5.1. Creating a Secret If your application's CustomResourceDefinition (CRD) file references a Secret , you must create the Secret before deploying your application on OpenShift using the EAP operator. Procedure To create a Secret : 7.5.2. Creating a ConfigMap If your application's CustomResourceDefinition (CRD) file references a ConfigMap in the spec.ConfigMaps field, you must create the ConfigMap before deploying your application on OpenShift using the EAP operator. Procedure To create a configmap: 7.5.3. Creating a ConfigMap from a standalone.xml File You can create your own JBoss EAP standalone configuration instead of using the one in the application image that comes from JBoss EAP for OpenShift Source-to-Image (S2I). The standalone.xml file must be put in a ConfigMap that is accessible by the operator. Note NOTE: Providing a standalone.xml file from the ConfigMap is not supported in JBoss EAP 7. Procedure To create a ConfigMap from the standalone.xml file: 7.5.4. Configuring Persistent Storage for Applications If your application requires persistent storage for some data, such as, transaction or messaging logs that must persist across pod restarts, configure the storage spec. If the storage spec is empty, an EmptyDir volume is used by each pod of the application. However, this volume does not persist after its corresponding pod is stopped. Procedure Specify volumeClaimTemplate to configure resources requirements to store the JBoss EAP standalone data directory. The name of the template is derived from the name of JBoss EAP. The corresponding volume is mounted in ReadWriteOnce access mode. The persistent volume that meets this storage requirement is mounted on the /eap/standalone/data directory. 7.6. Deploying the Red Hat Single Sign-On-enabled image by using EAP operator The EAP operator helps you to deploy an EAP application image with Red Hat Single Sign-On enabled on OpenShift. To deploy the application image, configure the environment variables and secrets listed in the table. Prerequisites You have installed the EAP operator. For more information about installing the EAP operator, see Installing EAP operator using the web console and Installing EAP operator using the CLI . 
You have built the EAP application image by using the eap74-sso-s2i template. For information about building the EAP application image, see Building an application image . Procedure Remove the DeploymentConfig file, created by the eap74-sso-s2i template, from the location where you have built the EAP application image. In the env field of the EAP operator's WildFlyServer resource, configure all the environment variables and secrets . Example configuration Note Ensure that all environment variables and secrets match the image configuration. The value of the parameter SSO_URL varies depending on the user of the OpenShift cluster. The EAP operator mounts the secrets in the /etc/secret directory, whereas the eap74-sso template mounts the secrets in the /etc directory. Save the EAP operator's WildFlyServer resource configuration. 7.7. Viewing metrics of an application using the EAP operator You can view the metrics of an application deployed on OpenShift using the EAP operator. When your cluster administrator enables metrics monitoring in your project, the EAP operator automatically displays the metrics on the OpenShift console. Prerequisites Your cluster administrator has enabled monitoring for your project. For more information, see Enabling monitoring for user-defined projects . Procedure In the OpenShift Container Platform web console, navigate to Monitoring -> Metrics . On the Metrics screen, type the name of your application in the text box to select your application. The metrics for your application appear on the screen. Note All metrics related to JBoss EAP application server are prefixed with jboss . For example, jboss_undertow_request_count_total . 7.8. Uninstalling EAP Operator Using Web Console To delete, or uninstall, EAP operator from your cluster, you can delete the subscription to remove it from the subscribed namespace. You can also remove the EAP operator's ClusterServiceVersion (CSV) and deployment. Note To ensure data consistency and safety, scale down the number of pods in your cluster to 0 before uninstalling the EAP operator. You can uninstall the EAP operator using the web console. Warning If you decide to delete the entire wildflyserver definition ( oc delete wildflyserver <deployment_name> ), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked. Procedure From the Operators -> Installed Operators page, select JBoss EAP . On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu. When prompted by the Remove Operator Subscription window, optionally select the Also completely remove the Operator from the selected namespace check box if you want all components related to the installation to be removed. This removes the CSV, which in turn removes the pods, deployments, custom resource definitions (CRDs), and custom resources (CRs) associated with the operator. Click Remove . The EAP operator stops running and no longer receives updates. 7.9. Uninstalling EAP Operator using the CLI To delete, or uninstall, the EAP operator from your cluster, you can delete the subscription to remove it from the subscribed namespace. You can also remove the EAP operator's ClusterServiceVersion (CSV) and deployment. 
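Both the web console and CLI uninstall paths assume that the application has already been scaled down to zero, as the notes in these sections advise. A minimal sketch of that step, assuming a WildFlyServer resource named my-app as in the earlier examples:

oc patch wildflyserver my-app --type=merge -p '{"spec":{"replicas":0}}'
oc get pods -w

Wait until no my-app pods remain before deleting the subscription and ClusterServiceVersion.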
Note To ensure data consistency and safety, scale down the number of pods in your cluster to 0 before uninstalling the EAP operator. You can uninstall the EAP operator using the command line. When using the command line, you uninstall the operator by deleting the subscription and CSV from the target namespace. Warning If you decide to delete the entire wildflyserver definition ( oc delete wildflyserver <deployment_name> ), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked. Procedure Check the current version of the EAP operator subscription in the currentCSV field: Delete the EAP operator's subscription: Delete the CSV for the EAP operator in the target namespace using the currentCSV value from the step: 7.10. EAP Operator for Safe Transaction Recovery For certain types of transactions, EAP operator ensures data consistency before terminating your application cluster by verifying that all transactions are completed before scaling down the replicas and marking a pod as clean for termination. Note Some scenarios are not supported. For more information about the unsupported scenarios, see Unsupported Transaction Recovery Scenarios . This means that if you want to remove the deployment safely without data inconsistencies, you must first scale down the number of pods to 0, wait until all pods are terminated, and only then delete the wildflyserver instance. Warning If you decide to delete the entire wildflyserver definition ( oc delete wildflyserver <deployment_name> ), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked. When the scaledown process begins the pod state ( oc get pod <pod_name> ) is still marked as Running , because the pod must complete all the unfinished transactions, including the remote enterprise beans calls that target it. If you want to monitor the state of the scaledown process, observe the status of the wildflyserver instance. For more information, see Monitoring the Scaledown Process . For information about pod statuses during scaledown, see Pod Status During Scaledown . 7.10.1. StatefulSets for Stable Network Host Names The EAP operator that manages the wildflyserver creates a StatefulSet as an underlying object managing the JBoss EAP pods. A StatefulSet is the workload API object that manages stateful applications. It manages the deployment and scaling of a set of pods, and provides guarantees about the ordering and uniqueness of these pods. The StatefulSet ensures that the pods in a cluster are named in a predefined order. It also ensures that pod termination follows the same order. For example, let us say, pod-1 has a transaction with heuristic outcome, and so is in the state of SCALING_DOWN_RECOVERY_DIRTY . Even if pod-0 is in the state of SCALING_DOWN_CLEAN , it is not terminated before pod-1. Until pod-1 is clean and is terminated, pod-0 remains in the SCALING_DOWN_CLEAN state. 
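While such a scaledown is in progress, the per-pod state can be read directly from the WildFlyServer status. A rough sketch, assuming a resource named <name>; the status field paths shown are an assumption based on current operator versions, so verify them first, for example with oc explain wildflyserver.status:

oc get wildflyserver <name> -o jsonpath='{range .status.pods[*]}{.name}{"\t"}{.state}{"\n"}{end}'

Each line of output pairs a pod name with one of the scaledown states described below.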
However, even if pod-0 is in the SCALING_DOWN_CLEAN state, it does not receive any new request and is practically idle. Note Decreasing the replica size of the StatefulSet or deleting the pod itself has no effect and such changes are reverted. 7.10.2. Monitoring the Scaledown Process If you want to monitor the state of the scaledown process, you must observe the status of the wildflyserver instance. For more information about the different pod statuses during scaledown, see Pod Status During Scaledown . Procedure To observe the state of the scaledown process: The WildFlyServer.Status.Scalingdown Pods and WildFlyServer.Status.Replicas fields shows the overall state of the active and non-active pods. The Scalingdown Pods field shows the number of pods which are about to be terminated when all the unfinished transactions are complete. The WildFlyServer.Status.Replicas field shows the current number of running pods. The WildFlyServer.Spec.Replicas field shows the number of pods in ACTIVE state. If there are no pods in scaledown process the numbers of pods in the WildFlyServer.Status.Replicas and WildFlyServer.Spec.Replicas fields are equal. 7.10.2.1. Pod Status During Scaledown The following table describes the different pod statuses during scaledown: Table 7.1. Pod Status Description Pod Status Description ACTIVE The pod is active and processing requests. SCALING_DOWN_RECOVERY_INVESTIGATION The pod is about to be scaled down. The scale-down process is under investigation about the state of transactions in JBoss EAP. SCALING_DOWN_RECOVERY_DIRTY JBoss EAP contains some incomplete transactions. The pod cannot be terminated until they are cleaned. The transaction recovery process is periodically run at JBoss EAP and it waits until the transactions are completed SCALING_DOWN_CLEAN The pod is processed by transaction scaled down processing and is marked as clean to be removed from the cluster. 7.10.3. Scaling Down During Transactions with Heuristic Outcomes When the outcome of a transaction is unknown, automatic transaction recovery is impossible. You must then manually recover your transactions. Prerequisites The status of your pod is stuck at SCALING_DOWN_RECOVERY_DIRTY . Procedure Access your JBoss EAP instance using CLI. Resolve all the heuristics transaction records in the transaction object store. For more information, see Recovering Heuristic Outcomes in the Managing Transactions on JBoss EAP . Remove all records from the enterprise bean client recovery folder. Remove all files from the pod enterprise bean client recovery directory: The status of your pod changes to SCALING_DOWN_CLEAN and the pod is terminated. 7.10.4. Configuring the transactions subsystem to use the JDBC storage for transaction log In cases where the system does not provide a file system to store transaction logs , use the JBoss EAP S2I image to configure the JDBC object store. Important S2I environment variables are not usable when JBoss EAP is deployed as a bootable JAR. In this case, you must create a Galleon layer or configure a CLI script to make the necessary configuration changes. The JDBC object store can be set up with the environment variable TX_DATABASE_PREFIX_MAPPING . This variable has the same structure as DB_SERVICE_PREFIX_MAPPING . Prerequisite You have created a datasource based on the value of the environment variables. You have ensured consistent data reads and writes permissions exist between the database and the transaction manager communicating over the JDBC object store. 
For more information see configuring JDBC data sources Procedure Set up and configure the JDBC object store through the S2I environment variable. Example Verification You can verify both the datasource configuration and transaction subsystem configuration by checking the standalone-openshift.xml configuration file oc rsh <podname> cat /opt/eap/standalone/configuration/standalone-openshift.xml . Expected output: Additional resources For more information about creating datasources by using either the management console or the management CLI, see Creating Datasources in the JBoss EAP Configuration Guide . 7.11. Automatically scaling pods with the horizontal pod autoscaler HPA With EAP operator, you can use a horizontal pod autoscaler HPA to automatically increase or decrease the scale of an EAP application based on metrics collected from the pods that belong to that EAP application. Note Using HPA ensures that transaction recovery is still handled when a pod is scaled down. Procedure Configure the resources: Important You must specify the resource limits and requests for containers in a pod for autoscaling to work as expected. Create the Horizontal pod autoscaler: Verification You can verify the HPA behavior by checking the replicas. The number of replicas increase or decrease depending on the increase or decrease of the workload. Additional resources https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html-single/nodes/index#nodes-pods-autoscaling 7.12. Jakarta Enterprise Beans Remoting on OpenShift For JBoss EAP to work correctly with enterprise bean remoting calls between different JBoss EAP clusters on OpenShift, you must understand the enterprise bean remoting configuration options on OpenShift. Note When deploying on OpenShift, consider the use of the EAP operator. The EAP operator uses StatefulSet for the appropriate handling of enterprise bean remoting and transaction recovery processing. The StatefulSet ensures persistent storage and network hostname stability even after pods are restarted. Network hostname stability is required when the JBoss EAP instance is contacted using an enterprise bean remote call with transaction propagation. The JBoss EAP instance must be reachable under the same hostname even if the pod restarts. The transaction manager, which is a stateful component, binds the persisted transaction data to a particular JBoss EAP instance. Because the transaction log is bound to a specific JBoss EAP instance, it must be completed in the same instance. To prevent data loss when the JDBC transaction log store is used, make sure your database provides data-consistent reads and writes. Consistent data reads and writes are important when the database is scaled horizontally with multiple instances. An enterprise bean remote caller has two options to configure the remote calls: Define a remote outbound connection. For more information, see Configuring a Remote Outbound Connection . Use a programmatic JNDI lookup for the bean at the remote server. For more information, see Using Remote Jakarta Enterprise Beans Clients . You must reconfigure the value representing the address of the target node depending on the enterprise bean remote call configuration method. Note The name of the target enterprise bean for the remote call must be the DNS address of the first pod. The StatefulSet behaviour depends on the ordering of the pods. The pods are named in a predefined order. 
For example, if you scale your application to three replicas, your pods have names such as eap-server-0 , eap-server-1 , and eap-server-2 . The EAP operator also uses a headless service that ensures a specific DNS hostname is assigned to the pod. If the application uses the EAP operator, a headless service is created with a name such as eap-server-headless . In this case, the DNS name of the first pod is eap-server-0.eap-server-headless . The use of the hostname eap-server-0.eap-server-headless ensures that the enterprise bean call reaches any EAP instance connected to the cluster. A bootstrap connection is used to initialize the Jakarta Enterprise Beans client, which gathers the structure of the EAP cluster as the step. 7.12.1. Configuring Jakarta Enterprise Beans on OpenShift You must configure the JBoss EAP servers that act as callers for enterprise bean remoting. The target server must configure a user with permission to receive the enterprise bean remote calls. Prerequisites You have used the EAP operator and the supported JBoss EAP for OpenShift S2I image for deploying and managing the JBoss EAP application instances on OpenShift. The clustering is set correctly. For more information about JBoss EAP clustering, see the Clustering section. Procedure Create a user in the target server with permission to receive the enterprise bean remote calls: Configure the caller JBoss EAP application server. Create the eap-config.xml file in USDJBOSS_HOME/standalone/configuration using the custom configuration functionality. For more information, see Custom Configuration . Configure the caller JBoss EAP application server with the wildfly.config.url property: Note If you use the following example for your configuration, replace the >>PASTE_... _HERE<< with username and password you configured. Example Configuration <configuration> <authentication-client xmlns="urn:elytron:1.0"> <authentication-rules> <rule use-configuration="jta"> <match-abstract-type name="jta" authority="jboss" /> </rule> </authentication-rules> <authentication-configurations> <configuration name="jta"> <sasl-mechanism-selector selector="DIGEST-MD5" /> <providers> <use-service-loader /> </providers> <set-user-name name="PASTE_USER_NAME_HERE" /> <credentials> <clear-password password="PASTE_PASSWORD_HERE" /> </credentials> <set-mechanism-realm name="ApplicationRealm" /> </configuration> </authentication-configurations> </authentication-client> </configuration> | [
"oc get packagemanifests -n openshift-marketplace | grep eap NAME CATALOG AGE eap Red Hat Operators 43d",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: eap namespace: openshift-operators spec: channel: stable installPlanApproval: Automatic name: eap 1 source: redhat-operators 2 sourceNamespace: openshift-marketplace",
"oc apply -f eap-operator-sub.yaml oc get csv -n openshift-operators NAME DISPLAY VERSION REPLACES PHASE eap-operator.v1.0.0 JBoss EAP 1.0.0 Succeeded",
"oc replace --force -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/master/eap-s2i-build.yaml",
"oc process eap-s2i-build -p APPLICATION_IMAGE=my-app \\ 1 -p EAP_IMAGE=jboss-eap-xp1-openjdk11-openshift:1.0 \\ 2 -p EAP_RUNTIME_IMAGE=jboss-eap-xp1-openjdk11-runtime-openshift:1.0 \\ 3 -p EAP_IMAGESTREAM_NAMESPACE=USD(oc project -q) \\ 4 -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts.git \\ 5 -p SOURCE_REPOSITORY_REF=xp-1.0.x \\ 6 -p CONTEXT_DIR=microprofile-config | oc create -f - 7",
"cat > my-app.yaml<<EOF apiVersion: wildfly.org/v1alpha1 kind: WildFlyServer metadata: name: my-app spec: applicationImage: 'my-app:latest' replicas: 1 EOF",
"oc apply -f my-app.yaml",
"oc get wfly my-app",
"spec: replicas:2",
"spec: env: - name: POSTGRESQL_SERVICE_HOST value: postgresql - name: POSTGRESQL_SERVICE_PORT value: '5432' - name: POSTGRESQL_DATABASE valueFrom: secretKeyRef: key: database-name name: postgresql - name: POSTGRESQL_USER valueFrom: secretKeyRef: key: database-user name: postgresql - name: POSTGRESQL_PASSWORD valueFrom: secretKeyRef: key: database-password name: postgresql",
"spec: secrets: - my-secret",
"ls /etc/secrets/my-secret/ my-key my-password cat /etc/secrets/my-secret/my-key devuser cat /etc/secrets/my-secret/my-password my-very-secure-pasword",
"spec: configMaps: - my-config",
"ls /etc/configmaps/my-config/ key1 key2 cat /etc/configmaps/my-config/key1 value1 cat /etc/configmaps/my-config/key2 value2",
"standaloneConfigMap: name: clusterbench-config-map key: standalone-openshift.xml",
"spec: disableHTTPRoute: true",
"oc create secret generic my-secret --from-literal=my-key=devuser --from-literal=my-password='my-very-secure-pasword'",
"oc create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2 configmap/my-config created",
"oc create configmap clusterbench-config-map --from-file examples/clustering/config/standalone-openshift.xml configmap/clusterbench-config-map created",
"spec: storage: volumeClaimTemplate: spec: resources: requests: storage: 3Gi",
"cat > my-app.yaml<<EOF apiVersion: wildfly.org/v1alpha1 kind: WildFlyServer metadata: name: my-app spec: applicationImage: 'my-app:latest' replicas: 1 env: - name: SSO_URL value: https://secure-sso-sso-app-demo.openshift32.example.com/auth - name: SSO_REALM value: eap-demo - name: SSO_PUBLIC_KEY value: realm-public-key - name: SSO_USERNAME value: mySsoUser - name: SSO_PASSWORD value: 6fedmL3P - name: SSO_SAML_KEYSTORE value: /etc/secret/sso-app-secret/keystore.jks - name: SSO_SAML_KEYSTORE_PASSWORD value: mykeystorepass - name: SSO_SAML_CERTIFICATE_NAME value: jboss - name: SSO_BEARER_ONLY value: true - name: SSO_CLIENT value: module-name - name: SSO_ENABLE_CORS value: true - name: SSO_SECRET value: KZ1QyIq4 - name: SSO_DISABLE_SSL_CERTIFICATE_VALIDATION value: true - name: SSO_SAML_KEYSTORE_SECRET value: sso-app-secret - name: HTTPS_SECRET value: eap-ssl-secret - name: SSO_TRUSTSTORE_SECRET value: sso-app-secret EOF",
"oc get subscription eap-operator -n openshift-operators -o yaml | grep currentCSV currentCSV: eap-operator.v1.0.0",
"oc delete subscription eap-operator -n openshift-operators subscription.operators.coreos.com \"eap-operator\" deleted",
"oc delete clusterserviceversion eap-operator.v1.0.0 -n openshift-operators clusterserviceversion.operators.coreos.com \"eap-operator.v1.0.0\" deleted",
"describe wildflyserver <name>",
"USDJBOSS_HOME/standalone/data/ejb-xa-recovery exec <podname> rm -rf USDJBOSS_HOME/standalone/data/ejb-xa-recovery",
"Narayana JDBC objectstore configuration via s2i env variables - name: TX_DATABASE_PREFIX_MAPPING value: 'PostgresJdbcObjectStore-postgresql=PG_OBJECTSTORE' - name: POSTGRESJDBCOBJECTSTORE_POSTGRESQL_SERVICE_HOST value: 'postgresql' - name: POSTGRESJDBCOBJECTSTORE_POSTGRESQL_SERVICE_PORT value: '5432' - name: PG_OBJECTSTORE_JNDI value: 'java:jboss/datasources/PostgresJdbc' - name: PG_OBJECTSTORE_DRIVER value: 'postgresql' - name: PG_OBJECTSTORE_DATABASE value: 'sampledb' - name: PG_OBJECTSTORE_USERNAME value: 'admin' - name: PG_OBJECTSTORE_PASSWORD value: 'admin'",
"<datasource jta=\"false\" jndi-name=\"java:jboss/datasources/PostgresJdbcObjectStore\" pool-name=\"postgresjdbcobjectstore_postgresqlObjectStorePool\" enabled=\"true\" use-java-context=\"true\" statistics-enabled=\"USD{wildfly.datasources.statistics-enabled:USD{wildfly.statistics-enabled:false}}\"> <connection-url>jdbc:postgresql://postgresql:5432/sampledb</connection-url> <driver>postgresql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> </datasource> <!-- under subsystem urn:jboss:domain:transactions --> <jdbc-store datasource-jndi-name=\"java:jboss/datasources/PostgresJdbcObjectStore\"> <!-- the pod name was named transactions-xa-0 --> <action table-prefix=\"ostransactionsxa0\"/> <communication table-prefix=\"ostransactionsxa0\"/> <state table-prefix=\"ostransactionsxa0\"/> </jdbc-store>",
"apiVersion: wildfly.org/v1alpha1 kind: WildFlyServer metadata: name: eap-helloworld spec: applicationImage: 'eap-helloworld:latest' replicas: 1 resources: limits: cpu: 500m memory: 2Gi requests: cpu: 100m memory: 1Gi",
"autoscale wildflyserver/eap-helloworld --cpu-percent=50 --min=1 --max=10",
"get hpa -w NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE eap-helloworld WildFlyServer/eap-helloworld 217%/50% 1 10 1 4s eap-helloworld WildFlyServer/eap-helloworld 217%/50% 1 10 4 17s eap-helloworld WildFlyServer/eap-helloworld 133%/50% 1 10 8 32s eap-helloworld WildFlyServer/eap-helloworld 133%/50% 1 10 10 47s eap-helloworld WildFlyServer/eap-helloworld 139%/50% 1 10 10 62s eap-helloworld WildFlyServer/eap-helloworld 180%/50% 1 10 10 92s eap-helloworld WildFlyServer/eap-helloworld 133%/50% 1 10 10 2m2s",
"USDJBOSS_HOME/bin/add-user.sh",
"JAVA_OPTS_APPEND=\"-Dwildfly.config.url=USDJBOSS_HOME/standalone/configuration/eap-config.xml\"",
"<configuration> <authentication-client xmlns=\"urn:elytron:1.0\"> <authentication-rules> <rule use-configuration=\"jta\"> <match-abstract-type name=\"jta\" authority=\"jboss\" /> </rule> </authentication-rules> <authentication-configurations> <configuration name=\"jta\"> <sasl-mechanism-selector selector=\"DIGEST-MD5\" /> <providers> <use-service-loader /> </providers> <set-user-name name=\"PASTE_USER_NAME_HERE\" /> <credentials> <clear-password password=\"PASTE_PASSWORD_HERE\" /> </credentials> <set-mechanism-realm name=\"ApplicationRealm\" /> </configuration> </authentication-configurations> </authentication-client> </configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_online/eap-operator-for-automating-application-deployment-on-openshift_default |
Chapter 7. Deployments | Chapter 7. Deployments 7.1. Understanding deployments The Deployment and DeploymentConfig API objects in OpenShift Container Platform provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects: A Deployment or DeploymentConfig object, either of which describes the desired state of a particular component of the application as a pod template. Deployment objects involve one or more replica sets , which contain a point-in-time record of the state of a deployment as a pod template. Similarly, DeploymentConfig objects involve one or more replication controllers , which preceded replica sets. One or more pods, which represent an instance of a particular version of an application. Use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. 7.1.1. Building blocks of a deployment Deployments and deployment configs are enabled by the use of native Kubernetes API objects ReplicaSet and ReplicationController , respectively, as their building blocks. Users do not have to manipulate replica sets, replication controllers, or pods owned by Deployment or DeploymentConfig objects. The deployment systems ensure changes are propagated appropriately. Tip If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy. The following sections provide further details on these objects. 7.1.1.1. Replica sets A ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time. Note Only use replica sets if you require custom update orchestration or do not require updates at all. Otherwise, use deployments. Replica sets can be used independently, but are used by deployments to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically, provide declarative updates to pods, and do not have to manually manage the replica sets that they create. The following is an example ReplicaSet definition: apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. 2 Equality-based selector to specify resources with labels that match the selector. 3 Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend . 7.1.1.2. Replication controllers Similar to a replica set, a replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller instantiates more up to the defined number. 
Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements. A replication controller configuration consists of: The number of replicas desired, which can be adjusted at run time. A Pod definition to use when creating a replicated pod. A selector for identifying managed pods. A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the Pod definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed. The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler. Note Use a DeploymentConfig to create a replication controller instead of creating replication controllers directly. If you require custom orchestration or do not require updates, use replica sets instead of replication controllers. The following is an example definition of a replication controller: apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod the controller creates. 4 Labels on the pod should include those from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 7.1.2. Deployments Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployment . Deployment objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles. For example, the following deployment definition creates a replica set to bring up one hello-openshift pod: Deployment definition apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80 7.1.3. DeploymentConfig objects Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. Building on replication controllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfig objects. In the simplest case, a DeploymentConfig object creates a new replication controller and lets it start up pods. 
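For example, for a DeploymentConfig named frontend, the first rollout typically produces a replication controller named frontend-1, that is, the DeploymentConfig name with the revision number appended. You can confirm this relationship directly:

oc get dc frontend
oc get rc

The second command lists the owned replication controllers, one per revision that has been rolled out.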
However, OpenShift Container Platform deployments from DeploymentConfig objects also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller. The DeploymentConfig deployment system provides the following capabilities: A DeploymentConfig object, which is a template for running applications. Triggers that drive automated deployments in response to events. User-customizable deployment strategies to transition from the version to the new version. A strategy runs inside a pod commonly referred as the deployment process. A set of hooks (lifecycle hooks) for executing custom behavior in different points during the lifecycle of a deployment. Versioning of your application to support rollbacks either manually or automatically in case of deployment failure. Manual replication scaling and autoscaling. When you create a DeploymentConfig object, a replication controller is created representing the DeploymentConfig object's pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one. Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally. The OpenShift Container Platform DeploymentConfig object defines the following details: The elements of a ReplicationController definition. Triggers for creating a new deployment automatically. The strategy for transitioning between deployments. Lifecycle hooks. Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployment pod remains for an indefinite amount of time after it completes the deployment to retain its logs of the deployment. When a deployment is superseded by another, the replication controller is retained to enable easy rollback if needed. Example DeploymentConfig definition apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3 1 A configuration change trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration. 2 An image change trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream. 3 The default Rolling strategy makes a downtime-free transition between deployments. 7.1.4. Comparing Deployment and DeploymentConfig objects Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. The following sections go into more detail on the differences between the two object types to further help you decide which type to use. 
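As a concrete point of reference for that comparison, a Deployment that roughly corresponds to the frontend example above might look like the following. This is a sketch rather than an exact equivalent: a plain Deployment has no direct counterpart for the image change trigger, and new rollouts are instead driven by editing the pod template, for example its image field:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    matchLabels:
      name: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: helloworld
        image: openshift/hello-openshift
        ports:
        - containerPort: 8080
          protocol: TCP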
Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. 7.1.4.1. Design One important difference between Deployment and DeploymentConfig objects is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfig objects prefer consistency, whereas Deployments objects take availability over consistency. For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you can not delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod. However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs. 7.1.4.2. Deployment-specific features Rollover The deployment process for Deployment objects is driven by a controller loop, in contrast to DeploymentConfig objects that use deployer pods for every new rollout. This means that the Deployment object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one. DeploymentConfig objects can have at most one deployer pod running, otherwise multiple deployers might conflict when trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rapid rollouts for Deployment objects. Proportional scaling Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a Deployment object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set. DeploymentConfig objects cannot be scaled when a rollout is ongoing because the controller will have issues with the deployer process about the size of the new replication controller. Pausing mid-rollout Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment in the middle of a rollout, the deployer process is not affected and continues until it finishes. 7.1.4.3. DeploymentConfig object-specific features Automatic rollbacks Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure. Triggers Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment: USD oc rollout pause deployments/<name> Lifecycle hooks Deployments do not yet support any lifecycle hooks. Custom strategies Deployments do not support user-specified custom deployment strategies. 7.2. 
Managing deployment processes 7.2.1. Managing DeploymentConfig objects Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. DeploymentConfig objects can be managed from the OpenShift Container Platform web console's Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated. 7.2.1.1. Starting a deployment You can start a rollout to begin the deployment process of your application. Procedure To start a new deployment process from an existing DeploymentConfig object, run the following command: USD oc rollout latest dc/<name> Note If a deployment process is already in progress, the command displays a message and a new replication controller will not be deployed. 7.2.1.2. Viewing a deployment You can view a deployment to get basic information about all the available revisions of your application. Procedure To show details about all recently created replication controllers for the provided DeploymentConfig object, including any currently running deployment process, run the following command: USD oc rollout history dc/<name> To view details specific to a revision, add the --revision flag: USD oc rollout history dc/<name> --revision=1 For more detailed information about a DeploymentConfig object and its latest revision, use the oc describe command: USD oc describe dc <name> 7.2.1.3. Retrying a deployment If the current revision of your DeploymentConfig object failed to deploy, you can restart the deployment process. Procedure To restart a failed deployment process: USD oc rollout retry dc/<name> If the latest revision of it was deployed successfully, the command displays a message and the deployment process is not retried. Note Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller has the same configuration it had when it failed. 7.2.1.4. Rolling back a deployment Rollbacks revert an application back to a revision and can be performed using the REST API, the CLI, or the web console. Procedure To rollback to the last successful deployed revision of your configuration: USD oc rollout undo dc/<name> The DeploymentConfig object's template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with --to-revision , then the last successfully deployed revision is used. Image change triggers on the DeploymentConfig object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers: USD oc set triggers dc/<name> --auto Note Deployment configs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy stays intact by the system and it is up to users to fix their configurations. 7.2.1.5. Executing commands inside a container You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT . This is different from a lifecycle hook, which instead can be run once per deployment at a specified time. 
Procedure Add the command parameters to the spec field of the DeploymentConfig object. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist). kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: template: # ... spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>' For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: template: # ... spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar # ... 7.2.1.6. Viewing deployment logs Procedure To stream the logs of the latest revision for a given DeploymentConfig object: USD oc logs -f dc/<name> If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a pod of your application. You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually: USD oc logs --version=1 dc/<name> 7.2.1.7. Deployment triggers A DeploymentConfig object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster. Warning If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually. Config change deployment triggers The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the DeploymentConfig object. Note If a config change trigger is defined on a DeploymentConfig object, the first replication controller is automatically created soon after the DeploymentConfig object itself is created and it is not paused. Config change deployment trigger kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... triggers: - type: "ConfigChange" Image change deployment triggers The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed). Image change deployment trigger kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... triggers: - type: "ImageChange" imageChangeParams: automatic: true 1 from: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" namespace: "myproject" containerNames: - "helloworld" 1 If the imageChangeParams.automatic field is set to false , the trigger is disabled. With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the DeploymentConfig object's helloworld container, a new replication controller is created using the new image for the helloworld container. 
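One way to exercise this trigger is to update the image stream tag that it watches and then follow the rollout that starts. A sketch, assuming the DeploymentConfig above is named example-dc and that a newer image already exists under a v2 tag in the myproject namespace; both names are illustrative:

oc tag myproject/origin-ruby-sample:v2 myproject/origin-ruby-sample:latest
oc rollout status dc/example-dc
oc rollout history dc/example-dc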
Note If an image change trigger is defined on a DeploymentConfig object (with a config change trigger and automatic=false , or with automatic=true ) and the image stream tag pointed by the image change trigger does not exist yet, the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the image stream tag. 7.2.1.7.1. Setting deployment triggers Procedure You can set deployment triggers for a DeploymentConfig object using the oc set triggers command. For example, to set a image change trigger, use the following command: USD oc set triggers dc/<dc_name> \ --from-image=<project>/<image>:<tag> -c <container_name> 7.2.1.8. Setting deployment resources A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits. Note The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies. Procedure In the following example, each of resources , cpu , memory , and ephemeral-storage is optional: kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... type: "Recreate" resources: limits: cpu: "100m" 1 memory: "256Mi" 2 ephemeral-storage: "1Gi" 3 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). 3 ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... type: "Recreate" resources: requests: 1 cpu: "100m" memory: "256Mi" ephemeral-storage: "1Gi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process. To set deployment resources, choose one of the above options. Otherwise, deploy pod creation fails, citing a failure to satisfy quota. Additional resources For more information about resource limits and requests, see Understanding managing application memory . 7.2.1.9. Scaling manually In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them. Note Pods can also be auto-scaled using the oc autoscale command. Procedure To manually scale a DeploymentConfig object, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig object to 3 . USD oc scale dc frontend --replicas=3 The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig object frontend . 7.2.1.10. Accessing private repositories from DeploymentConfig objects You can add a secret to your DeploymentConfig object so that it can access images from a private repository. 
This procedure shows the OpenShift Container Platform web console method. Procedure Create a new project. Navigate to Workloads Secrets . Create a secret that contains credentials for accessing a private image repository. Navigate to Workloads DeploymentConfigs . Create a DeploymentConfig object. On the DeploymentConfig object editor page, set the Pull Secret and save your changes. 7.2.1.11. Assigning pods to specific nodes You can use node selectors in conjunction with labeled nodes to control pod placement. Cluster administrators can set the default node selector for a project in order to restrict pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further. Procedure To add a node selector when creating a pod, edit the Pod configuration, and add the nodeSelector value. This can be added to a single Pod configuration, or in a Pod template: apiVersion: v1 kind: Pod metadata: name: my-pod # ... spec: nodeSelector: disktype: ssd # ... Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator. For example, if a project has the type=user-node and region=east labels added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod is only ever scheduled on nodes that have all three labels. Note Labels can only be set to one value, so setting a node selector of region=west in a Pod configuration that has region=east as the administrator-set default, results in a pod that will never be scheduled. 7.2.1.12. Running a pod with a different service account You can run a pod with a service account other than the default. Procedure Edit the DeploymentConfig object: USD oc edit dc/<deployment_config> Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use: apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc # ... spec: # ... securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account> 7.3. Using deployment strategies Deployment strategies are used to change or upgrade applications without downtime so that users barely notice a change. Because users generally access applications through a route handled by a router, deployment strategies can focus on DeploymentConfig object features or routing features. Strategies that focus on DeploymentConfig object features impact all routes that use the application. Strategies that use router features target individual routes. Most deployment strategies are supported through the DeploymentConfig object, and some additional strategies are supported through router features. 7.3.1. Choosing a deployment strategy Consider the following when choosing a deployment strategy: Long-running connections must be handled gracefully. Database conversions can be complex and must be done and rolled back along with the application. If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition. You must have the infrastructure to do this. If you have a non-isolated test environment, you can break both new and old versions. A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the DeploymentConfig object retries to run the pod until it times out. 
The default timeout is 10m , a value set in TimeoutSeconds in dc.spec.strategy.*params . 7.3.2. Rolling strategy A rolling deployment slowly replaces instances of the version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig object. A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted. When to use a rolling deployment: When you want to take no downtime during an application update. When your application supports having old code and new code running at the same time. A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility. Example rolling strategy definition kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: "20%" 4 maxUnavailable: "10%" 5 pre: {} 6 post: {} 1 The time to wait between individual pod updates. If unspecified, this value defaults to 1 . 2 The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1 . 3 The time to wait for a scaling event before giving up. Optional; the default is 600 . Here, giving up means automatically rolling back to the complete deployment. 4 maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure. 5 maxUnavailable is optional and defaults to 25% if not specified. See the information below the following procedure. 6 pre and post are both lifecycle hooks. The rolling strategy: Executes any pre lifecycle hook. Scales up the new replication controller based on the surge count. Scales down the old replication controller based on the max unavailable count. Repeats this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero. Executes any post lifecycle hook. Important When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure. The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10% ) or an absolute value (e.g., 2 ). The default value for both is 25% . These parameters allow the deployment to be tuned for availability and speed. For example: maxUnavailable*=0 and maxSurge*=20% ensures full capacity is maintained during the update and rapid scale up. maxUnavailable*=10% and maxSurge*=0 performs an update using no extra capacity (an in-place update). maxUnavailable*=10% and maxSurge*=10% scales up and down quickly with some potential for capacity loss. Generally, if you want fast rollouts, use maxSurge . If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable . 
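The same two parameters exist on Kubernetes Deployment objects, where they sit under spec.strategy.rollingUpdate rather than rollingParams. A minimal sketch using the same values as the example above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-openshift
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: "20%"
      maxUnavailable: "10%"
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80

With three replicas, a 20% surge rounds up to one extra pod and 10% unavailable rounds down to zero, so the update proceeds one pod at a time while keeping full capacity.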
Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. 7.3.2.1. Canary deployments All rolling deployments in OpenShift Container Platform are canary deployments ; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig object will be automatically rolled back. The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy. 7.3.2.2. Creating a rolling deployment Rolling deployments are the default type in OpenShift Container Platform. You can create a rolling deployment using the CLI. Procedure Create an application based on the example deployment images found in Quay.io : USD oc new-app quay.io/openshifttest/deployment-example:latest Note This image does not expose any ports. If you want to expose your applications over an external LoadBalancer service or enable access to the application over the public internet, create a service by using the oc expose dc/deployment-example --port=<port> command after completing this procedure. If you have the router installed, make the application available via a route or use the service IP directly. USD oc expose svc/deployment-example Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image. Scale the DeploymentConfig object up to three replicas: USD oc scale dc/deployment-example --replicas=3 Trigger a new deployment automatically by tagging a new version of the example as the latest tag: USD oc tag deployment-example:v2 deployment-example:latest In your browser, refresh the page until you see the v2 image. When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1: USD oc describe dc deployment-example During the deployment process, the new replication controller is incrementally scaled up. After the new pods are marked as ready (by passing their readiness check), the deployment process continues. If the pods do not become ready, the process aborts, and the deployment rolls back to its version. 7.3.2.3. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. 
Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 7.3.2.4. Starting a rolling deployment using the Developer perspective You can upgrade an application by starting a rolling deployment. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure In the Topology view, click the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy. In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one. Figure 7.1. Rolling update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 7.3.3. Recreate strategy The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process. Example recreate strategy definition kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {} 1 recreateParams are optional. 2 pre , mid , and post are lifecycle hooks. The recreate strategy: Executes any pre lifecycle hook. Scales down the deployment to zero. Executes any mid lifecycle hook. Scales up the new deployment. Executes any post lifecycle hook. Important During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure. When to use a recreate deployment: When you must run migrations or other data transformations before your new code starts. When you do not support having new and old versions of your application code running at the same time. When you want to use a RWO volume, which is not supported being shared between multiple replicas. A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time. 7.3.3.1. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 7.3.3.2. Starting a recreate deployment using the Developer perspective You can switch the deployment strategy from the default rolling update to a recreate update using the Developer perspective in the web console. 
Prerequisites Ensure that you are in the Developer perspective of the web console. Ensure that you have created an application using the Add view and see it deployed in the Topology view. Procedure To switch to a recreate update strategy and to upgrade an application: Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application. In the YAML editor, change the spec.strategy.type to Recreate and click Save . In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate . Use the Actions drop-down menu to select Start Rollout to start an update using the recreate strategy. The recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version. Figure 7.2. Recreate update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 7.3.4. Custom strategy The custom strategy allows you to provide your own deployment behavior. Example custom strategy definition kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... strategy: type: Custom customParams: image: organization/strategy command: [ "command", "arg1" ] environment: - name: ENV_1 value: VALUE_1 In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image's Dockerfile . The optional environment variables provided are added to the execution environment of the strategy process. Additionally, OpenShift Container Platform provides the following environment variables to the deployment process: Environment variable Description OPENSHIFT_DEPLOYMENT_NAME The name of the new deployment, a replication controller. OPENSHIFT_DEPLOYMENT_NAMESPACE The name space of the new deployment. The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user. Alternatively, use the customParams object to inject the custom deployment logic into the existing deployment strategies. Provide a custom shell script logic and call the openshift-deploy binary. Users do not have to supply their custom deployer container image; in this case, the default OpenShift Container Platform deployer image is used instead: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... 
strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete This results in following deployment: Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API the container that executes the strategy can use the service account token available inside the container for authentication. 7.3.4.1. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 7.3.5. Lifecycle hooks The rolling and recreate strategies support lifecycle hooks , or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy: Example pre lifecycle hook pre: failurePolicy: Abort execNewPod: {} 1 1 execNewPod is a pod-based lifecycle hook. Every hook has a failure policy , which defines the action the strategy should take when a hook failure is encountered: Abort The deployment process will be considered a failure if the hook fails. Retry The hook execution should be retried until it succeeds. Ignore Any hook failure should be ignored and the deployment should proceed. Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field. Pod-based lifecycle hook Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a DeploymentConfig object. The following simplified example deployment uses the rolling strategy. Triggers and some other minor details are omitted for brevity: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ "/usr/bin/command", "arg1", "arg2" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4 1 The helloworld name refers to spec.template.spec.containers[0].name . 
2 This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image. 3 env is an optional set of environment variables for the hook container. 4 volumes is an optional set of volume references for the hook container. In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod has the following properties: The hook command is /usr/bin/command arg1 arg2 . The hook container has the CUSTOM_VAR1=custom_value1 environment variable. The hook failure policy is Abort , meaning the deployment process fails if the hook fails. The hook pod inherits the data volume from the DeploymentConfig object pod. 7.3.5.1. Setting lifecycle hooks You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI. Procedure Use the oc set deployment-hook command to set the type of hook you want: --pre , --mid , or --post . For example, to set a pre-deployment hook: USD oc set deployment-hook dc/frontend \ --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \ --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2 7.4. Using route-based deployment strategies Deployment strategies provide a way for the application to evolve. Some strategies use Deployment objects to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with Deployment objects to impact specific routes. The most common route-based strategy is to use a blue-green deployment . The new version (the green version) is brought up for testing and evaluation, while the users still use the stable version (the blue version). When ready, the users are switched to the green version. If a problem arises, you can switch back to the blue version. Alternatively, you can use an A/B versions strategy in which both versions are active at the same time. With this strategy, some users can use version A , and other users can use version B . You can use this strategy to experiment with user interface changes or other features in order to get user feedback. You can also use it to verify proper operation in a production context where problems impact a limited number of users. A canary deployment tests the new version but when a problem is detected it quickly falls back to the version. This can be done with both of the above strategies. The route-based deployment strategies do not scale the number of pods in the services. To maintain desired performance characteristics the deployment configurations might have to be scaled. 7.4.1. Proxy shards and traffic splitting In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard , which forwards or splits the traffic it receives to a separate service or application running elsewhere. In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send to both a separate cluster as well as to a local instance of the application, and compare the result. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes. Any TCP (or UDP) proxy could be run under the desired shard. 
Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities. 7.4.2. N-1 compatibility Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem. This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user's browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it. For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional. One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment. 7.4.3. Graceful termination OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit. On shutdown, OpenShift Container Platform sends a TERM signal to the processes in the container. Application code, on receiving SIGTERM , stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code then waits until all open connections are closed, or gracefully terminate individual connections at the opportunity, before exiting. After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and can be customized per application as necessary. 7.4.4. Blue-green deployments Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the blue version) to the newer version (the green version). You can use a rolling strategy or switch services in a route. Because many applications depend on persistent data, you must have an application that supports N-1 compatibility , which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer. Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version. 7.4.4.1. Setting up a blue-green deployment Blue-green deployments use two Deployment objects. Both are running, and the one in production depends on the service the route specifies, with each Deployment object exposed to a different service. Note Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications. You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (green) version is live. 
If necessary, you can roll back to the older (blue) version by switching the service back to the version. Procedure Create two independent application components. Create a copy of the example application running the v1 image under the example-blue service: USD oc new-app openshift/deployment-example:v1 --name=example-blue Create a second copy that uses the v2 image under the example-green service: USD oc new-app openshift/deployment-example:v2 --name=example-green Create a route that points to the old service: USD oc expose svc/example-blue --name=bluegreen-example Browse to the application at bluegreen-example-<project>.<router_domain> to verify you see the v1 image. Edit the route and change the service name to example-green : USD oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-green"}}}' To verify that the route has changed, refresh the browser until you see the v2 image. 7.4.5. A/B deployments The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version. Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the version. As you adjust the request load on each version, the number of pods in each service might have to be scaled as well to provide the expected performance. In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user's reaction to the different versions to inform design decisions. For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together. OpenShift Container Platform supports N-1 compatibility through the web console as well as the CLI. 7.4.5.1. Load balancing for A/B testing The user sets up a route with multiple services. Each service handles a version of the application. Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights . The weight for each service is distributed to the service's endpoints so that the sum of the endpoint weights is the service weight . The route can have up to four services. The weight for the service can be between 0 and 256 . When the weight is 0 , the service does not participate in load balancing but continues to serve existing persistent connections. When the service weight is not 0 , each endpoint has a minimum weight of 1 . Because of this, a service with a lot of endpoints can end up with higher weight than intended. In this case, reduce the number of pods to get the expected load balance weight . Procedure To set up the A/B environment: Create the two applications and give them different names. Each creates a Deployment object. The applications are versions of the same program; one is usually the current production version and the other the proposed new version. Create the first application. 
The following example creates an application called ab-example-a : USD oc new-app openshift/deployment-example --name=ab-example-a Create the second application: USD oc new-app openshift/deployment-example:v2 --name=ab-example-b Both applications are deployed and services are created. Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version. USD oc expose svc/ab-example-a Browse to the application at ab-example-a.<project>.<router_domain> to verify that you see the expected version. When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1 so all requests go to it. Adding the other service as an alternateBackends and adjusting the weights brings the A/B setup to life. This can be done by the oc set route-backends command or by editing the route. Note When using alternateBackends , also use the roundrobin load balancing strategy to ensure requests are distributed as expected to the services based on weight. roundrobin can be set for a route by using a route annotation. See the Additional resources section for more information about route annotations. Setting the oc set route-backend to 0 means the service does not participate in load balancing, but continues to serve existing persistent connections. Note Changes to the route just change the portion of traffic to the various services. You might have to scale the deployment to adjust the number of pods to handle the anticipated loads. To edit the route, run: USD oc edit route <route_name> Example output apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin # ... spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15 # ... 7.4.5.1.1. Managing weights of an existing route using the web console Procedure Navigate to the Networking Routes page. Click the Actions menu to the route you want to edit and select Edit Route . Edit the YAML file. Update the weight to be an integer between 0 and 256 that specifies the relative weight of the target against other target reference objects. The value 0 suppresses requests to this back end. The default is 100 . Run oc explain routes.spec.alternateBackends for more information about the options. Click Save . 7.4.5.1.2. Managing weights of an new route using the web console Navigate to the Networking Routes page. Click Create Route . Enter the route Name . Select the Service . Click Add Alternate Service . Enter a value for Weight and Alternate Service Weight . Enter a number between 0 and 255 that depicts relative weight compared with other targets. The default is 100 . Select the Target Port . Click Create . 7.4.5.1.3. Managing weights using the CLI Procedure To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command: USD oc set route-backends ROUTENAME \ [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] 
[options] For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with a weight=2 : USD oc set route-backends ab-example ab-example-a=198 ab-example-b=2 This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b . This command does not scale the deployment. You might be required to do so to have enough pods to handle the request load. Run the command with no flags to verify the current configuration: USD oc set route-backends ab-example Example output NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%) To override the default values for the load balancing algorithm, adjust the annotation on the route by setting the algorithm to roundrobin . For a route on OpenShift Container Platform, the default load balancing algorithm is set to random or source values. To set the algorithm to roundrobin , run the command: USD oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin For Transport Layer Security (TLS) passthrough routes, the default value is source . For all other routes, the default is random . To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed. The following example alters the weight of ab-example-a and ab-example-b services: USD oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10 Alternatively, alter the weight of a service by specifying a percentage: USD oc set route-backends ab-example --adjust ab-example-b=5% By specifying + before the percentage declaration, you can adjust a weighting relative to the current setting. For example: USD oc set route-backends ab-example --adjust ab-example-b=+15% The --equal flag sets the weight of all services to 100 : USD oc set route-backends ab-example --equal The --zero flag sets the weight of all services to 0 . All requests then return with a 503 error. Note Not all routers may support multiple or weighted backends. 7.4.5.1.4. One service, multiple Deployment objects Procedure Create a new application, adding a label ab-example=true that will be common to all shards: USD oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\=shardA USD oc delete svc/ab-example-a The application is deployed and a service is created. This is the first shard. Make the application available via a route, or use the service IP directly: USD oc expose deployment ab-example-a --name=ab-example --selector=ab-example\=true USD oc expose service ab-example Browse to the application at ab-example-<project_name>.<router_domain> to verify you see the v1 image. Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables: USD oc new-app openshift/deployment-example:v2 \ --name=ab-example-b --labels=ab-example=true \ SUBTITLE="shard B" COLOR="red" --as-deployment-config=true USD oc delete svc/ab-example-b At this point, both sets of pods are being served under the route. 
However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you. To force your browser to one or the other shard: Use the oc scale command to reduce replicas of ab-example-a to 0 . USD oc scale dc/ab-example-a --replicas=0 Refresh your browser to show v2 and shard B (in red). Scale ab-example-a to 1 replica and ab-example-b to 0 : USD oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0 Refresh your browser to show v1 and shard A (in blue). If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either Deployment object: USD oc edit dc/ab-example-a or USD oc edit dc/ab-example-b 7.4.6. Additional resources Route-specific annotations . | [
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ConfigChange\"",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod metadata: name: my-pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA",
"oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true",
"oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true",
"oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/building_applications/deployments |
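As a complement to the graceful termination guidance in section 7.4.3, the following is a minimal sketch of a pod template that extends the default grace period and adds a short preStop delay. The 60-second grace period and the sleep command are illustrative assumptions, not required values:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-openshift
spec:
  # ...
  template:
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        lifecycle:
          preStop:
            exec:
              command: [ "/bin/sh", "-c", "sleep 5" ]

The preStop sleep gives the endpoints controller time to remove the pod from load balancing before the container receives the TERM signal; the application itself must still close open connections cleanly, as described above.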
Chapter 13. Filtering Messages | Chapter 13. Filtering Messages AMQ Broker provides a powerful filter language based on a subset of the SQL 92 expression syntax. The filter language uses the same syntax as used for JMS selectors, but the predefined identifiers are different. The table below lists the identifiers that apply to a AMQ Broker message. Identifier Attribute AMQPriority The priority of a message. Message priorities are integers with valid values from 0 through 9 . 0 is the lowest priority and 9 is the highest. AMQExpiration The expiration time of a message. The value is a long integer. AMQDurable Whether a message is durable or not. The value is a string. Valid values are DURABLE or NON_DURABLE . AMQTimestamp The timestamp of when the message was created. The value is a long integer. AMQSize The value of the encodeSize property of the message. The value of encodeSize is the space, in bytes, that the message takes up in the journal. Because the broker uses a double-byte character set to encode messages, the actual size of the message is half the value of encodeSize . Any other identifiers used in core filter expressions are assumed to be properties of the message. For documentation on selector syntax for JMS Messages, see the Java EE API . 13.1. Configuring a Queue to Use a Filter You can add a filter to the queues you configure in <broker_instance_dir> /etc/broker.xml . Only messages that match the filter expression enter the queue. Procedure Add the filter element to the desired queue and include the filter you want to apply as the value of the element. In the example below, the filter NEWS='technology' is added to the queue technologyQueue . <configuration> <core> ... <addresses> <address name="myQueue"> <anycast> <queue name="myQueue"> <filter string="NEWS='technology'"/> </queue> </anycast> </address> </addresses> </core> </configuration> 13.2. Filtering JMS Message Properties The JMS specification states that a String property must not be converted to a numeric type when used in a selector. For example, if a message has the age property set to the String value 21 , the selector age > 18 must not match it. This restriction limits STOMP clients because they can send only messages with String properties. Configuring a Filter to Convert a String to a Number To convert String properties to a numeric type, add the prefix convert_string_expressions: to the value of the filter . Procedure Edit <broker_instance_dir> /etc/broker.xml by applying the prefix convert_string_expressions: to the desired filter . The example below edits the filter value from age > 18 to convert_string_expressions:age > 18 . <configuration> <core> ... <addresses> <address name="myQueue"> <anycast> <queue name="myQueue"> <filter string="convert_string_expressions='age > 18'"/> </queue> </anycast> </address> </addresses> </core> </configuration> 13.3. Filtering AMQP Messages Based on Properties on Annotations Before the broker moves an expired or undelivered AMQP message to an expiry or dead letter queue that you have configured, the broker applies annotations and properties to the message. A client can create a filter based on the properties or annotations, to select particular messages to consume from the expiry or dead letter queue. Note The properties that the broker applies are internal properties These properties are are not exposed to clients for regular use, but can be specified by a client in a filter. Shown below are examples of filters based on message properties and annotations. 
Filtering based on properties is the recommended approach, when possible, because this approach requires less processing by the broker. Filter based on message properties ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672"); Connection connection = factory.createConnection(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); connection.start(); javax.jms.Queue queue = session.createQueue("my_DLQ"); MessageConsumer consumer = session.createConsumer(queue, "_AMQ_ORIG_ADDRESS='original_address_name'"); Message message = consumer.receive(); Filter based on message annotations ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672"); Connection connection = factory.createConnection(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); connection.start(); javax.jms.Queue queue = session.createQueue("my_DLQ"); MessageConsumer consumer = session.createConsumer(queue, "\"m.x-opt-ORIG-ADDRESS\"='original_address_name'"); Message message = consumer.receive(); Note When consuming AMQP messages based on an annotation, the client must include append a m. prefix to the message annotation, as shown in the preceding example. Additional resources For more information about the annotations and properties that the broker applies to expired or undelivered AMQP messages, see Section 4.14, "Annotations and properties on expired or undelivered AMQP messages" . 13.4. Filtering XML Messages AMQ Broker provides a way of filtering Text messages that contain an XML body using XPath. XPath (XML Path Language) is a query language for selecting nodes from an XML document. Note Only text based messages are supported. Filtering large messages is not supported. To filter text based messages, you need to create a Message Selector of the form XPATH '<xpath-expression> . An example of a Message Body <root> <a key='first' num='1'/> <b key='second' num='2'>b</b> </root> Filter based on an XPath query Warning Since XPath applies to the body of the message and requires parsing of XML, filtering can be significantly slower than normal filters. XPath filters are supported with and between producers and consumers using the following protocols: OpenWire JMS Core (and Core JMS) STOMP AMQP Configuring the XML Parser By default the XML Parser used by the Broker is the Platform default DocumentBuilderFactory instance used by the JDK. The XML parser used for XPath default configuration includes the following settings: http://xml.org/sax/features/external-general-entities : false http://xml.org/sax/features/external-parameter-entities : false http://apache.org/xml/features/disallow-doctype-decl : true However, in order to deal with any implementation-specific issues the features can be customized by configuring System properties in the artemis.profile configuration file. org.apache.activemq.documentBuilderFactory.feature:prefix Example feature configuration | [
"<configuration> <core> <addresses> <address name=\"myQueue\"> <anycast> <queue name=\"myQueue\"> <filter string=\"NEWS='technology'\"/> </queue> </anycast> </address> </addresses> </core> </configuration>",
"<configuration> <core> <addresses> <address name=\"myQueue\"> <anycast> <queue name=\"myQueue\"> <filter string=\"convert_string_expressions='age > 18'\"/> </queue> </anycast> </address> </addresses> </core> </configuration>",
"ConnectionFactory factory = new JmsConnectionFactory(\"amqp://localhost:5672\"); Connection connection = factory.createConnection(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); connection.start(); javax.jms.Queue queue = session.createQueue(\"my_DLQ\"); MessageConsumer consumer = session.createConsumer(queue, \"_AMQ_ORIG_ADDRESS='original_address_name'\"); Message message = consumer.receive();",
"ConnectionFactory factory = new JmsConnectionFactory(\"amqp://localhost:5672\"); Connection connection = factory.createConnection(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); connection.start(); javax.jms.Queue queue = session.createQueue(\"my_DLQ\"); MessageConsumer consumer = session.createConsumer(queue, \"\\\"m.x-opt-ORIG-ADDRESS\\\"='original_address_name'\"); Message message = consumer.receive();",
"<root> <a key='first' num='1'/> <b key='second' num='2'>b</b> </root>",
"PATH 'root/a'",
"-Dorg.apache.activemq.documentBuilderFactory.feature:http://xml.org/sax/features/external-general-entities=true"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/filters |
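As a client-side illustration of the XPath selector described in section 13.4, the following sketch creates a consumer that receives only messages whose XML body contains a root/a element. It assumes the core JMS client (org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory); the connection URL and the queue name exampleQueue are assumptions for the example:

ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
connection.start();
javax.jms.Queue queue = session.createQueue("exampleQueue");
// Only messages whose XML body matches the XPath expression are delivered to this consumer
MessageConsumer consumer = session.createConsumer(queue, "XPATH 'root/a'");
Message message = consumer.receive(5000);
connection.close();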
Chapter 3. Configuring the Ceph Object Gateway | Chapter 3. Configuring the Ceph Object Gateway As a storage administrator, you must configure the Ceph Object Gateway to accept authentication requests from the Keystone service. 3.1. Prerequisites A running Red Hat OpenStack Platform 13, 15, or 16 environment. A running Red Hat Ceph Storage environment. A running Ceph Object Gateway environment. 3.2. Configuring the Ceph Object Gateway to use Keystone SSL Converting the OpenSSL certificates that Keystone uses configures the Ceph Object Gateway to work with Keystone. When the Ceph Object Gateway interacts with OpenStack's Keystone authentication, Keystone will terminate with a self-signed SSL certificate. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. Procedure Convert the OpenSSL certificate to the nss db format: Example Install Keystone's SSL certificate in the node running the Ceph Object Gateway. Alternatively set the value of the configurable rgw_keystone_verify_ssl setting to false . Setting rgw_keystone_verify_ssl to false means that the gateway won't attempt to verify the certificate. 3.3. Configuring the Ceph Object Gateway to use Keystone authentication Configure the Red Hat Ceph Storage to use OpenStack's Keystone authentication. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. admin privileges to the production environment. Procedure Edit the Ceph configuration file on the admin node. Navigate to the [client.radosgw. INSTANCE_NAME ] , where INSTANCE_NAME is the name of the Gateway instance to configure. Do the following for each gateway instance: Set the rgw_s3_auth_use_keystone setting to true . Set the nss_db_path setting to the path where the NSS database is stored. Provide authentication credentials: It is possible to configure a Keystone service tenant, user and password for keystone for v2.0 version of the OpenStack Identity API, similar to the way system administrators tend to configure OpenStack services. Providing a username and password avoids providing the shared secret to the rgw_keystone_admin_token setting. Important Red Hat recommends disabling authentication by admin token in production environments. The service tenant credentials should have admin privileges. The necessary configuration options are: A Ceph Object Gateway user is mapped into a Keystone tenant . A Keystone user has different roles assigned to it on possibly more than a single tenant. When the Ceph Object Gateway gets the ticket, it looks at the tenant, and the user roles that are assigned to that ticket, and accepts or rejects the request according to the rgw_keystone_accepted_roles configurable. A typical configuration might have the following settings: Example Additional Resources Users and Identity Management Guide for Red Hat OpenStack Platform 13. Users and Identity Management Guide for Red Hat OpenStack Platform 15. Users and Identity Management Guide for Red Hat OpenStack Platform 16. 3.4. Restarting the Ceph Object Gateway daemon Restarting the Ceph Object Gateway must be done to active configuration changes. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ceph software repository. admin privileges to the production environment. Procedure Once you have saved the Ceph configuration file and distributed it to each Ceph node, restart the Ceph Object Gateway instances: | [
"mkdir /var/ceph/nss mkdir /var/ceph/nss openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | certutil -d /var/ceph/nss -A -n ca -t \"TCu,Cu,Tuw\" mkdir /var/ceph/nss openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | certutil -A -d /var/ceph/nss -n signing_cert -t \"P,P,P\"",
"rgw_keystone_admin_user = KEYSTONE_TENANT_USER_NAME rgw_keystone_admin_password = KEYSTONE_TENANT_USER_PASSWORD rgw_keystone_admin_tenant = KEYSTONE_TENANT_NAME",
"[client.radosgw.gateway] rgw_keystone_url = {keystone server url:keystone server admin port} ##Authentication using an admin token. Not preferred. #rgw_keystone_admin_token = {keystone admin token} ##Authentication using username, password and tenant. Preferred. rgw_keystone_admin_user = _KEYSTONE_TENANT_USER_NAME_ rgw_keystone_admin_password = _KEYSTONE_TENANT_USER_PASSWORD_ rgw_keystone_admin_tenant = _KEYSTONE_TENANT_NAME_ rgw_keystone_accepted_roles = _KEYSTONE_ACCEPTED_USER_ROLES_ ## rgw_keystone_token_cache_size = _NUMBER_OF_TOKENS_TO_CACHE_ rgw_keystone_revocation_interval = _NUMBER_OF_SECONDS_BEFORE_CHECKING_REVOKED_TICKETS_ rgw_keystone_make_new_tenants = _TRUE_FOR_PRIVATE_TENANT_FOR_EACH_NEW_USER_ rgw_s3_auth_use_keystone = true nss_db_path = _PATH_TO_NSS_DB_",
"systemctl restart ceph-radosgw systemctl restart ceph-radosgw@rgw.`hostname -s`"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/using_keystone_with_the_ceph_object_gateway_guide/configuring-the-ceph-object-gateway |
Chapter 166. IRC Component | Chapter 166. IRC Component Available as of Camel version 1.1 The irc component implements an IRC (Internet Relay Chat) transport. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-irc</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 166.1. URI format irc:nick@host[:port]/#room[?options] irc:nick@host[:port]?channels=#channel1,#channel2,#channel3[?options] You can append query options to the URI in the following format, ?option=value&option=value&... 166.2. Options The IRC component supports 2 options, which are listed below. Name Description Default Type useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The IRC endpoint is configured using URI syntax: with the following path and query parameters: 166.2.1. Path Parameters (2 parameters): Name Description Default Type hostname Required Hostname for the IRC chat server String port Port number for the IRC chat server. If no port is configured then a default port of either 6667, 6668 or 6669 is used. int 166.2.2. Query Parameters (25 parameters): Name Description Default Type autoRejoin (common) Whether to auto re-join when being kicked true boolean commandTimeout (common) Delay in milliseconds before sending commands after the connection is established. 5000 long namesOnJoin (common) Sends NAMES command to channel after joining it. onReply has to be true in order to process the result which will have the header value irc.num = '353'. false boolean nickname (common) The nickname used in chat. String persistent (common) Deprecated Use persistent messages. true boolean realname (common) The IRC user's actual name. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern colors (advanced) Whether or not the server supports color codes. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean onJoin (filter) Handle user join events. true boolean onKick (filter) Handle kick events. true boolean onMode (filter) Handle mode change events. true boolean onNick (filter) Handle nickname change events. true boolean onPart (filter) Handle user part events. true boolean onPrivmsg (filter) Handle private message events. true boolean onQuit (filter) Handle user quit events. 
true boolean onReply (filter) Whether or not to handle general responses to commands or informational messages. false boolean onTopic (filter) Handle topic change events. true boolean nickPassword (security) Your IRC server nickname password. String password (security) The IRC server password. String sslContextParameters (security) Used for configuring security using SSL. Reference to a org.apache.camel.util.jsse.SSLContextParameters in the Registry. This reference overrides any configured SSLContextParameters at the component level. Note that this setting overrides the trustManager option. SSLContextParameters trustManager (security) The trust manager used to verify the SSL server's certificate. SSLTrustManager username (security) The IRC server user name. String 166.3. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.irc.enabled Enable irc component true Boolean camel.component.irc.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.irc.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean 166.4. SSL Support 166.4.1. Using the JSSE Configuration Utility As of Camel 2.9, the IRC component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the IRC component. Programmatic configuration of the endpoint KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/truststore.jks"); ksp.setPassword("keystorePassword"); TrustManagersParameters tmp = new TrustManagersParameters(); tmp.setKeyStore(ksp); SSLContextParameters scp = new SSLContextParameters(); scp.setTrustManagers(tmp); Registry registry = ... registry.bind("sslContextParameters", scp); ... from(...) .to("ircs://camel-prd-user@server:6669/#camel-test?nickname=camel-prd&password=password&sslContextParameters=#sslContextParameters"); Spring DSL based configuration of endpoint ... <camel:sslContextParameters id="sslContextParameters"> <camel:trustManagers> <camel:keyStore resource="/users/home/server/truststore.jks" password="keystorePassword"/> </camel:keyManagers> </camel:sslContextParameters>... ... <to uri="ircs://camel-prd-user@server:6669/#camel-test?nickname=camel-prd&password=password&sslContextParameters=#sslContextParameters"/>... 166.4.2. Using the legacy basic configuration options You can also connect to an SSL enabled IRC server, as follows: ircs:host[:port]/#room?username=user&password=pass By default, the IRC transport uses SSLDefaultTrustManager . If you need to provide your own custom trust manager, use the trustManager parameter as follows: ircs:host[:port]/#room?username=user&password=pass&trustManager=#referenceToMyTrustManagerBean 166.5. Using keys Available as of Camel 2.2 Some irc rooms requires you to provide a key to be able to join that channel. The key is just a secret word. For example we join 3 channels where as only channel 1 and 3 uses a key. irc:[email protected]?channels=#chan1,#chan2,#chan3&keys=chan1Key,,chan3key 166.6. 
Getting a list of users of the channel Using the namesOnJoin option, you can invoke the IRC NAMES command after the component has joined a channel. The server replies with irc.num = 353 . To process the result, the onReply option has to be set to true . Furthermore, you have to filter the onReply exchanges in order to get the names. For example, to get all exchanges that contain the usernames of the channel: from("ircs:nick@myserver:1234/#mychannelname?namesOnJoin=true&onReply=true") .choice() .when(header("irc.messageType").isEqualToIgnoreCase("REPLY")) .filter(header("irc.num").isEqualTo("353")) .to("mock:result").stop(); 166.7. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-irc</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"irc:nick@host[:port]/#room[?options] irc:nick@host[:port]?channels=#channel1,#channel2,#channel3[?options]",
"irc:hostname:port",
"KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/truststore.jks\"); ksp.setPassword(\"keystorePassword\"); TrustManagersParameters tmp = new TrustManagersParameters(); tmp.setKeyStore(ksp); SSLContextParameters scp = new SSLContextParameters(); scp.setTrustManagers(tmp); Registry registry = registry.bind(\"sslContextParameters\", scp); from(...) .to(\"ircs://camel-prd-user@server:6669/#camel-test?nickname=camel-prd&password=password&sslContextParameters=#sslContextParameters\");",
"<camel:sslContextParameters id=\"sslContextParameters\"> <camel:trustManagers> <camel:keyStore resource=\"/users/home/server/truststore.jks\" password=\"keystorePassword\"/> </camel:keyManagers> </camel:sslContextParameters> <to uri=\"ircs://camel-prd-user@server:6669/#camel-test?nickname=camel-prd&password=password&sslContextParameters=#sslContextParameters\"/>",
"ircs:host[:port]/#room?username=user&password=pass",
"ircs:host[:port]/#room?username=user&password=pass&trustManager=#referenceToMyTrustManagerBean",
"irc:[email protected]?channels=#chan1,#chan2,#chan3&keys=chan1Key,,chan3key",
"from(\"ircs:nick@myserver:1234/#mychannelname?namesOnJoin=true&onReply=true\") .choice() .when(header(\"irc.messageType\").isEqualToIgnoreCase(\"REPLY\")) .filter(header(\"irc.num\").isEqualTo(\"353\")) .to(\"mock:result\").stop();"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/irc-component |
Chapter 2. Upgrading your broker | Chapter 2. Upgrading your broker 2.1. About upgrades Red Hat releases new versions of AMQ Broker to the Customer Portal . Update your brokers to the newest version to ensure that you have the latest enhancements and fixes. In general, Red Hat releases a new version of AMQ Broker in one of three ways: Major Release A major upgrade or migration is required when an application is transitioned from one major release to the next, for example, from AMQ Broker 6 to AMQ Broker 7. This type of upgrade is not addressed in this guide. For instructions on how to upgrade from previous releases of AMQ Broker, see Migrating to Red Hat AMQ 7 . Minor Release AMQ Broker periodically provides minor releases, which are updates that include new features, as well as bug and security fixes. If you plan to upgrade from one AMQ Broker minor release to another, for example, from AMQ Broker 7.0 to AMQ Broker 7.1, code changes should not be required for applications that do not use private, unsupported, or tech preview components. Micro Release AMQ Broker also periodically provides micro releases that contain minor enhancements and fixes. Micro releases increment the minor release version by the last digit, for example from 7.0.1 to 7.0.2. A micro release should not require code changes; however, some releases may require configuration changes. 2.2. Upgrading older 7.x versions 2.2.1. Upgrading a broker instance from 7.0.x to 7.0.y The procedure for upgrading AMQ Broker from one version of 7.0 to another is similar to the one for installation: you download an archive from the Customer Portal and then extract it. The following subsections describe how to upgrade a 7.0.x broker for different operating systems. Upgrading from 7.0.x to 7.0.y on Linux Upgrading from 7.0.x to 7.0.y on Windows 2.2.1.1. Upgrading from 7.0.x to 7.0.y on Linux The name of the archive that you download could differ from what is used in the following examples. Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.0 Release Notes . Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. Move the archive to the directory created during the original installation of AMQ Broker. In the following example, the directory /opt/redhat is used. As the directory owner, extract the contents of the compressed archive. The archive is kept in a compressed format. In the following example, the user amq-broker extracts the archive by using the unzip command. Stop the broker if it is running. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> /log/artemis.log . Edit the <broker_instance_dir> /etc/artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed.
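A minimal sketch of the profile edit and restart described above, using the <broker_instance_dir> placeholder from this guide; the extraction path and version shown are illustrative assumptions only:

# <broker_instance_dir>/etc/artemis.profile: point ARTEMIS_HOME at the newly extracted installation (assumed path)
ARTEMIS_HOME='/opt/redhat/amq-broker-7.0.y'

# Start the upgraded broker from the instance directory
<broker_instance_dir>/bin/artemis run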
After starting the broker, open the log file <broker_instance_dir> /log/artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. 2.2.1.2. Upgrading from 7.0.x to 7.0.y on Windows Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.0 Release Notes . Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . Stop the broker if it is running by entering the following command. Back up the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> \log\artemis.log . Edit the <broker_instance_dir> \etc\artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> \log\artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. 2.2.2. Upgrading a broker instance from 7.0.x to 7.1.0 AMQ Broker 7.1.0 includes configuration files and settings that were not included with versions. Upgrading a broker instance from 7.0.x to 7.1.0 requires adding these new files and settings to your existing 7.0.x broker instances. The following subsections describe how to upgrade a 7.0.x broker instance to 7.1.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.0.x to 7.1.0 on Linux Upgrading from 7.0.x to 7.1.0 on Windows 2.2.2.1. Upgrading from 7.0.x to 7.1.0 on Linux Before you can upgrade a 7.0.x broker, you need to install Red Hat AMQ Broker 7.1.0 and create a temporary broker instance. This will generate the 7.1.0 configuration files required to upgrade a 7.0.x broker. Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.1 Release Notes . Before upgrading your 7.0.x brokers, you must first install version 7.1. For steps on installing 7.1 on Linux, see Installing AMQ Broker . Procedure If it is running, stop the 7.0.x broker you want to upgrade: Back up the instance directory of the broker by copying it to the home directory of the current user. Open the file artemis.profile in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. 
Update the ARTEMIS_HOME property so that its value refers to the installation directory for AMQ Broker 7.1.0: On the line below the one you updated, add the property ARTEMIS_INSTANCE_ URI and assign it a value that refers to the 7.0.x broker instance directory: Update the JAVA_ARGS property by adding the jolokia.policyLocation parameter and assigning it the following value: Create a 7.1.0 broker instance. The creation procedure generates the configuration files required to upgrade from 7.0.x to 7.1.0. In the following example, note that the instance is created in the directory upgrade_tmp : Copy configuration files from the etc directory of the temporary 7.1.0 instance into the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Copy the management.xml file: Copy the jolokia-access.xml file: Open up the bootstrap.xml file in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Comment out or delete the following two lines: Add the following to replace the two lines removed in the step: Start the broker that you upgraded: Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . 2.2.2.2. Upgrading from 7.0.x to 7.1.0 on Windows Before you can upgrade a 7.0.x broker, you need to install Red Hat AMQ Broker 7.1.0 and create a temporary broker instance. This will generate the 7.1.0 configuration files required to upgrade a 7.0.x broker. Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.1 Release Notes . Before upgrading your 7.0.x brokers, you must first install version 7.1. For steps on installing 7.1 on Windows, see Installing AMQ Broker . Procedure If it is running, stop the 7.0.x broker you want to upgrade: Back up the instance directory of the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . Open the file artemis.profile in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Update the ARTEMIS_HOME property so that its value refers to the installation directory for AMQ Broker 7.1.0: On the line below the one you updated, add the property ARTEMIS_INSTANCE_ URI and assign it a value that refers to the 7.0.x broker instance directory: Update the JAVA_ARGS property by adding the jolokia.policyLocation parameter and assigning it the following value: Create a 7.1.0 broker instance. The creation procedure generates the configuration files required to upgrade from 7.0.x to 7.1.0. In the following example, note that the instance is created in the directory upgrade_tmp : Copy configuration files from the etc directory of the temporary 7.1.0 instance into the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Copy the management.xml file: Copy the jolokia-access.xml file: Open up the bootstrap.xml file in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Comment out or delete the following two lines: Add the following to replace the two lines removed in the step: Start the broker that you upgraded: Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . 2.2.3. Upgrading a broker instance from 7.1.x to 7.2.0 AMQ Broker 7.2.0 includes configuration files and settings that were not included with 7.0.x versions. 
If you are running 7.0.x instances, you must first upgrade those broker instances from 7.0.x to 7.1.0 before upgrading to 7.2.0. The following subsections describe how to upgrade a 7.1.x broker instance to 7.2.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.1.x to 7.2.0 on Linux Upgrading from 7.1.x to 7.2.0 on Windows 2.2.3.1. Upgrading from 7.1.x to 7.2.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. Move the archive to the directory created during the original installation of AMQ Broker. In the following example, the directory /opt/redhat is used. As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive by using the unzip command. Stop the broker if it is running. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> /log/artemis.log . Edit the <broker_instance_dir> /etc/artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> /log/artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.2.3.2. Upgrading from 7.1.x to 7.2.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . Stop the broker if it is running by entering the following command. Back up the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. 
After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> \log\artemis.log . Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> \log\artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.2.4. Upgrading a broker instance from 7.2.x to 7.3.0 The following subsections describe how to upgrade a 7.2.x broker instance to 7.3.0 for different operating systems. 2.2.4.1. Resolve exception due to deprecated dispatch console Starting in version 7.3.0, AMQ Broker no longer ships with the Hawtio dispatch console plugin dispatch-hawtio-console.war . Previously, the dispatch console was used to manage AMQ Interconnect. However, AMQ Interconnect now uses its own, standalone web console. This change affects the upgrade procedures in the sections that follow. If you take no further action before upgrading your broker instance to 7.3.0, the upgrade process produces an exception that looks like the following: You can safely ignore the preceding exception without affecting the success of your upgrade. However, if you would prefer not to see this exception during your upgrade, you must first remove a reference to the Hawtio dispatch console plugin in the bootstrap.xml file of your existing broker instance. The bootstrap.xml file is in the {instance_directory}/etc/ directory of your broker instance. The following example shows some of the contents of the bootstrap.xml file for a AMQ Broker 7.2.4 instance: To avoid an exception when upgrading AMQ Broker to version 7.3.0, delete the line <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> , as shown in the preceding example. Then, save the modified bootstrap file and start the upgrade process, as described in the sections that follow. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.2.x to 7.3.0 on Linux Upgrading from 7.2.x to 7.3.0 on Windows 2.2.4.2. Upgrading from 7.2.x to 7.3.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. 
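For example, assuming the installation is owned by a user named amq-broker (as in the examples later in this section) and the downloaded archive is named amq-broker-7.3.0-bin.zip (an assumed file name):

# Make the user that owns the existing installation the owner of the downloaded archive
sudo chown amq-broker:amq-broker amq-broker-7.3.0-bin.zip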
Move the archive to the directory created during the original installation of AMQ Broker. In the following example, the directory /opt/redhat is used. As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive by using the unzip command. Stop the broker if it is running. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> /log/artemis.log . Edit the <broker_instance_dir> /etc/artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> /log/artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.2.4.3. Upgrading from 7.2.x to 7.3.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . Stop the broker if it is running by entering the following command. Back up the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> \log\artemis.log . Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file to set the JAVA_ARGS environment variable to reference the correct log manager version. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file to set the bootstrap class path start argument to reference the correct log manager version. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> \log\artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. 
Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.2.5. Upgrading a broker instance from 7.3.0 to 7.4.0 The following subsections describe how to upgrade a 7.3.0 broker instance to 7.4.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.3.0 to 7.4.0 on Linux Upgrading from 7.3.0 to 7.4.0 on Windows 2.2.5.1. Upgrading from 7.3.0 to 7.4.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the JAVA_ARGS property. Add the bootstrap class path argument, which references a dependent file for the log manager. Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the <web> configuration element, add a reference to the metrics plugin file for AMQ Broker. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.2.5.2. 
Upgrading from 7.3.0 to 7.4.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Set the JAVA_ARGS environment variable to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Set the bootstrap class path start argument to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the <web> configuration element, add a reference to the metrics plugin file for AMQ Broker. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.3. Upgrading a broker instance from 7.4.0 to 7.4.x Important AMQ Broker 7.4 has been designated as a Long Term Support (LTS) release version. Bug fixes and security advisories will be made available for AMQ Broker 7.4 in a series of micro releases (7.4.1, 7.4.2, and so on) for a period of at least 12 months. This means that you will be able to get recent bug fixes and security advisories for AMQ Broker without having to upgrade to a new minor release. For more information, see Long Term Support for AMQ Broker . Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . The following subsections describe how to upgrade a 7.4.0 broker instance to 7.4.x for different operating systems. Upgrading from 7.4.0 to 7.4.x on Linux Upgrading from 7.4.0 to 7.4.x on Windows 2.3.1. Upgrading from 7.4.0 to 7.4.x on Linux Note The name of the archive that you download could differ from what is used in the following examples. 
Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.3.2. Upgrading from 7.4.0 to 7.4.x on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. 
In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.4. Upgrading a broker instance from 7.4.x to 7.5.0 The following subsections describe how to upgrade a 7.4.x broker instance to 7.5.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.4.x to 7.5.0 on Linux Upgrading from 7.4.x to 7.5.0 on Windows 2.4.1. Upgrading from 7.4.x to 7.5.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the JAVA_ARGS property. Add the bootstrap class path argument, which references a dependent file for the log manager. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.4.2. Upgrading from 7.4.x to 7.5.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. 
Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Set the JAVA_ARGS environment variable to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Set the bootstrap class path start argument to reference the correct log manager version and dependent file. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.5. Upgrading a broker instance from 7.5.0 to 7.6.0 The following subsections describe how to upgrade a 7.5.0 broker instance to 7.6.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.5.0 to 7.6.0 on Linux Upgrading from 7.5.0 to 7.6.0 on Windows 2.5.1. Upgrading from 7.5.0 to 7.6.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the JAVA_ARGS property. 
Add the bootstrap class path argument, which references a dependent file for the log manager. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.5.2. Upgrading from 7.5.0 to 7.6.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Set the JAVA_ARGS environment variable to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Set the bootstrap class path start argument to reference the correct log manager version and dependent file. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.6. Upgrading a broker instance from 7.6.0 to 7.7.0 The following subsections describe how to upgrade a 7.6.0 broker instance to 7.7.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. 
To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.6.0 to 7.7.0 on Linux Upgrading from 7.6.0 to 7.7.0 on Windows 2.6.1. Upgrading from 7.6.0 to 7.7.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Locate the JAVA_ARGS property. Ensure that the bootstrap class path argument references the required version of a dependent file for the log manager, as shown below. Edit the <broker_instance_dir> /etc/logging.properties configuration file. On the list of additional loggers to be configured, include the org.apache.activemq.audit.resource resource logger that was added in AMQ Broker 7.7.0. loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource Before the Console handler configuration section, add a default configuration for the resource logger. .. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false # Console handler configuration .. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.6.2. Upgrading from 7.6.0 to 7.7.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. 
Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\logging.properties configuration file. On the list of additional loggers to be configured, include the org.apache.activemq.audit.resource resource logger that was added in AMQ Broker 7.7.0. loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource Before the Console handler configuration section, add a default configuration for the resource logger. .. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false # Console handler configuration .. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.7. Upgrading a broker instance from 7.7.0 to 7.8.0 The following subsections describe how to upgrade a 7.7.0 broker instance to 7.8.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . 
Upgrading from 7.7.0 to 7.8.0 on Linux Upgrading from 7.7.0 to 7.8.0 on Windows 2.7.1. Upgrading from 7.7.0 to 7.8.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Locate the JAVA_ARGS property. Ensure that the bootstrap class path argument references the required version of a dependent file for the log manager, as shown below. Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.9. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.7.2. Upgrading from 7.7.0 to 7.8.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder amd select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. 
Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.9 <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.8. Upgrading a broker instance from 7.8.0 to 7.9.0 The following subsections describe how to upgrade a 7.8.0 broker instance to 7.9.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.8.0 to 7.9.0 on Linux Upgrading from 7.8.0 to 7.9.0 on Windows Note The format of the journal used by the broker changed in version 7.9.0. Therefore, after you upgrade a broker to version 7.9.0, you cannot downgrade to a version. 2.8.1. Upgrading from 7.8.0 to 7.9.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. 
Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Locate the JAVA_ARGS property. Ensure that the bootstrap class path argument references the required version of a dependent file for the log manager, as shown below. Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.9. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.8.2. Upgrading from 7.8.0 to 7.9.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder amd select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.9 <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . 
You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. | [
"sudo chown amq-broker:amq-broker jboss-amq-7.x.x.redhat-1.zip",
"sudo mv jboss-amq-7.x.x.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip jboss-amq-7.x.x.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.0.0.amq-700005-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME='/opt/redhat/jboss-amq-7.x.x-redhat-1'",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.1.0.amq-700005-redhat-1 [0.0.0.0, nodeID=4782d50d-47a2-11e7-a160-9801a793ea45]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.0.0.amq-700005-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.1.0.amq-700005-redhat-1 [0.0.0.0, nodeID=4782d50d-47a2-11e7-a160-9801a793ea45]",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"ARTEMIS_HOME=\" <7.1.0_install_dir> \"",
"ARTEMIS_INSTANCE_URI=\"file:// <7.0.x_broker_instance_dir> \"",
"-Djolokia.policyLocation=USD{ARTEMIS_INSTANCE_URI}/etc/jolokia-access.xml",
"<7.1.0_install_dir> /bin/artemis create --allow-anonymous --user admin --password admin upgrade_tmp",
"cp <temporary_7.1.0_broker_instance_dir> /etc/management.xml <7.0_broker_instance_dir> /etc/",
"cp <temporary_7.1.0_broker_instance_dir> /etc/jolokia-access.xml <7.0_broker_instance_dir> /etc/",
"<app url=\"jolokia\" war=\"jolokia.war\"/> <app url=\"hawtio\" war=\"hawtio-no-slf4j.war\"/>",
"<app url=\"console\" war=\"console.war\"/>",
"<broker_instance_dir> /bin/artemis run",
"> <broker_instance_dir> \\bin\\artemis-service.exe stop",
"ARTEMIS_HOME=\" <7.1.0_install_dir> \"",
"ARTEMIS_INSTANCE_URI=\"file:// <7.0.x_broker_instance_dir> \"",
"-Djolokia.policyLocation=USD{ARTEMIS_INSTANCE_URI}/etc/jolokia-access.xml",
"> <7.1.0_install_dir> /bin/artemis create --allow-anonymous --user admin --password admin upgrade_tmp",
"> cp <temporary_7.1.0_broker_instance_dir> /etc/management.xml <7.0_broker_instance_dir> /etc/",
"> cp <temporary_7.1.0_broker_instance_dir> /etc/jolokia-access.xml <7.0_broker_instance_dir> /etc/",
"<app url=\"jolokia\" war=\"jolokia.war\"/> <app url=\"hawtio\" war=\"hawtio-no-slf4j.war\"/>",
"<app url=\"console\" war=\"console.war\"/>",
"> <broker_instance_dir> \\bin\\artemis-service.exe start",
"sudo chown amq-broker:amq-broker amq-7.x.x.redhat-1.zip",
"sudo mv amq-7.x.x.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip jboss-amq-7.x.x.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.5.0.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"ARTEMIS_HOME='/opt/redhat/amq-7.x.x-redhat-1'",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.5.0.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.0.0.amq-700005-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.5.0.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"2019-04-11 18:00:41,334 WARN [org.eclipse.jetty.webapp.WebAppContext] Failed startup of context o.e.j.w.WebAppContext@1ef3efa8{/dispatch-hawtio-console,null,null}{/opt/amqbroker/amq-broker-7.3.0/web/dispatch-hawtio-console.war}: java.io.FileNotFoundException: /opt/amqbroker/amq-broker-7.3.0/web/dispatch-hawtio-console.war.",
"<broker xmlns=\"http://activemq.org/schema\"> . <!-- The web server is only bound to localhost by default --> <web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </web> </broker>",
"sudo chown amq-broker:amq-broker amq-7.x.x.redhat-1.zip",
"sudo mv amq-7.x.x.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip jboss-amq-7.x.x.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.6.3.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"ARTEMIS_HOME='/opt/redhat/amq-7.x.x-redhat-1'",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.3.amq-720001-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"JAVA_ARGS= <install_dir> \\lib\\jboss-logmanager-2.0.3.Final-redhat-1.jar",
"<startargument>Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.0.3.Final-redhat-1.jar</startargument>",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"sudo chown amq-broker:amq-broker amq-broker-7.x.x.redhat-1.zip",
"sudo mv amq-broker-7.x.x.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip amq-broker-7.x.x.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"ARTEMIS_HOME='/opt/redhat/amq-broker-7.x.x-redhat-1'",
"-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.1.Final-redhat-00001.jar",
"<app url=\"metrics\" war=\"metrics.war\"/>",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"JAVA_ARGS= -Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.1.Final-redhat-00001.jar",
"<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.1.Final-redhat-00001.jar</startargument>",
"<app url=\"metrics\" war=\"metrics.war\"/>",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"sudo chown amq-broker:amq-broker amq-broker-7.4.x.redhat-1.zip",
"sudo mv amq-broker-7.4.x.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip amq-broker-7.4.x.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"ARTEMIS_HOME='/opt/redhat/amq-broker-7.4.x-redhat-1'",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"sudo chown amq-broker:amq-broker amq-broker-7.5.0.redhat-1.zip",
"sudo mv amq-broker-7.5.0.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip amq-broker-7.5.0.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"ARTEMIS_HOME='/opt/redhat/amq-broker-7.5.0-redhat-1'",
"-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00001.jar",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00001.jar",
"<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00001.jar</startargument>",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"sudo chown amq-broker:amq-broker amq-broker-7.6.0.redhat-1.zip",
"sudo mv amq-broker-7.6.0.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip amq-broker-7.6.0.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"ARTEMIS_HOME='/opt/redhat/amq-broker-7.6.0-redhat-1'",
"-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar",
"<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"sudo chown amq-broker:amq-broker amq-broker-7.7.0.redhat-1.zip",
"sudo mv amq-broker-7.7.0.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip amq-broker-7.7.0.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"ARTEMIS_HOME='/opt/redhat/amq-broker-7.7.0-redhat-1'",
"-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar",
"loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource",
".. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false Console handler configuration ..",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mesq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false sage Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar",
"<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>",
"loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource",
".. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false Console handler configuration ..",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"sudo chown amq-broker:amq-broker amq-broker-7.9.3.redhat-1.zip",
"sudo mv amq-broker-7.9.3.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip amq-broker-7.9.3.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"ARTEMIS_HOME='/opt/redhat/amq-broker-7.9.3-redhat-1'",
"-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar",
"<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mesq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false sage Broker version 2.16.0.redhat-00007 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar",
"<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>",
"<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.16.0.redhat-00007 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"sudo chown amq-broker:amq-broker amq-broker-7.9.3.redhat-1.zip",
"sudo mv amq-broker-7.9.3.redhat-1.zip /opt/redhat",
"su - amq-broker cd /opt/redhat unzip amq-broker-7.9.3.redhat-1.zip",
"<broker_instance_dir> /bin/artemis stop",
"cp -r <broker_instance_dir> ~/",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"ARTEMIS_HOME='/opt/redhat/amq-broker-7.9.3-redhat-1'",
"-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar",
"<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>",
"<broker_instance_dir> /bin/artemis run",
"INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mes INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live sage Broker version 2.18.0.redhat-00010 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes",
"ARTEMIS_HOME= <install_dir>",
"JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar",
"<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>",
"<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.18.0.redhat-00010 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/managing_amq_broker/patching |
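The upgrade procedures above follow the same pattern on Linux for every release pair: stop the broker, back up the instance directory, extract the new archive, repoint ARTEMIS_HOME, and restart. The following is a minimal consolidated sketch of that flow; the archive name, /opt/redhat install path, instance path, and extracted directory name are placeholders for your own values, the sed pattern is illustrative only, and the sketch deliberately omits the per-release edits to JAVA_ARGS and bootstrap.xml described above, which you must still make by hand.

```bash
#!/usr/bin/env bash
# Placeholder names and paths -- substitute your own.
ARCHIVE=amq-broker-7.8.0-bin.zip
INSTALL_DIR=/opt/redhat
INSTANCE_DIR=/var/opt/amq-broker/mybroker

# Stop the running broker and back up its instance directory.
"${INSTANCE_DIR}/bin/artemis" stop
cp -r "${INSTANCE_DIR}" ~/"$(basename "${INSTANCE_DIR}")-backup-$(date +%Y%m%d)"

# Move the new archive into place and extract it next to the old version.
sudo chown amq-broker:amq-broker "${ARCHIVE}"
sudo mv "${ARCHIVE}" "${INSTALL_DIR}/"
(cd "${INSTALL_DIR}" && unzip "${ARCHIVE}")

# Repoint the instance at the new ARTEMIS_HOME (illustrative sed pattern; the
# extracted directory name depends on the archive you downloaded).
sed -i "s|^ARTEMIS_HOME=.*|ARTEMIS_HOME='${INSTALL_DIR}/amq-broker-7.8.0'|" \
  "${INSTANCE_DIR}/etc/artemis.profile"

# Start the upgraded broker, then check the log for the new AMQ221001 version line.
"${INSTANCE_DIR}/bin/artemis" run
```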
Chapter 4. November 2024 | Chapter 4. November 2024 4.1. Product-wide updates 4.1.1. Basic Authorization reaches End-Of-Life Important Red Hat is implementing a crucial security enhancement on our cloud service APIs on console.redhat.com. Beginning December 31, 2024, we will discontinue support for basic authorization as a route of connecting to our services' APIs. This includes the Insights client basic authorization option, which is described as follows: Insights client Basic authentication is not the default authentication mechanism but has been an option for a select set of workflows. If your hosts are using Basic authentication, ensure you switch to certificate authentication instead. This is necessary for those hosts to continue to connect to Insights. Hybrid Cloud Console APIs The Red Hat Hybrid Cloud Console is integrating service accounts with User Access functionality, to support you in transitioning from Basic authentication to token-based authentication. This will provide you with granular control over access permissions and enhance security. See the following article for more details: Transition of Red Hat Hybrid Cloud Console APIs from Basic authentication to token-based authentication using service accounts 4.1.2. Published blogs and resources Video: OpenShift incident detection by John Spinks (November 5, 2024) Article: Ability to export a list of registered inventory systems (November 26, 2024) Blog: Red Hat OpenShift Incident Detection uses analytics to help you quickly detect issues by McKibbin Brady (November 12, 2024) Updated cheat sheet: Red Hat Insights API Cheat Sheet by Jerome Marc (November 26, 2024) 4.2. Red Hat Insights for Red Hat Enterprise Linux 4.3. General We are proud to announce the Insights proxy service. Insights proxy is a lightweight intermediary solution, designed to simplify connectivity between your environment and Insights services. This solution offers you enhanced security, seamless integration, and improved performance. It accomplishes this by managing data traffic between your systems and Red Hat services. It is ideal in high-security environments because it eliminates the need for a direct Internet connection and exerts control over data transfers. See the following for more details: Insights proxy Technical Preview 4.4. Advisor New recommendations The Insights advisor service now detects and recommends solutions for the following issues: System reboot fails after the leapp upgrade due to a regression bug in leapp Filesystems cannot be auto mounted during booting when the mount point is a symbolic link in the /etc/fstab The PostgresSQL database performance is not optimal because the best practices are not applied The filesystem type that is not supported by SAP is being used for the running SAP HANA Kernel panic will occur on edge computing systems after reboot when closing a removed sg device due to a known bug in the default kernel PCP service fails to start on edge computing systems because the pcp package is corrupted Setting the LD_LIBRARY_PATH variable in the global environment files is not recommended LVM is malfunctioning on edge computing systems because the lvm2 package is corrupted The leapp upgrade fails when the /var/log/ directory is a symbolic link 4.5. Compliance API version 2 is now live A refresh of the compliance API version 2 is now available. 
The refresh includes the following enhancements: Adding one or more systems to an existing policy using the Insights client command line interface (CLI) Creating multiple policy types for the same major RHEL version 4.6. Image Builder Support for RHEL 10 public beta Image Builder can now build images of RHEL 10, public beta for testing and evaluation. This includes support for physical, all hybrid cloud image types, and Microsoft Windows Subsystem for Linux (WSL) images. Support for generation 2 Azure images Image Builder has added support for Azure's generation 2 image types. A hybrid boot loader approach accommodates both generation 1 and 2. When importing the image into Azure, you are able to choose which version. This is an important decision since generation version is immutable. Azure generation 2 images feature increased memory, OS disks > 2 tebibyte (TiB), and virtualized persistent memory (vPMEM). The images create a Unified Extensible Firmware Interface (UEFI) boot loader compatible with Azure's Secure Boot and Trusted Platform Module (TPM) implementations. To learn more about Azure's generation 2 images, see the following: Support for Generation 2 VMs on Azure Incorporation of compliance's tailored policies Image Builder can now incorporate tailored security policies generated by the compliance service. This allows you to create your own custom security compliance requirements. The integration of Image Builder and compliance helps you to configure, deploy, and report on regulatory compliance requirements with minimal friction. You can use this feature by enabling preview mode. 4.7. Inventory Service account authentication for Ansible inventory plugin The latest Insights collection is now included in the execution environment container images, for Ansible Automation Platform (AAP) (e.g. the default ansible-automation-platform-25/ee-supported-rhel8 in AAP 2.5). This update enhances your service accounts with support for token-based authentication. Pull the latest image in your current AAP environment to start using this feature. See the following for more details: Red Hat Insights collection Ansible Automation Platform supported execution environment Note Red Hat Hybrid Cloud Console APIs are transitioning from Basic authentication to token-based authentication using service accounts. See the following for more details: Transition of Red Hat Hybrid Cloud Console APIs from Basic authentication to token-based authentication via service accounts 4.8. Insights for OpenShift Container Platform 4.8.1. Advisor Rapid recommendations Rapid recommendations is an enhancement for the conditional gathering functionality. It enables the Insights operator to be dynamically updated with data collection specifications. This enables us to quickly deliver new recommendations without updating the operator or cluster version. 4.8.2. Cost Management Cost analysis of OpenShift Virtualization We are releasing this feature as a preview that includes the cost of CPU and memory. Cost Management now calculates the cost of your virtual machines running on OpenShift Virtualization. Cost data is displayed for the following: All virtual machines All operating systems (including third-party) All environments (OpenShift on-premise, ROSA, and so on). Additionally, a new virtualization tab has been added to the OpenShift cluster, node and project views. Storage costs will be calculated in the near future. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/release_notes/november-2024 |
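Because the release notes above retire Basic authentication for console.redhat.com APIs, a short sketch of the replacement flow may help. It assumes a service account already created in the Hybrid Cloud Console with suitable User Access permissions; the token endpoint follows the standard OAuth 2.0 client-credentials flow for Red Hat service accounts, and the inventory query is only one example call, so verify both against the transition article linked above.

```bash
#!/usr/bin/env bash
# Placeholder credentials for a Hybrid Cloud Console service account.
CLIENT_ID="your-service-account-client-id"
CLIENT_SECRET="your-service-account-secret"

# Exchange the client credentials for a short-lived bearer token.
TOKEN=$(curl -s -X POST \
  "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \
  -d grant_type=client_credentials \
  -d client_id="${CLIENT_ID}" \
  -d client_secret="${CLIENT_SECRET}" | jq -r '.access_token')

# Call a console.redhat.com API with the token instead of Basic authentication,
# for example counting registered inventory systems.
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://console.redhat.com/api/inventory/v1/hosts?per_page=1" | jq '.total'
```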
Chapter 31. Publishing a Service | Chapter 31. Publishing a Service Abstract When you want to deploy a JAX-WS service as a standalone Java application, you must explicitly implement the code that publishes the service provider. 31.1. When to Publish a Service Apache CXF provides a number of ways to publish a service as a service provider. How you publish a service depends on the deployment environment you are using. Many of the containers supported by Apache CXF do not require writing logic for publishing endpoints. There are two exceptions: deploying a server as a standalone Java application deploying a server into an OSGi container without Blueprint For detailed information on deploying applications into the supported containers see Part IV, "Configuring Web Service Endpoints" . 31.2. APIs Used to Publish a Service Overview The javax.xml.ws.Endpoint class does the work of publishing a JAX-WS service provider. To publish an endpoint, do the following: Create an Endpoint object for your service provider. Publish the endpoint. Stop the endpoint when the application shuts down. The Endpoint class provides methods for creating and publishing service providers. It also provides a method that can create and publish a service provider in a single method call. Instantiating a service provider A service provider is instantiated using an Endpoint object. You instantiate an Endpoint object for your service provider using one of the following methods: static Endpoint create(Object implementor) This create() method returns an Endpoint for the specified service implementation. The Endpoint object is created using the information provided by the implementation class' javax.xml.ws.BindingType annotation, if it is present. If the annotation is not present, the Endpoint uses a default SOAP 1.1/HTTP binding. static Endpoint create(URI bindingID, Object implementor) This create() method returns an Endpoint object for the specified implementation object using the specified binding. This method overrides the binding information provided by the javax.xml.ws.BindingType annotation, if it is present. If the bindingID cannot be resolved, or it is null , the binding specified in the javax.xml.ws.BindingType is used to create the Endpoint . If neither the bindingID nor the javax.xml.ws.BindingType can be used, the Endpoint is created using a default SOAP 1.1/HTTP binding. static Endpoint publish(String address, Object implementor) The publish() method creates an Endpoint object for the specified implementation, and publishes it. The binding used for the Endpoint object is determined by the URL scheme of the provided address . The list of bindings available to the implementation is scanned for a binding that supports the URL scheme. If one is found, the Endpoint object is created and published. If one is not found, the method fails. Using publish() is the same as invoking one of the create() methods, and then invoking the publish() method described in the section called "Publishing a service provider" . Important The implementation object passed to any of the Endpoint creation methods must either be an instance of a class annotated with javax.jws.WebService and meeting the requirements for being an SEI implementation or it must be an instance of a class annotated with javax.xml.ws.WebServiceProvider and implementing the Provider interface. Publishing a service provider You can publish a service provider using either of the following Endpoint methods: publish(String address) This publish() method publishes the service provider at the address specified.
Important The address's URL scheme must be compatible with one of the service provider's bindings. publish(Object serverContext) This publish() method publishes the service provider based on the information provided in the specified server context. The server context must define an address for the endpoint, and the context must also be compatible with one of the service provider's available bindings. Stopping a published service provider When the service provider is no longer needed you should stop it using its stop() method. The stop() method, shown in Example 31.1, "Method for Stopping a Published Endpoint" , shuts down the endpoint and cleans up any resources it is using. Example 31.1. Method for Stopping a Published Endpoint void stop() Important Once the endpoint is stopped it cannot be republished. 31.3. Publishing a Service in a Plain Java Application Overview When you want to deploy your application as a plain Java application you need to implement the logic for publishing your endpoints in the application's main() method. Apache CXF provides you with two options for writing your application's main() method. use the main() method generated by the wsdl2java tool write a custom main() method that publishes the endpoints Generating a Server Mainline The code generator's -server flag makes the tool generate a simple server mainline. The generated server mainline, as shown in Example 31.2, "Generated Server Mainline" , publishes one service provider for each port element in the specified WSDL contract. For more information see Section 44.2, "cxf-codegen-plugin" . Example 31.2, "Generated Server Mainline" shows a generated server mainline. Example 31.2. Generated Server Mainline The code in Example 31.2, "Generated Server Mainline" does the following: Instantiates a copy of the service implementation object. Creates the address for the endpoint based on the contents of the address child of the wsdl:port element in the endpoint's contract. Publishes the endpoint. Writing a Server Mainline If you used the Java-first development model or you do not want to use the generated server mainline you can write your own. To write your server mainline you must do the following: Create a javax.xml.ws.Endpoint object for the service provider, as described in the section called "Instantiating a service provider" . Create an optional server context to use when publishing the service provider. Publish the service provider using one of the publish() methods, as described in the section called "Publishing a service provider" . Stop the service provider when the application is ready to exit. Example 31.3, "Custom Server Mainline" shows the code for publishing a service provider. Example 31.3. Custom Server Mainline
A bundle activator is used by the OSGi container to create the resources for a bundle when it is started. The container also uses the bundle activator to clean up the bundle's resources when it is stopped. The bundle activator interface You create a bundle activator for your application by implementing the org.osgi.framework.BundleActivator interface. The BundleActivator interface, shown in Example 31.4, "Bundle Activator Interface" , has two methods that need to be implemented. Example 31.4. Bundle Activator Interface The start() method is called by the container when it starts the bundle. This is where you instantiate and publish the endpoints. The stop() method is called by the container when it stops the bundle. This is where you would stop the endpoints. Implementing the start method The bundle activator's start method is where you publish your endpoints. To publish your endpoints the start method must do the following: Create a javax.xml.ws.Endpoint object for the service provider, as described in the section called "Instantiating a service provider" . Create an optional server context to use when publishing the service provider. Publish the service provider using one of the publish() methods, as described in the section called "Publishing a service provider" . Example 31.5, "Bundle Activator Start Method for Publishing an Endpoint" shows code for publishing a service provider. Example 31.5. Bundle Activator Start Method for Publishing an Endpoint The code in Example 31.5, "Bundle Activator Start Method for Publishing an Endpoint" does the following: Instantiates a copy of the service's implementation object. Creates an unpublished Endpoint for the service implementation. Publishes the service provider at http://localhost:9000/SoapContext/SoapPort . Implementing the stop method The bundle activator's stop method is where you clean up the resources used by your application. Its implementation should include logic for stopping all of the endpoints published by the application. Example 31.6, "Bundle Activator Stop Method for Stopping an Endpoint" shows a stop method for stopping a published endpoint. Example 31.6. Bundle Activator Stop Method for Stopping an Endpoint Informing the container You must inform the container that the application's bundle includes a bundle activator. You do this by adding the Bundle-Activator property to the bundle's manifest. This property tells the container which class in the bundle to use when activating the bundle. Its value is the fully qualified name of the class implementing the bundle activator. Example 31.7, "Bundle Activator Manifest Entry" shows a manifest entry for a bundle whose activator is implemented by the class com.widgetvendor.osgi.widgetActivator . Example 31.7. Bundle Activator Manifest Entry | [
"package org.apache.hello_world_soap_http; import javax.xml.ws.Endpoint; public class GreeterServer { protected GreeterServer() throws Exception { System.out.println(\"Starting Server\"); Object implementor = new GreeterImpl(); String address = \"http://localhost:9000/SoapContext/SoapPort\"; Endpoint.publish(address, implementor); } public static void main(String args[]) throws Exception { new GreeterServer(); System.out.println(\"Server ready...\"); Thread.sleep(5 * 60 * 1000); System.out.println(\"Server exiting\"); System.exit(0); } }",
"package org.apache.hello_world_soap_http; import javax.xml.ws.Endpoint; public class GreeterServer { protected GreeterServer() throws Exception { } public static void main(String args[]) throws Exception { GreeterImpl impl = new GreeterImpl(); Endpoint endpt.create(impl); endpt.publish(\"http://localhost:9000/SoapContext/SoapPort\"); boolean done = false; while(!done) { } endpt.stop(); System.exit(0); } }",
"interface BundleActivator { public void start(BundleContext context) throws java.lang.Exception; public void stop(BundleContext context) throws java.lang.Exception; }",
"package com.widgetvendor.osgi; import javax.xml.ws.Endpoint; import org.osgi.framework.BundleActivator; import org.osgi.framework.BundleContext; public class widgetActivator implements BundleActivator { private Endpoint endpt; public void start(BundleContext context) { WidgetOrderImpl impl = new WidgetOrderImpl(); endpt = Endpoint.create(impl); endpt.publish(\"http://localhost:9000/SoapContext/SoapPort\"); } }",
"package com.widgetvendor.osgi; import javax.xml.ws.Endpoint; import org.osgi.framework.BundleActivator; import org.osgi.framework.BundleContext; public class widgetActivator implements BundleActivator { private Endpoint endpt; public void stop(BundleContext context) { endpt.stop(); } }",
"Bundle-Activator: com.widgetvendor.osgi.widgetActivator"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXWSServicePublish |
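To round out the Java examples in this chapter with the command-line side, the following sketch shows how the generated server mainline of Example 31.2 is typically produced and run. The -server, -d, and -p flags belong to the Apache CXF wsdl2java tool referenced in Section 44.2; the WSDL file name, package, directory layout, and CXF_HOME classpath are assumptions for illustration only.

```bash
# Generate the SEI, data types, and a simple server mainline from the contract
# (WSDL file name and package are placeholders).
wsdl2java -server -d src/generated -p org.apache.hello_world_soap_http hello_world.wsdl

# Compile the generated code together with your Greeter implementation, then run
# the generated GreeterServer mainline on the CXF runtime classpath.
mkdir -p classes
javac -cp "$CXF_HOME/lib/*" -d classes $(find src -name '*.java')
java -cp "$CXF_HOME/lib/*:classes" org.apache.hello_world_soap_http.GreeterServer
```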
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuration_reference/proc_providing-feedback-on-red-hat-documentation
Troubleshooting | Troubleshooting Red Hat build of MicroShift 4.18 Troubleshooting common issues Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/troubleshooting/index |
Chapter 7. Cluster topology and colocation | Chapter 7. Cluster topology and colocation Understand the topology needed and the factors to consider for colocation in an edge cluster. For information on cluster topology, hyperconvergence with OpenStack, colocating nodes on OpenStack, and the limitations of the OpenStack minimum configuration, see Ceph configuration overrides for HCI . For more information on colocation, see Colocation . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/edge_guide/cluster-topology_edge
Chapter 1. Preparing for bare metal cluster installation | Chapter 1. Preparing for bare metal cluster installation 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Planning a bare metal cluster for OpenShift Virtualization If you will use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster. If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation . This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster. Note You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability. Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode. If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform. Additional resources Preparing your cluster for OpenShift Virtualization Getting started with OpenShift Virtualization About Single Root I/O Virtualization (SR-IOV) hardware networks Connecting a virtual machine to an SR-IOV network 1.3. NIC partitioning for SR-IOV devices (Technology Preview) OpenShift Container Platform can be deployed on a server with a dual port network interface card (NIC). You can partition a single, high-speed dual port NIC into multiple virtual functions (VFs) and enable SR-IOV. Note Currently, it is not possible to assign virtual functions (VF) for system services such as OVN-Kubernetes and assign other VFs created from the same physical function (PF) to pods connected to the SR-IOV Network Operator. This feature supports the use of bonds for high availability with the Link Aggregation Control Protocol (LACP). Note Only one LACP can be declared by physical NIC. An OpenShift Container Platform cluster can be deployed on a bond interface with 2 VFs on 2 physical functions (PFs) using the following methods: Agent-based installer Note The minimum required version of nmstate is: 1.4.2-4 for RHEL 8 versions 2.2.7 for RHEL 9 versions Installer-provisioned infrastructure installation User-provisioned infrastructure installation Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources Example: Bonds and SR-IOV dual-nic node network configuration Optional: Configuring host network interfaces for dual port NIC Bonding multiple SR-IOV network interfaces to a dual port NIC interface 1.4. 
Choosing a method to install OpenShift Container Platform on bare metal The OpenShift Container Platform installation program offers four methods for deploying a cluster: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is the recommended approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the agent-based installer for air-gapped or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the agent-based installer first. Configuration is done with a commandline interface. This approach is ideal for air-gapped or restricted networks. Automated : You can deploy a cluster on installer-provisioned infrastructure and the cluster it maintains. The installer uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters with both connected or air-gapped or restricted networks. Full control : You can deploy a cluster on infrastructure that you prepare and maintain , which provides maximum customizability. You can deploy clusters with both connected or air-gapped or restricted networks. The clusters have the following characteristics: Highly available infrastructure with no single points of failure is available by default. Administrators maintain control over what updates are applied and when. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.4.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on bare metal infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing an installer-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal by using installer provisioning. 1.4.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on bare metal infrastructure that you provision, by using one of the following methods: Installing a user-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal infrastructure that you provision. For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. Installing a user-provisioned bare metal cluster with network customizations : You can install a bare metal cluster on user-provisioned infrastructure with network-customizations. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Most of the network customizations must be applied at the installation stage. Installing a user-provisioned bare metal cluster on a restricted network : You can install a user-provisioned bare metal cluster on a restricted or disconnected network by using a mirror registry. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 
| null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_bare_metal/preparing-to-install-on-bare-metal |
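As a companion to the NIC partitioning and LACP bonding notes in the bare metal planning section above, the following is a minimal sketch of an nmstate-style bond definition over two virtual functions, one per physical port. The interface names, the DHCP choice, and the exact nmstate schema details are assumptions; validate the file with nmstatectl on a test host and compare it against the linked dual port NIC bonding examples before reusing it in an agent-based installation.

```bash
# Write a minimal 802.3ad (LACP) bond definition over two VFs, one from each
# port of the dual port NIC (interface names are placeholders).
cat > bond0-lacp.yaml <<'EOF'
interfaces:
  - name: bond0
    type: bond
    state: up
    ipv4:
      enabled: true
      dhcp: true
    link-aggregation:
      mode: 802.3ad
      port:
        - ens1f0v0
        - ens1f1v0
EOF

# Check and apply the definition with nmstatectl before copying the same
# network configuration into the installer configuration.
sudo nmstatectl apply bond0-lacp.yaml
```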
Chapter 2. Architectures | Chapter 2. Architectures Red Hat Enterprise Linux 9.0 is distributed with the kernel version 5.14.0, which provides support for the following architectures at the minimum required version: AMD and Intel 64-bit architectures (x86-64-v2) The 64-bit ARM architecture (ARMv8.0-A) IBM Power Systems, Little Endian (POWER9) 64-bit IBM Z (z14) Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.0_release_notes/architectures |
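Because RHEL 9 raises the AMD and Intel 64-bit baseline to x86-64-v2, it can help to confirm that existing hardware meets that level before planning a deployment. The check below relies on the dynamic loader's --help output, which lists detected microarchitecture levels on recent glibc builds; on older systems that do not print this information, treat the result as inconclusive rather than as a failure.

```bash
# Ask the dynamic loader which x86-64 microarchitecture levels it detects on
# this CPU (supported on recent glibc versions, including RHEL 9).
/lib64/ld-linux-x86-64.so.2 --help | grep -E 'x86-64-v[234]'

# Output such as "x86-64-v2 (supported, searched)" indicates the CPU meets the
# RHEL 9 minimum for the AMD and Intel 64-bit architectures.
```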
Chapter 2. APIRequestCount [apiserver.openshift.io/v1] | Chapter 2. APIRequestCount [apiserver.openshift.io/v1] Description APIRequestCount tracks requests made to an API. The instance name must be of the form resource.version.group , matching the resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines the characteristics of the resource. status object status contains the observed state of the resource. 2.1.1. .spec Description spec defines the characteristics of the resource. Type object Property Type Description numberOfUsersToReport integer numberOfUsersToReport is the number of users to include in the report. If unspecified or zero, the default is ten. This is default is subject to change. 2.1.2. .status Description status contains the observed state of the resource. Type object Property Type Description conditions array conditions contains details of the current status of this API Resource. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } currentHour object currentHour contains request history for the current hour. This is porcelain to make the API easier to read by humans seeing if they addressed a problem. This field is reset on the hour. last24h array last24h contains request history for the last 24 hours, indexed by the hour, so 12:00AM-12:59 is in index 0, 6am-6:59am is index 6, etc. The index of the current hour is updated live and then duplicated into the requestsLastHour field. last24h[] object PerResourceAPIRequestLog logs request for various nodes. removedInRelease string removedInRelease is when the API will be removed. requestCount integer requestCount is a sum of all requestCounts across all current hours, nodes, and users. 2.1.3. .status.conditions Description conditions contains details of the current status of this API Resource. Type array 2.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. 
For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 2.1.5. .status.currentHour Description currentHour contains request history for the current hour. This is porcelain to make the API easier to read by humans seeing if they addressed a problem. This field is reset on the hour. Type object Property Type Description byNode array byNode contains logs of requests per node. byNode[] object PerNodeAPIRequestLog contains logs of requests to a certain node. requestCount integer requestCount is a sum of all requestCounts across nodes. 2.1.6. .status.currentHour.byNode Description byNode contains logs of requests per node. Type array 2.1.7. .status.currentHour.byNode[] Description PerNodeAPIRequestLog contains logs of requests to a certain node. Type object Property Type Description byUser array byUser contains request details by top .spec.numberOfUsersToReport users. Note that because in the case of an apiserver, restart the list of top users is determined on a best-effort basis, the list might be imprecise. In addition, some system users may be explicitly included in the list. byUser[] object PerUserAPIRequestCount contains logs of a user's requests. nodeName string nodeName where the request are being handled. requestCount integer requestCount is a sum of all requestCounts across all users, even those outside of the top 10 users. 2.1.8. .status.currentHour.byNode[].byUser Description byUser contains request details by top .spec.numberOfUsersToReport users. Note that because in the case of an apiserver, restart the list of top users is determined on a best-effort basis, the list might be imprecise. 
In addition, some system users may be explicitly included in the list. Type array 2.1.9. .status.currentHour.byNode[].byUser[] Description PerUserAPIRequestCount contains logs of a user's requests. Type object Property Type Description byVerb array byVerb details by verb. byVerb[] object PerVerbAPIRequestCount requestCounts requests by API request verb. requestCount integer requestCount of requests by the user across all verbs. userAgent string userAgent that made the request. The same user often has multiple binaries which connect (pods with many containers). The different binaries will have different userAgents, but the same user. In addition, we have userAgents with version information embedded and the userName isn't likely to change. username string userName that made the request. 2.1.10. .status.currentHour.byNode[].byUser[].byVerb Description byVerb details by verb. Type array 2.1.11. .status.currentHour.byNode[].byUser[].byVerb[] Description PerVerbAPIRequestCount requestCounts requests by API request verb. Type object Property Type Description requestCount integer requestCount of requests for verb. verb string verb of API request (get, list, create, etc... ) 2.1.12. .status.last24h Description last24h contains request history for the last 24 hours, indexed by the hour, so 12:00AM-12:59 is in index 0, 6am-6:59am is index 6, etc. The index of the current hour is updated live and then duplicated into the requestsLastHour field. Type array 2.1.13. .status.last24h[] Description PerResourceAPIRequestLog logs request for various nodes. Type object Property Type Description byNode array byNode contains logs of requests per node. byNode[] object PerNodeAPIRequestLog contains logs of requests to a certain node. requestCount integer requestCount is a sum of all requestCounts across nodes. 2.1.14. .status.last24h[].byNode Description byNode contains logs of requests per node. Type array 2.1.15. .status.last24h[].byNode[] Description PerNodeAPIRequestLog contains logs of requests to a certain node. Type object Property Type Description byUser array byUser contains request details by top .spec.numberOfUsersToReport users. Note that because in the case of an apiserver, restart the list of top users is determined on a best-effort basis, the list might be imprecise. In addition, some system users may be explicitly included in the list. byUser[] object PerUserAPIRequestCount contains logs of a user's requests. nodeName string nodeName where the request are being handled. requestCount integer requestCount is a sum of all requestCounts across all users, even those outside of the top 10 users. 2.1.16. .status.last24h[].byNode[].byUser Description byUser contains request details by top .spec.numberOfUsersToReport users. Note that because in the case of an apiserver, restart the list of top users is determined on a best-effort basis, the list might be imprecise. In addition, some system users may be explicitly included in the list. Type array 2.1.17. .status.last24h[].byNode[].byUser[] Description PerUserAPIRequestCount contains logs of a user's requests. Type object Property Type Description byVerb array byVerb details by verb. byVerb[] object PerVerbAPIRequestCount requestCounts requests by API request verb. requestCount integer requestCount of requests by the user across all verbs. userAgent string userAgent that made the request. The same user often has multiple binaries which connect (pods with many containers). The different binaries will have different userAgents, but the same user. 
In addition, we have userAgents with version information embedded and the userName isn't likely to change. username string userName that made the request. 2.1.18. .status.last24h[].byNode[].byUser[].byVerb Description byVerb details by verb. Type array 2.1.19. .status.last24h[].byNode[].byUser[].byVerb[] Description PerVerbAPIRequestCount requestCounts requests by API request verb. Type object Property Type Description requestCount integer requestCount of requests for verb. verb string verb of API request (get, list, create, etc... ) 2.2. API endpoints The following API endpoints are available: /apis/apiserver.openshift.io/v1/apirequestcounts DELETE : delete collection of APIRequestCount GET : list objects of kind APIRequestCount POST : create an APIRequestCount /apis/apiserver.openshift.io/v1/apirequestcounts/{name} DELETE : delete an APIRequestCount GET : read the specified APIRequestCount PATCH : partially update the specified APIRequestCount PUT : replace the specified APIRequestCount /apis/apiserver.openshift.io/v1/apirequestcounts/{name}/status GET : read status of the specified APIRequestCount PATCH : partially update status of the specified APIRequestCount PUT : replace status of the specified APIRequestCount 2.2.1. /apis/apiserver.openshift.io/v1/apirequestcounts HTTP method DELETE Description delete collection of APIRequestCount Table 2.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind APIRequestCount Table 2.2. HTTP responses HTTP code Reponse body 200 - OK APIRequestCountList schema 401 - Unauthorized Empty HTTP method POST Description create an APIRequestCount Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.4. Body parameters Parameter Type Description body APIRequestCount schema Table 2.5. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 201 - Created APIRequestCount schema 202 - Accepted APIRequestCount schema 401 - Unauthorized Empty 2.2.2. /apis/apiserver.openshift.io/v1/apirequestcounts/{name} Table 2.6. Global path parameters Parameter Type Description name string name of the APIRequestCount HTTP method DELETE Description delete an APIRequestCount Table 2.7. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified APIRequestCount Table 2.9. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified APIRequestCount Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified APIRequestCount Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body APIRequestCount schema Table 2.14. 
HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 201 - Created APIRequestCount schema 401 - Unauthorized Empty 2.2.3. /apis/apiserver.openshift.io/v1/apirequestcounts/{name}/status Table 2.15. Global path parameters Parameter Type Description name string name of the APIRequestCount HTTP method GET Description read status of the specified APIRequestCount Table 2.16. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified APIRequestCount Table 2.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified APIRequestCount Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body APIRequestCount schema Table 2.21. 
HTTP responses HTTP code Response body 200 - OK APIRequestCount schema 201 - Created APIRequestCount schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/metadata_apis/apirequestcount-apiserver-openshift-io-v1
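The endpoints above can also be consumed programmatically. The following Python sketch is an illustration rather than part of the original reference; it assumes the official kubernetes Python client and credentials that are allowed to read the cluster-scoped APIRequestCount resources, and it prints each instance's total request count plus the release in which a deprecated API is removed.

# Minimal sketch (assumption: a kubeconfig with read access is available locally).
# Lists APIRequestCount objects via the cluster-scoped
# /apis/apiserver.openshift.io/v1/apirequestcounts endpoint documented above.
from kubernetes import client, config

config.load_kube_config()          # use load_incluster_config() when running in a pod
custom = client.CustomObjectsApi()

result = custom.list_cluster_custom_object(
    group="apiserver.openshift.io",
    version="v1",
    plural="apirequestcounts",
)

for item in result.get("items", []):
    name = item["metadata"]["name"]            # resource.version.group
    status = item.get("status", {})
    total = status.get("requestCount", 0)      # sum across hours, nodes, and users
    removed = status.get("removedInRelease")   # set only for APIs scheduled for removal
    note = f" (removed in {removed})" if removed else ""
    print(f"{name}: {total} requests{note}")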
2.7. Package Layout | 2.7. Package Layout Each Software Collection's layout consists of the metapackage, which installs a subset of other packages, and a number of the Software Collection's packages, which are installed within the Software Collection namespace. 2.7.1. Metapackage Each Software Collection includes a metapackage, which installs a subset of the Software Collection's packages that are essential for the user to perform most common tasks with the Software Collection. For example, the essential packages can provide the Perl language interpreter, but no Perl extension modules. The metapackage contains a basic file system hierarchy and delivers a number of the Software Collection's scriptlets. The purpose of the metapackage is to make sure that all essential packages in the Software Collection are properly installed and that it is possible to enable the Software Collection. The metapackage produces the following packages that are also part of the Software Collection: The main package: %name The main package in the Software Collection contains dependencies of the base packages, which are included in the Software Collection. The main package does not contain any files. When specifying dependencies for your Software Collection's packages, ensure that no other package in your Software Collection depends on the main package. The purpose of the main package is to install only those packages that are essential for the user to perform most common tasks with the Software Collection. Normally, the main package does not specify any build time dependencies (for instance, packages that are only build time dependencies of another Software Collection's packages). For example, if the name of the Software Collection is myorganization-ruby193 , then the main package macro is expanded to: The runtime subpackage: %name -runtime The runtime subpackage in the Software Collection owns the Software Collection's file system and delivers the Software Collection's scriptlets. This package needs to be installed for the user to be able to use the Software Collection. For example, if the name of the Software Collection is myorganization-ruby193 , then the runtime subpackage macro is expanded to: The build subpackage: %name -build The build subpackage in the Software Collection delivers the Software Collection's build configuration. It contains RPM macros needed for building packages into the Software Collection. The build subpackage is optional and can be excluded from the Software Collection. For example, if the name of the Software Collection is myorganization-ruby193 , then the build subpackage macro is expanded to: The contents of the myorganization-ruby193-build subpackage are shown below: The syspaths subpackage: %name -syspaths The syspaths subpackage in the Software Collection provides an optional way to install convenient shell wrappers and symbolic links into the standard path, thus altering the base system installation, but making binary files in the Software Collection packages easier to use. For example, if the name of the Software Collection is myorganization-ruby193 , then the syspaths subpackage macro is expanded to: For more information about the syspaths subpackage, see Section 3.3, "Providing syspaths Subpackages" . The scldevel subpackage: %name -scldevel The scldevel subpackage in the %name Software Collection contains development files, which are useful when developing packages of another Software Collection that depends on the %name Software Collection. 
The scldevel subpackage is optional and can be excluded from the %name Software Collection. For example, if the name of the Software Collection is myorganization-ruby193 , then the scldevel subpackage macro is expanded to: For more information about the scldevel subpackage, see Section 4.1, "Providing an scldevel Subpackage" . 2.7.2. Creating a Metapackage When creating a new metapackage: Define the following macros at the top of the metapackage spec file, above the %scl_package macro: scl_name_prefix that specifies the provider's name to be used as a prefix in your Software Collection's name, for example, myorganization -. This is different from _scl_prefix , which specifies the root of your Software Collection but also uses the provider's name. See Section 2.4, "The Software Collection Prefix" for more information. scl_name_base that specifies the base name of your Software Collection, for example, ruby . scl_name_version that specifies the version of your Software Collection, for example, 193 . You are advised to define a Software Collection macro nfsmountable that changes the location of configuration and state files and makes your Software Collection usable over NFS. For more information, see Section 3.1, "Using Software Collections over NFS" . Consider specifying all packages in your Software Collection that are essential for the Software Collection run time as dependencies of the metapackage. That way you can ensure that the packages are installed with the Software Collection metapackage. You are advised to add Requires: scl-utils-build to the build subpackage. You are not required to use conditionals for Software Collection-specific macros in the metapackage. Include any path redefinition that the packages in your Software Collection may require in the enable scriptlet. For information on commonly used path redefinitions, see Section 2.9, "Commonly Used Path Redefinitions" . Always make sure that the metapackage contains the %setup macro in the %prep section, otherwise building the Software Collection will fail. If you do not need to use a particular option with the %setup macro, add the %setup -c -T command to the %prep section. This is because the %setup macro defines and creates the %buildsubdir directory, which is normally used for storing temporary files at build time. If you do not define %setup in your Software Collection packages, files in the %buildsubdir directory will be overwritten, causing the build to fail. Add any macros you need to use to the macros.%{scl}-config file in the build subpackage. Example of the Metapackage To get an idea of what a typical metapackage for a Software Collection named myorganization-ruby193 looks like, see the following example: %global scl_name_prefix myorganization- %global scl_name_base ruby %global scl_name_version 193 %global scl %{scl_name_prefix}%{scl_name_base}%{scl_name_version} # Optional but recommended: define nfsmountable %global nfsmountable 1 %global _scl_prefix /opt/myorganization %scl_package %scl Summary: Package that installs %scl Name: %scl_name Version: 1 Release: 1%{?dist} License: GPLv2+ Requires: %{scl_prefix}less BuildRequires: scl-utils-build %description This is the main package for %scl Software Collection. %package runtime Summary: Package that handles %scl Software Collection. Requires: scl-utils %description runtime Package shipping essential scripts to work with %scl Software Collection. 
%package build Summary: Package shipping basic build configuration Requires: scl-utils-build %description build Package shipping essential configuration macros to build %scl Software Collection. # This is only needed when you want to provide an optional scldevel subpackage %package scldevel Summary: Package shipping development files for %scl %description scldevel Package shipping development files, especially useful for development of packages depending on %scl Software Collection. %prep %setup -c -T %install %scl_install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export PATH="%{_bindir}:%{_sbindir}\USD{PATH:+:\USD{PATH}}" export LD_LIBRARY_PATH="%{_libdir}\USD{LD_LIBRARY_PATH:+:\USD{LD_LIBRARY_PATH}}" export MANPATH="%{_mandir}:\USD{MANPATH:-}" export PKG_CONFIG_PATH="%{_libdir}/pkgconfig\USD{PKG_CONFIG_PATH:+:\USD{PKG_CONFIG_PATH}}" EOF # This is only needed when you want to provide an optional scldevel subpackage cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel << EOF %%scl_%{scl_name_base} %{scl} %%scl_prefix_%{scl_name_base} %{scl_prefix} EOF # Install the generated man page mkdir -p %{buildroot}%{_mandir}/man7/ install -p -m 644 %{scl_name}.7 %{buildroot}%{_mandir}/man7/ %files %files runtime -f filelist %scl_files %files build %{_root_sysconfdir}/rpm/macros.%{scl}-config %files scldevel %{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel %changelog * Fri Aug 30 2013 John Doe <[email protected]> 1-1 - Initial package | [
"myorganization-ruby193",
"myorganization-ruby193-runtime",
"myorganization-ruby193-build",
"cat /etc/rpm/macros.ruby193-config %scl myorganization-ruby193",
"myorganization-ruby193-syspaths",
"myorganization-ruby193-scldevel",
"%global scl_name_prefix myorganization- %global scl_name_base ruby %global scl_name_version 193 %global scl %{scl_name_prefix}%{scl_name_base}%{scl_name_version} Optional but recommended: define nfsmountable %global nfsmountable 1 %global _scl_prefix /opt/myorganization %scl_package %scl Summary: Package that installs %scl Name: %scl_name Version: 1 Release: 1%{?dist} License: GPLv2+ Requires: %{scl_prefix}less BuildRequires: scl-utils-build %description This is the main package for %scl Software Collection. %package runtime Summary: Package that handles %scl Software Collection. Requires: scl-utils %description runtime Package shipping essential scripts to work with %scl Software Collection. %package build Summary: Package shipping basic build configuration Requires: scl-utils-build %description build Package shipping essential configuration macros to build %scl Software Collection. This is only needed when you want to provide an optional scldevel subpackage %package scldevel Summary: Package shipping development files for %scl %description scldevel Package shipping development files, especially useful for development of packages depending on %scl Software Collection. %prep %setup -c -T %install %scl_install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export PATH=\"%{_bindir}:%{_sbindir}\\USD{PATH:+:\\USD{PATH}}\" export LD_LIBRARY_PATH=\"%{_libdir}\\USD{LD_LIBRARY_PATH:+:\\USD{LD_LIBRARY_PATH}}\" export MANPATH=\"%{_mandir}:\\USD{MANPATH:-}\" export PKG_CONFIG_PATH=\"%{_libdir}/pkgconfig\\USD{PKG_CONFIG_PATH:+:\\USD{PKG_CONFIG_PATH}}\" EOF This is only needed when you want to provide an optional scldevel subpackage cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel << EOF %%scl_%{scl_name_base} %{scl} %%scl_prefix_%{scl_name_base} %{scl_prefix} EOF Install the generated man page mkdir -p %{buildroot}%{_mandir}/man7/ install -p -m 644 %{scl_name}.7 %{buildroot}%{_mandir}/man7/ %files %files runtime -f filelist %scl_files %files build %{_root_sysconfdir}/rpm/macros.%{scl}-config %files scldevel %{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel %changelog * Fri Aug 30 2013 John Doe <[email protected]> 1-1 - Initial package"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-Package_Layout |
Chapter 2. Configuring logging | Chapter 2. Configuring logging This chapter describes how to configure logging for various Ceph subsystems. Important Logging is resource intensive. Also, verbose logging can generate a huge amount of data in a relatively short time. If you are encountering problems in a specific subsystem of the cluster, enable logging only of that subsystem. See Section 2.1, "Ceph subsystems" for more information. In addition, consider setting up a rotation of log files. See Section 2.4, "Accelerating log rotation" for details. Once you fix any problems you encounter, change the subsystems log and memory levels to their default values. See Appendix A, Ceph subsystems default logging level values for a list of all Ceph subsystems and their default values. You can configure Ceph logging by: Using the ceph command at runtime. This is the most common approach. See Section 2.2, "Configuring logging at runtime" for details. Updating the Ceph configuration file. Use this approach if you are encountering problems when starting the cluster. See Section 2.3, "Configuring logging in configuration file" for details. Prerequisites A running Red Hat Ceph Storage cluster. 2.1. Ceph subsystems This section contains information about Ceph subsystems and their logging levels. Understanding Ceph Subsystems and Their Logging Levels Ceph consists of several subsystems. Each subsystem has a logging level of its: Output logs that are stored by default in /var/log/ceph/ CLUSTER_FSID / directory (log level) Logs that are stored in a memory cache (memory level) In general, Ceph does not send logs stored in memory to the output logs unless: A fatal signal is raised An assert in source code is triggered You request it You can set different values for each of these subsystems. Ceph logging levels operate on a scale of 1 to 20 , where 1 is terse and 20 is verbose. Use a single value for the log level and memory level to set them both to the same value. For example, debug_osd = 5 sets the debug level for the ceph-osd daemon to 5 . To use different values for the output log level and the memory level, separate the values with a forward slash ( / ). For example, debug_mon = 1/5 sets the debug log level for the ceph-mon daemon to 1 and its memory log level to 5 . Table 2.1. Ceph Subsystems and the Logging Default Values Subsystem Log Level Memory Level Description asok 1 5 The administration socket auth 1 5 Authentication client 0 5 Any application or library that uses librados to connect to the cluster bluestore 1 5 The BlueStore OSD backend journal 1 5 The OSD journal mds 1 5 The Metadata Servers monc 0 5 The Monitor client handles communication between most Ceph daemons and Monitors mon 1 5 Monitors ms 0 5 The messaging system between Ceph components osd 0 5 The OSD Daemons paxos 0 5 The algorithm that Monitors use to establish a consensus rados 0 5 Reliable Autonomic Distributed Object Store, a core component of Ceph rbd 0 5 The Ceph Block Devices rgw 1 5 The Ceph Object Gateway Example Log Outputs The following examples show the type of messages in the logs when you increase the verbosity for the Monitors and OSDs. Monitor Debug Settings Example Log Output of Monitor Debug Settings OSD Debug Settings Example Log Output of OSD Debug Settings Additional Resources Configuring logging at runtime Configuring logging in configuration file 2.2. Configuring logging at runtime You can configure the logging of Ceph subsystems at system runtime to help troubleshoot any issues that might occur. 
Prerequisites A running Red Hat Ceph Storage cluster. Access to Ceph debugger. Procedure To activate the Ceph debugging output, dout() , at runtime: Replace: TYPE with the type of Ceph daemons ( osd , mon , or mds ) ID with a specific ID of the Ceph daemon. Alternatively, use * to apply the runtime setting to all daemons of a particular type. SUBSYSTEM with a specific subsystem. VALUE with a number from 1 to 20 , where 1 is terse and 20 is verbose. For example, to set the log level for the OSD subsystem on the OSD named osd.0 to 0 and the memory level to 5: To see the configuration settings at runtime: Log in to the host with a running Ceph daemon, for example, ceph-osd or ceph-mon . Display the configuration: Syntax Example Additional Resources See Ceph subsystems for details. See Configuration logging in configuration file for details. The Ceph Debugging and Logging Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 6. 2.3. Configuring logging in configuration file Configure Ceph subsystems to log informational, warning, and error messages to the log file. You can specify the debugging level in the Ceph configuration file, by default /etc/ceph/ceph.conf . Prerequisites A running Red Hat Ceph Storage cluster. Procedure To activate Ceph debugging output, dout() at boot time, add the debugging settings to the Ceph configuration file. For subsystems common to each daemon, add the settings under the [global] section. For subsystems for particular daemons, add the settings under a daemon section, such as [mon] , [osd] , or [mds] . Example Additional Resources Ceph subsystems Configuring logging at runtime The Ceph Debugging and Logging Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 6 2.4. Accelerating log rotation Increasing debugging level for Ceph components might generate a huge amount of data. If you have almost full disks, you can accelerate log rotation by modifying the Ceph log rotation file at /etc/logrotate.d/ceph-<fsid> . The Cron job scheduler uses this file to schedule log rotation. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Add the size setting after the rotation frequency to the log rotation file: For example, to rotate a log file when it reaches 500 MB: Note The size value can be expressed as '500 MB' or '500M'. Open the crontab editor: Add an entry to check the /etc/logrotate.d/ceph-<fsid> file. For example, to instruct Cron to check /etc/logrotate.d/ceph-<fsid> every 30 minutes: 2.5. Creating and collecting operation logs for Ceph Object Gateway User identity information is added to the operation log output. This is used to enable customers to access this information for auditing of S3 access. Track user identities reliably by S3 request in all versions of the Ceph Object Gateway operation log. Procedure Find where the logs are located: Syntax Example List the logs within the specified location: Syntax Example List the current buckets: Example Create a bucket: Syntax Example List the current logs: Syntax Example Collect the logs: Syntax Example | [
"debug_ms = 5 debug_mon = 20 debug_paxos = 20 debug_auth = 20",
"2022-05-12 12:37:04.278761 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 e322: 2 osds: 2 up, 2 in 2022-05-12 12:37:04.278792 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 min_last_epoch_clean 322 2022-05-12 12:37:04.278795 7f45a9afc700 10 mon.cephn2@0(leader).log v1010106 log 2022-05-12 12:37:04.278799 7f45a9afc700 10 mon.cephn2@0(leader).auth v2877 auth 2022-05-12 12:37:04.278811 7f45a9afc700 20 mon.cephn2@0(leader) e1 sync_trim_providers 2022-05-12 12:37:09.278914 7f45a9afc700 11 mon.cephn2@0(leader) e1 tick 2022-05-12 12:37:09.278949 7f45a9afc700 10 mon.cephn2@0(leader).pg v8126 v8126: 64 pgs: 64 active+clean; 60168 kB data, 172 MB used, 20285 MB / 20457 MB avail 2022-05-12 12:37:09.278975 7f45a9afc700 10 mon.cephn2@0(leader).paxosservice(pgmap 7511..8126) maybe_trim trim_to 7626 would only trim 115 < paxos_service_trim_min 250 2022-05-12 12:37:09.278982 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 e322: 2 osds: 2 up, 2 in 2022-05-12 12:37:09.278989 7f45a9afc700 5 mon.cephn2@0(leader).paxos(paxos active c 1028850..1029466) is_readable = 1 - now=2021-08-12 12:37:09.278990 lease_expire=0.000000 has v0 lc 1029466 . 2022-05-12 12:59:18.769963 7f45a92fb700 1 -- 192.168.0.112:6789/0 <== osd.1 192.168.0.114:6800/2801 5724 ==== pg_stats(0 pgs tid 3045 v 0) v1 ==== 124+0+0 (2380105412 0 0) 0x5d96300 con 0x4d5bf40 2022-05-12 12:59:18.770053 7f45a92fb700 1 -- 192.168.0.112:6789/0 --> 192.168.0.114:6800/2801 -- pg_stats_ack(0 pgs tid 3045) v1 -- ?+0 0x550ae00 con 0x4d5bf40 2022-05-12 12:59:32.916397 7f45a9afc700 0 mon.cephn2@0(leader).data_health(1) update_stats avail 53% total 1951 MB, used 780 MB, avail 1053 MB . 2022-05-12 13:01:05.256263 7f45a92fb700 1 -- 192.168.0.112:6789/0 --> 192.168.0.113:6800/2410 -- mon_subscribe_ack(300s) v1 -- ?+0 0x4f283c0 con 0x4d5b440",
"debug_ms = 5 debug_osd = 20",
"2022-05-12 11:27:53.869151 7f5d55d84700 1 -- 192.168.17.3:0/2410 --> 192.168.17.4:6801/2801 -- osd_ping(ping e322 stamp 2021-08-12 11:27:53.869147) v2 -- ?+0 0x63baa00 con 0x578dee0 2022-05-12 11:27:53.869214 7f5d55d84700 1 -- 192.168.17.3:0/2410 --> 192.168.0.114:6801/2801 -- osd_ping(ping e322 stamp 2021-08-12 11:27:53.869147) v2 -- ?+0 0x638f200 con 0x578e040 2022-05-12 11:27:53.870215 7f5d6359f700 1 -- 192.168.17.3:0/2410 <== osd.1 192.168.0.114:6801/2801 109210 ==== osd_ping(ping_reply e322 stamp 2021-08-12 11:27:53.869147) v2 ==== 47+0+0 (261193640 0 0) 0x63c1a00 con 0x578e040 2022-05-12 11:27:53.870698 7f5d6359f700 1 -- 192.168.17.3:0/2410 <== osd.1 192.168.17.4:6801/2801 109210 ==== osd_ping(ping_reply e322 stamp 2021-08-12 11:27:53.869147) v2 ==== 47+0+0 (261193640 0 0) 0x6313200 con 0x578dee0 . 2022-05-12 11:28:10.432313 7f5d6e71f700 5 osd.0 322 tick 2022-05-12 11:28:10.432375 7f5d6e71f700 20 osd.0 322 scrub_random_backoff lost coin flip, randomly backing off 2022-05-12 11:28:10.432381 7f5d6e71f700 10 osd.0 322 do_waiters -- start 2022-05-12 11:28:10.432383 7f5d6e71f700 10 osd.0 322 do_waiters -- finish",
"ceph tell TYPE . ID injectargs --debug- SUBSYSTEM VALUE [-- NAME VALUE ]",
"ceph tell osd.0 injectargs --debug-osd 0/5",
"ceph daemon NAME config show | less",
"ceph daemon osd.0 config show | less",
"[global] debug_ms = 1/5 [mon] debug_mon = 20 debug_paxos = 1/5 debug_auth = 2 [osd] debug_osd = 1/5 debug_monc = 5/20 [mds] debug_mds = 1",
"rotate 7 weekly size SIZE compress sharedscripts",
"rotate 7 weekly size 500 MB compress sharedscripts size 500M",
"crontab -e",
"30 * * * * /usr/sbin/logrotate /etc/logrotate.d/ceph-d3bb5396-c404-11ee-9e65-002590fc2a2e >/dev/null 2>&1",
"logrotate -f",
"logrotate -f /etc/logrotate.d/ceph-12ab345c-1a2b-11ed-b736-fa163e4f6220",
"ll LOG_LOCATION",
"ll /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220 -rw-r--r--. 1 ceph ceph 412 Sep 28 09:26 opslog.log.1.gz",
"/usr/local/bin/s3cmd ls",
"/usr/local/bin/s3cmd mb s3:// NEW_BUCKET_NAME",
"/usr/local/bin/s3cmd mb s3://bucket1 Bucket `s3://bucket1` created",
"ll LOG_LOCATION",
"ll /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220 total 852 -rw-r--r--. 1 ceph ceph 920 Jun 29 02:17 opslog.log -rw-r--r--. 1 ceph ceph 412 Jun 28 09:26 opslog.log.1.gz",
"tail -f LOG_LOCATION /opslog.log",
"tail -f /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220/opslog.log {\"bucket\":\"\",\"time\":\"2022-09-29T06:17:03.133488Z\",\"time_local\":\"2022-09- 29T06:17:03.133488+0000\",\"remote_addr\":\"10.0.211.66\",\"user\":\"test1\", \"operation\":\"list_buckets\",\"uri\":\"GET / HTTP/1.1\",\"http_status\":\"200\",\"error_code\":\"\",\"bytes_sent\":232, \"bytes_received\":0,\"object_size\":0,\"total_time\":9,\"user_agent\":\"\",\"referrer\": \"\",\"trans_id\":\"tx00000c80881a9acd2952a-006335385f-175e5-primary\", \"authentication_type\":\"Local\",\"access_key_id\":\"1234\",\"temp_url\":false} {\"bucket\":\"cn1\",\"time\":\"2022-09-29T06:17:10.521156Z\",\"time_local\":\"2022-09- 29T06:17:10.521156+0000\",\"remote_addr\":\"10.0.211.66\",\"user\":\"test1\", \"operation\":\"create_bucket\",\"uri\":\"PUT /cn1/ HTTP/1.1\",\"http_status\":\"200\",\"error_code\":\"\",\"bytes_sent\":0, \"bytes_received\":0,\"object_size\":0,\"total_time\":106,\"user_agent\":\"\", \"referrer\":\"\",\"trans_id\":\"tx0000058d60c593632c017-0063353866-175e5-primary\", \"authentication_type\":\"Local\",\"access_key_id\":\"1234\",\"temp_url\":false}"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/troubleshooting_guide/configuring-logging |
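For scripted changes, the same runtime logging adjustments can be issued from Python with the python3-rados bindings instead of the ceph CLI. This sketch is not part of the original guide: it assumes /etc/ceph/ceph.conf and an admin keyring are readable on the host, and it uses the centralized "config set" monitor command (the counterpart of ceph config set osd.0 debug_osd 0/5) rather than injectargs, so verify the exact command arguments against your release.

# Sketch: adjust a Ceph debug level from Python (assumptions noted above).
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")   # uses client.admin by default
cluster.connect()
try:
    cmd = {
        "prefix": "config set",   # mon command behind `ceph config set`
        "who": "osd.0",           # daemon whose logging is adjusted
        "name": "debug_osd",      # subsystem debug option
        "value": "0/5",           # log level 0, memory level 5
    }
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    if ret != 0:
        raise RuntimeError(f"config set failed ({ret}): {outs}")
    print("debug_osd for osd.0 set to 0/5")
finally:
    cluster.shutdown()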
Chapter 1. Overview | Chapter 1. Overview AMQ Python is a library for developing messaging applications. It enables you to write Python applications that send and receive AMQP messages. AMQ Python is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.10 Release Notes . AMQ Python is based on the Proton API from Apache Qpid . For detailed API documentation, see the AMQ Python API reference . 1.1. Key features An event-driven API that simplifies integration with existing applications SSL/TLS for secure communication Flexible SASL authentication Automatic reconnect and failover Seamless conversion between AMQP and language-native data types Access to all the features and capabilities of AMQP 1.0 Distributed tracing based on the OpenTracing standard (RHEL 7 and 8) Important Distributed tracing in AMQ Clients is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 1.2. Supported standards and protocols AMQ Python supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms supported by Cyrus SASL , including ANONYMOUS, PLAIN, SCRAM, EXTERNAL, and GSSAPI (Kerberos) Modern TCP with IPv6 1.3. Supported configurations Refer to Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal for current information regarding AMQ Python supported configurations. 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Container A top-level container of connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for sending and receiving messages. It contains senders and receivers. Sender A channel for sending messages to a target. It has a target. Receiver A channel for receiving messages from a source. It has a source. Source A named point of origin for messages. Target A named destination for messages. Message An application-specific piece of information. Delivery A message transfer. AMQ Python sends and receives messages . Messages are transferred between connected peers over senders and receivers . Senders and receivers are established over sessions . Sessions are established over connections . Connections are established between two uniquely identified containers . Though a connection can have multiple sessions, often this is not needed. The API allows you to ignore sessions unless you require them. A sending peer creates a sender to send messages. The sender has a target that identifies a queue or topic at the remote peer. A receiving peer creates a receiver to receive messages. The receiver has a source that identifies a queue or topic at the remote peer. 
The sending of a message is called a delivery . The message is the content sent, including all metadata such as headers and annotations. The delivery is the protocol exchange associated with the transfer of that content. To indicate that a delivery is complete, either the sender or the receiver settles it. When the other side learns that it has been settled, it will no longer communicate about that delivery. The receiver can also indicate whether it accepts or rejects the message. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir> | [
"cd <project-dir>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_python_client/overview |
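To tie the terms above to working code, here is a minimal sender sketch using the Proton MessagingHandler API; it is an illustration rather than an excerpt from the guide, and the broker URL amqp://localhost:5672 and the address "examples" are placeholder assumptions. The container opens a connection, the sender's target names the queue, and the connection is closed only after the peer accepts and settles the delivery.

# Sketch: send one message and wait for the delivery to be accepted.
from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import Container

class HelloSender(MessagingHandler):
    def __init__(self, url, address):
        super(HelloSender, self).__init__()
        self.url = url          # placeholder broker URL
        self.address = address  # placeholder target queue or topic
        self.sent = False

    def on_start(self, event):
        # The container establishes the connection and a sender over it.
        conn = event.container.connect(self.url)
        event.container.create_sender(conn, self.address)

    def on_sendable(self, event):
        # The peer granted credit: perform a single delivery.
        if not self.sent:
            event.sender.send(Message(body="Hello AMQ Python"))
            self.sent = True

    def on_accepted(self, event):
        # The receiver accepted and settled the delivery.
        print("delivery accepted")
        event.connection.close()

Container(HelloSender("amqp://localhost:5672", "examples")).run()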
Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization | Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization Red Hat Virtualization 4.3 How to configure a virtual machine in Red Hat Virtualization to use a dedicated GPU or vGPU. Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document describes how to use a host with a graphics processing unit (GPU) to run virtual machines in Red Hat Virtualization for graphics-intensive tasks and software that cannot run without a GPU. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/setting_up_an_nvidia_gpu_for_a_virtual_machine_in_red_hat_virtualization/index |
Chapter 7. Known issues | Chapter 7. Known issues This section documents known issues found in this release of Red Hat Ceph Storage. 7.1. The Cephadm utility Adding or expanding iSCSI gateways in gwcli across the iSCSI daemons works as expected Previously, iSCSI daemons were not reconfigured automatically when a trusted IP list was updated in the specification file, so adding or expanding iSCSI gateways in gwcli would fail because iscsi-gateway.cfg did not match across the iSCSI daemons. With this fix, you can expand the gateways and add new gateways to the existing ones with the gwcli command. ( BZ#2099470 ) ceph orch ps does not display a version for monitoring stack daemons In cephadm , version grabbing fails for monitoring stack daemons, such as node-exporter , prometheus , and alertmanager , because the version grabbing code is currently incompatible with the downstream monitoring stack containers. As a workaround, if you need to find the version, note that the daemons' container names include the version. ( BZ#2125382 ) | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.3_release_notes/known-issues
Chapter 3. Migrating Data Grid configuration | Chapter 3. Migrating Data Grid configuration Find changes to Data Grid configuration that affect migration to Data Grid 8. 3.1. Data Grid cache configuration Data Grid 8 provides empty cache containers by default. When you start Data Grid, it instantiates a cache manager so you can create caches at runtime. However, in comparison with versions, there is no "default" cache out of the box. In Data Grid 8, caches that you create through the CacheContainerAdmin API are permanent to ensure that they survive cluster restarts. Permanent caches .administration() .withFlags(AdminFlag.PERMANENT) 1 .getOrCreateCache("myPermanentCache", "org.infinispan.DIST_SYNC"); 1 AdminFlag.PERMANENT is enabled by default to ensure that caches survive restarts. You do not need to set this flag when you create caches. However, you must separately add persistent storage to Data Grid for data to survive restarts, for example: ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSingleFileStore() .location("/tmp/myDataStore") .maxEntries(5000); Volatile caches .administration() .withFlags(AdminFlag.VOLATILE) 1 .getOrCreateCache("myTemporaryCache", "org.infinispan.DIST_SYNC"); 2 1 Sets the VOLATILE flag so caches are lost when Data Grid restarts. 2 Returns a cache named "myTemporaryCache" or creates one using the DIST_SYNC template. Data Grid 8 provides cache templates for server installations that you can use to create caches with recommended settings. You can get a list of available cache templates as follows: Use Tab auto-completion with the CLI: Use the REST API: 3.1.1. Cache encoding When you create remote caches you should configure the MediaType for keys and values. Configuring the MediaType guarantees the storage format for your data. To encode caches, you specify the MediaType in your configuration. Unless you have others requirements, you should use ProtoStream, which stores your data in a language-neutral, backwards compatible format. <encoding media-type="application/x-protostream"/> Distributed cache configuration with encoding <infinispan> <cache-container> <distributed-cache name="myCache" mode="SYNC"> <encoding media-type="application/x-protostream"/> ... </distributed-cache> </cache-container> </infinispan> If you do not encode remote caches, Data Grid Server logs the following message: In a future version, cache encoding will be required for operations where data conversion takes place; for example, cache indexing and searching the data container, remote task execution, reading and writing data in different formats from the Hot Rod and REST endpoints, as well as using remote filters, converters, and listeners. 3.1.2. Cache health status Data Grid 7.x includes a Health Check API that returns health status of the cluster as well as caches within it. Data Grid 8 also provides a Health API. For embedded and server installations, you can access the Health API via JMX with the following MBean: Data Grid Server also exposes the Health API through the REST endpoint and the Data Grid Console. Table 3.1. Health Status 7.x 8.x Description HEALTHY HEALTHY Indicates a cache is operating as expected. Rebalancing HEALTHY_REBALANCING Indicates a cache is in the rebalancing state but otherwise operating as expected. Unhealthy DEGRADED Indicates a cache is not operating as expected and possibly requires troubleshooting. N/A FAILED Added in 8.2 to indicate that a cache could not start with the supplied configuration. 
Additional resources Configuring Data Grid Caches 3.1.3. Changes to the Data Grid 8.1 configuration schema This topic lists changes to the Data Grid configuration schema between 8.0 and 8.1. New and modified elements and attributes stack adds support for inline JGroups stack definitions. stack.combine and stack.position attributes let you override and modify JGroups stack definitions. metrics lets you configure how Data Grid exports metrics that are compatible with the Eclipse MicroProfile Metrics API. context-initializer lets you specify a SerializationContextInitializer implementation that initializes a Protostream-based marshaller for user types. key-transformers lets you register transformers that convert custom keys to String for indexing with Lucene. statistics now defaults to "false". Deprecated elements and attributes The following elements and attributes are now deprecated: address-count attribute for the off-heap element. protocol attribute for the transaction element. duplicate-domains attribute for the jmx element. advanced-externalizer custom-interceptors state-transfer-executor transaction-protocol Removed elements and attributes The following elements and attributes were deprecated in a release and are now removed: deadlock-detection-spin compatibility write-skew versioning data-container eviction eviction-thread-policy 3.1.4. Changes to the Data Grid 8.2 configuration schema This topic lists changes to the Data Grid configuration schema between 8.1 and 8.2. Modified elements and attributes white-list changes to allow-list role is now a sub-element of roles for defined user roles and permissions for security authorization. context-initializer is updated for automatic SerializationContextInitializer registration. If your configuration does not contain context-initializer elements then the java.util.ServiceLoader mechanism automatically discovers all SerializationContextInitializer implementations on the classpath and loads them. Default value of the minOccurs attribute changes from 0 to 1 for the indexed-entity element. New elements and attributes property attribute added to the transport element that lets you pass name/value transport properties. cache-size and cache-timeout attributes added to the security element to configure the size and timeout for the Access Control List (ACL) cache. index-reader , index-writer , and index-merge child elements added to the indexing element. storage attribute added to the indexing element that specifies index storage options. path attribute added to the indexing element that specifies a directory when using file system storage for the index. bias-acquisition attribute added to the scattered-cache element that controls when nodes can acquire a bias on an entry. bias-lifespan attribute added to the scattered-cache element that specifies, in milliseconds, how long nodes can keep an acquired bias. merge-policy attribute added to the backups element that specifies an algorithm for resolving conflicts with cross-site replication. mode attribute added to the state-transfer child element for the backup . The mode attribute configures whether cross-site replication state transfer happens manually or automatically. INSERT_ABOVE , INSERT_BEFORE , and INSERT_BELOW attributes added to the stack.combine attribute for extending JGroups stacks with inheritance. Deprecated elements and attributes No elements or attributes are deprecated in Data Grid 8.2. Removed elements and attributes No elements or attributes are removed in Data Grid 8.2. 3.1.5. 
Changes to the Data Grid 8.3 configuration schema This topic lists changes to the Data Grid configuration schema between 8.2 and 8.3. Schema changes urn:infinispan:config:store:soft-index namespace is no longer available. Modified elements and attributes file-store element in the urn:infinispan:config namespace defaults to using soft-index file cache stores. single-file-store element is included in the urn:infinispan:config namespace but is now deprecated. New elements and attributes index and data elements are now available to configure how Data Grid stores indexes and data for file-based cache stores with the file-store element. open-files-limit and compaction-threshold attributes for the file-store element. cluster attribute added to the remote-sites and remote-site elements that lets you define global cluster names for cross-site communication. Note Global cluster names that you specify with the cluster attribute must be the same at all sites. accurate-size attribute added to the metrics element to enable calculations of the data set with the currentNumberOfEntries statistic. Important As of Data Grid 8.3 the currentNumberOfEntries statistic returns a value of -1 by default because it is an expensive operation to perform. touch attribute added to the expiration element that controls how timestamps get updated for entries in clustered caches with maximum idle expiration. The default value is SYNC and the attribute applies only to caches that use synchronous replication. Timestamps are updated asynchronously for caches that use asynchronous replication. lifespan attribute added to the strong-counter for attaching expiration values, in milliseconds. The default value is -1 which means strong consistent counters never expire. Note The lifespan attribute for strong counters is currently available as a Technology Preview. Deprecated elements and attributes The following elements and attributes are now deprecated: single-file-store element. max-entries and path attributes for the file-store element. Removed elements and attributes The following elements and attributes are no longer available in the Data Grid schema: remote-command-executor attribute for the transport element. capacity attribute for the distributed-cache element. 3.1.6. Changes to the Data Grid 8.4 configuration schema This topic lists changes to the Data Grid configuration schema between 8.3 and 8.4. Schema changes New elements and attributes default-max-results attribute added to the query element that lets you limits the number of results returned by a query. Applies to indexed, non-indexed, and hybrid queries. startup-mode attribute that lets you define which operation should Data Grid perform when the cache starts. The options are purge , reindex , auto or none . The default value is none . raft-members attribute that lets you define a list of raft members separated by space. Deprecated elements and attributes The following elements and attributes are now deprecated: scattered-cache element is now deprecated Removed elements and attributes The following elements and attributes are no longer available in the Data Grid schema: fetch-state store property is no longer available. You can remove the attribute from your xml configuration. 3.1.7. Changes to the Data Grid 8.5 configuration schema This topic lists changes to the Data Grid configuration schema between 8.4 and 8.5. 
New elements and attributes tracing element added to cache-container that lets you configure tracing so that traces can be collected by an OpenTelemetry collector. group-only-mapping attribute added to authorization . Use this attribute to specify whether principal-to-role mapping applies only to group principals or also to user principals. The default value true applies principal-to-role mapping only to group principals. Set the value to false to apply the mapping to both principal types. description attribute added to roles that lets you define the description of the role. schema-compatibility attribute added to serialization that lets you specify the compatibility validation that is performed when updating schemas. unclean-shutdown-action attribute added to global-state that lets you define the action taken when a dangling lock file is found in the persistent global state, signifying an unclean shutdown of the node. The default value is FAIL . index-sharding element added to indexing . Sharding is the process of splitting index data into multiple smaller indexes called shards. Sharding improves performance when dealing with large amounts of data. Sharding is disabled by default. indexing-mode element added to indexing that lets you define how cache operations are propagated to the indexes. By default, all the changes to the cache are immediately applied to the indexes. tracing element added to indexing that lets you configure tracing so that traces can be collected by an OpenTelemetry collector. aliases attribute added to cache that lets you define zero or more alias names for a cache. statistics attribute added to cache that defines whether the cache should collect statistics. Keep statistics collection disabled for optimal performance. Deprecated elements and attributes There are no deprecations in this release. Removed elements and attributes The following elements and attributes are no longer available in the Data Grid schema: scattered-cache element has been removed. property element has been removed from cache . auto-config element has been removed from indexing . statistics-available attribute has been removed from indexing . connection-interval attribute has been removed from persistence . 3.2. Eviction configuration Data Grid 8 simplifies eviction configuration in comparison with previous versions. However, eviction configuration has undergone numerous changes across different Data Grid versions, which means migration might not be straightforward. Note As of Data Grid 7.2, the memory element replaces the eviction element in the configuration. This section refers to eviction configuration with the memory element only. For information on migrating configuration that uses the eviction element, refer to the Data Grid 7.2 documentation. 3.2.1. Storage types Data Grid lets you control how to store entries in memory, with the following options: Store objects in JVM heap memory. Store bytes in native memory (off-heap). Store bytes in JVM heap memory. Changes in Data Grid 8 In 7.x versions and 8.0, you use object , binary , and off-heap elements to configure the storage type. Starting with Data Grid 8.1, you use a storage attribute to store objects in JVM heap memory or as bytes in off-heap memory. To store bytes in JVM heap memory, you use the encoding element to specify a binary storage format for your data. Data Grid 7.x Data Grid 8 <memory><object /></memory> <memory /> <memory><off-heap /></memory> <memory storage="OFF_HEAP" /> <memory><binary /></memory> <encoding media-type="...
" /> Object storage in Data Grid 8 By default, Data Grid 8.1 uses object storage (JVM heap): <distributed-cache> <memory /> </distributed-cache> You can also configure storage="HEAP" explicitly to store data as objects in JVM heap memory: <distributed-cache> <memory storage="HEAP" /> </distributed-cache> Off-heap storage in Data Grid 8 Set "OFF_HEAP" as the value of the storage attribute to store data as bytes in native memory: <distributed-cache> <memory storage="OFF_HEAP" /> </distributed-cache> Off-heap address count In versions, the address-count attribute for offheap lets you specify the number of pointers that are available in the hash map to avoid collisions. With Data Grid 8.1, address-count is no longer used and off-heap memory is dynamically re-sized to avoid collisions. Binary storage in Data Grid 8 Specify a binary storage format for cache entries with the encoding element: <distributed-cache> <!--Configure MediaType for entries with binary formats.--> <encoding media-type="application/x-protostream"/> <memory ... /> </distributed-cache> Note As a result of this change, Data Grid no longer stores primitives and String mixed with byte[] , but stores only byte[] . 3.2.2. Eviction threshold Eviction lets Data Grid control the size of the data container by removing entries when the container becomes larger than a configured threshold. In Data Grid 7.x and 8.0, you specify two eviction types that define the maximum limit for entries in the cache: COUNT measures the number of entries in the cache. MEMORY measures the amount of memory that all entries in the cache take up. Depending on the configuration you set, when either the count or the total amount of memory exceeds the maximum, Data Grid removes unused entries. Data Grid 7.x and 8.0 also use the size attribute that defines the size of the data container as a long. Depending on the storage type you configure, eviction occurs either when the number of entries or amount of memory exceeds the value of the size attribute. With Data Grid 8.1, the size attribute is deprecated along with COUNT and MEMORY . Instead, you configure the maximum size of the data container in one of two ways: Total number of entries with the max-count attribute. Maximum amount of memory, in bytes, with the max-size attribute. Eviction based on total number of entries <distributed-cache> <memory max-count="..." /> </distributed-cache> Eviction based on maximum amount of memory <distributed-cache> <memory max-size="..." /> </distributed-cache> 3.2.3. Eviction strategies Eviction strategies control how Data Grid performs eviction. Data Grid 7.x and 8.0 let you set one of the following eviction strategies with the strategy attribute: Strategy Description NONE Data Grid does not evict entries. This is the default setting unless you configure eviction. REMOVE Data Grid removes entries from memory so that the cache does not exceed the configured size. This is the default setting when you configure eviction. MANUAL Data Grid does not perform eviction. Eviction takes place manually by invoking the evict() method from the Cache API. EXCEPTION Data Grid does not write new entries to the cache if doing so would exceed the configured size. Instead of writing new entries to the cache, Data Grid throws a ContainerFullException . With Data Grid 8.1, you can use the same strategies as in versions. However, the strategy attribute is replaced with the when-full attribute. 
<distributed-cache> <memory when-full="<eviction_strategy>" /> </distributed-cache> Eviction algorithms With Data Grid 7.2, the ability to configure eviction algorithms was deprecated along with the Low Inter-Reference Recency Set (LIRS). From version 7.2 onwards, Data Grid includes the Caffeine caching library that implements a variation of the Least Frequently Used (LFU) cache replacement algorithm known as TinyLFU. For off-heap storage, Data Grid uses a custom implementation of the Least Recently Used (LRU) algorithm. 3.2.4. Eviction configuration comparison Compare eviction configuration between different Data Grid versions. Object storage and evict on number of entries 7.2 to 8.0 <memory> <object size="1000000" eviction="COUNT" strategy="REMOVE"/> </memory> 8.1 <memory max-count="1MB" when-full="REMOVE"/> Object storage and evict on amount of memory 7.2 to 8.0 <memory> <object size="1000000" eviction="MEMORY" strategy="MANUAL"/> </memory> 8.1 <memory max-size="1MB" when-full="MANUAL"/> Binary storage and evict on number of entries 7.2 to 8.0 <memory> <binary size="500000000" eviction="MEMORY" strategy="EXCEPTION"/> </memory> 8.1 <cache> <encoding media-type="application/x-protostream"/> <memory max-size="500 MB" when-full="EXCEPTION"/> </cache> Binary storage and evict on amount of memory 7.2 to 8.0 <memory> <binary size="500000000" eviction="COUNT" strategy="MANUAL"/> </memory> 8.1 <memory max-count="500 MB" when-full="MANUAL"/> Off-heap storage and evict on number of entries 7.2 to 8.0 <memory> <off-heap size="10000000" eviction="COUNT"/> </memory> 8.1 <memory storage="OFF_HEAP" max-count="10MB"/> Off-heap storage and evict on amount of memory 7.2 to 8.0 <memory> <off-heap size="1000000000" eviction="MEMORY"/> </memory> 8.1 <memory storage="OFF_HEAP" max-size="1GB"/> Additional resources Configuring Data Grid caches New eviction policy TinyLFU since RHDG 7.3 (Red Hat Knowledgebase) Product Documentation for Data Grid 7.2 3.3. Expiration configuration Expiration removes entries from caches based on their lifespan or maximum idle time. When migrating your configuration from Data Grid 7.x to 8, there are no changes that you need to make for expiration. The configuration remains the same: Lifespan expiration <expiration lifespan="1000" /> Max-idle expiration <expiration max-idle="1000" interval="120000" /> For Data Grid 7.2 and earlier, using max-idle with clustered caches had technical limitations that resulted in performance degradation. As of Data Grid 7.3, Data Grid sends touch commands to all owners in clustered caches when client read entries that have max-idle expiration values. This ensures that the entries have the same relative access time across the cluster. Data Grid 8 sends the same touch commands for max-idle expiration across clusters. However there are some technical considerations you should take into account before you start using max-idle . Refer to Configuring Data Grid caches to read more about how expiration works and to review how the touch commands affect performance with clustered caches. Additional resources Configuring Data Grid caches 3.4. Persistent cache stores In comparison with Data Grid 7.x, there are some changes to cache store configuration in Data Grid 8. Persistence SPI Data Grid 8.1 introduces the NonBlockingStore interface for cache stores. The NonBlockingStore SPI exposes methods that must never block the invoking thread. Cache stores that connect Data Grid to persistent data sources implement the NonBlockingStore interface. 
For custom cache store implementations that use blocking operations, Data Grid provides a BlockingManager utility class to handle those operations. The introduction of the NonBlockingStore interface deprecates the following interfaces: CacheLoader CacheWriter AdvancedCacheLoader AdvancedCacheWriter Custom cache stores Data Grid 8 lets you configure custom cache stores with the store element as in previous versions. The following changes apply: The singleton attribute is removed. Use shared=true instead. The segmented attribute is added and defaults to true . Segmented cache stores As of Data Grid 8, cache store configuration defaults to segmented="true" and applies to the following cache store elements: store file-store string-keyed-jdbc-store jpa-store remote-store rocksdb-store soft-index-file-store Note As of Data Grid 8.3, the file-store element in cache configuration creates a soft index file-based store. For more information see File-based cache stores default to soft index . Single file cache stores The relative-to attribute for Single File cache stores is removed in Data Grid 8. If your cache store configuration includes this attribute, Data Grid ignores it and uses only the path attribute to configure store location. JDBC cache stores JDBC cache stores must include an xmlns namespace declaration, which was not required in some Data Grid 7.x versions. <persistence> <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:15.0" shared="true"> ... </persistence> JDBC connection factories Data Grid 7.x JDBC cache stores can use the following ConnectionFactory implementations to obtain a database connection: ManagedConnectionFactory SimpleConnectionFactory PooledConnectionFactory Data Grid 8 now uses connection factories based on Agroal, the same library that Red Hat JBoss EAP uses, to connect to databases. It is no longer possible to use c3p0.properties and hikari.properties files. Note As of Data Grid 8.3, JDBC connection factories are part of the org.infinispan.persistence.jdbc.common.configuration package. Segmentation JDBC String-Based cache store configuration that enables segmentation, which is now the default, must include the segmentColumnName and segmentColumnType parameters, as in the following programmatic examples: MySQL Example builder.table() .tableNamePrefix("ISPN") .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)") .dataColumnName("DATA_COLUMN").dataColumnType("VARBINARY(1000)") .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT") .segmentColumnName("SEGMENT_COLUMN").segmentColumnType("INTEGER") PostgreSQL Example builder.table() .tableNamePrefix("ISPN") .idColumnName("ID_COLUMN").idColumnType("VARCHAR(255)") .dataColumnName("DATA_COLUMN").dataColumnType("BYTEA") .timestampColumnName("TIMESTAMP_COLUMN").timestampColumnType("BIGINT") .segmentColumnName("SEGMENT_COLUMN").segmentColumnType("INTEGER"); Write-behind The thread-pool-size attribute for Write-Behind mode is removed in Data Grid 8. Removed cache stores and loaders Data Grid 7.3 deprecates the following cache stores and loaders that are no longer available in Data Grid 8: Cassandra Cache Store REST Cache Store LevelDB Cache Store CLI Cache Loader Cache store migrator Cache stores in previous versions of Data Grid store data in a binary format that is not compatible with Data Grid 8. Use the StoreMigrator utility to migrate data in persistent cache stores to Data Grid 8. 3.4.1.
File-based cache stores default to soft index Including file-store persistence in cache configuration now creates a soft index file-based cache store, SoftIndexFileStore , instead of a single-file cache store, SingleFileStore . In Data Grid 8.2 and earlier, SingleFileStore was the default for file-based cache stores. If you are migrating or upgrading to Data Grid 8.3, any file-store configuration is automatically converted to a SoftIndexFileStore at server startup. When your configuration is converted to SoftIndexFileStore , it is not possible to revert back to SingleFileStore without modifying the configuration to ensure compatibility with the new store. 3.4.1.1. Declarative configuration Data Grid 8.2 and earlier <persistence> <soft-index-file-store xmlns="urn:infinispan:config:soft-index:12.1"> <index path="testCache/index" /> <data path="testCache/data" /> </soft-index-file-store> </persistence> Data Grid 8.3 and later <persistence> <file-store> <index path="testCache/index" /> <data path="testCache/data" /> </file-store> </persistence> 3.4.1.2. Programmatic configuration Data Grid 8.2 and earlier ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addStore(SoftIndexFileStoreConfigurationBuilder.class) .indexLocation("testCache/index"); .dataLocation("testCache/data") Data Grid 8.3 and later ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSoftIndexFileStore() .indexLocation("testCache/index") .dataLocation("testCache/data"); 3.4.1.3. Using single file cache stores with Data Grid 8.3 You can configure SingleFileStore cache stores with Data Grid 8.3 or later but Red Hat does not recommend doing so. You should use SoftIndexFileStore cache stores because they offer better scalability. Declarative <persistence passivation="false"> <single-file-store shared="false" preload="true" fetch-state="true" read-only="false"/> </persistence> Programmatic ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSingleFileStore(); 3.5. Data Grid cluster transport Data Grid uses JGroups technology to handle communication between clustered nodes. JGroups stack configuration elements and attributes have not significantly changed from Data Grid versions. As in versions, Data Grid provides preconfigured JGroups stacks that you can use as a starting point for building custom cluster transport configuration optimized for your network requirements. Likewise, Data Grid provides the ability to add JGroups stacks defined in external XML files to your infinispan.xml . Data Grid 8 has brought usability improvements to make cluster transport configuration easier: Inline stacks let you configure JGroups stacks directly within infinispan.xml using the jgroups element. Declare JGroups schemas within infinispan.xml . Preconfigured JGroups stacks for UDP and TCP protocols. Inheritance attributes that let you extend JGroups stacks to adjust specific protocols and properties. 
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:15.0 https://infinispan.org/schemas/infinispan-config-15.0.xsd urn:infinispan:server:15.0 https://infinispan.org/schemas/infinispan-server-15.0.xsd urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.2.xsd" 1 xmlns="urn:infinispan:config:15.0" xmlns:server="urn:infinispan:server:15.0"> <jgroups> 2 <stack name="xsite" extends="udp"> 3 <relay.RELAY2 site="LON" xmlns="urn:org:jgroups"/> <remote-sites default-stack="tcp"> <remote-site name="LON"/> <remote-site name="NYC"/> </remote-sites> </stack> </jgroups> <cache-container ...> ... </infinispan> 1 Declares the JGroups 4.2 schema within infinispan.xml . 2 Adds a JGroups element to contain custom stack definitions. 3 Defines a JGroups protocol stack for cross-site replication. 3.5.1. Transport security As in previous versions, Data Grid 8 uses the JGroups SYM_ENCRYPT and ASYM_ENCRYPT protocols to encrypt cluster communication. Data Grid 8 also lets you use a security realm that includes a keystore and trust store as a TLS server identity to secure cluster transport, for example: <cache-container> <transport server:security-realm="tls-transport"/> </cache-container> Node authentication In Data Grid 7.x, the JGroups SASL protocol enables nodes to authenticate against security realms in both embedded and remote server installations. As of Data Grid 8, it is not possible to configure node authentication against security realms. Likewise, Data Grid 8 does not recommend using the JGroups AUTH protocol for authenticating clustered nodes. However, with embedded Data Grid installations, JGroups cluster transport includes a SASL configuration as part of the jgroups element. As in previous versions, the SASL configuration relies on JAAS notions, such as CallbackHandlers , to obtain certain information necessary for node authentication. 3.5.2. Retransmission requests Data Grid 8.2 changes the configuration for retransmission requests for the UNICAST3 and NAKACK2 protocols in the default JGroups stacks, as follows: The value of the xmit_interval property is increased from 100 milliseconds to 200 milliseconds. The max_xmit_req_size property now sets a maximum of 500 messages per re-transmission request, instead of a maximum of 8500 with UDP or 64000 with TCP. As part of your migration to Data Grid 8, you should adapt any custom JGroups stack configuration to use these recommended settings. Additional resources Data Grid Server Guide Using Embedded Data Grid Caches Data Grid Security Guide 3.6. Data Grid authorization Data Grid uses role-based access control (RBAC) to restrict access to data, and cluster encryption to secure communication between nodes. Roles and Permissions Data Grid 8.2 provides a set of default users and permissions that you can use for RBAC, with the following changes: ClusterRoleMapper is the default mechanism that Data Grid uses to associate security principals to authorization roles. A new MONITOR permission allows user access to Data Grid statistics. A new CREATE permission that users need in order to create and delete resources such as caches and counters. Note CREATE replaces the ___schema_manager and ___script_manager roles that users required to create and remove Protobuf schema and server scripts in Data Grid 8.1 and earlier. When migrating to Data Grid 8.2, you should assign the deployer role to users who had the ___schema_manager and ___script_manager roles in Data Grid 8.1 or earlier.
Use the command line interface (CLI) as follows: [//containers/default]> user roles grant --roles=deployer <user> cache manager permissions Table 3.2. Data Grid 8.1 Permission Function Description CONFIGURATION defineConfiguration Defines new cache configurations. LISTEN addListener Registers listeners against a cache manager. LIFECYCLE stop Stops the cache manager. ALL - Includes all cache manager permissions. Table 3.3. Data Grid 8.2 Permission Function Description CONFIGURATION defineConfiguration Defines new cache configurations. LISTEN addListener Registers listeners against a cache manager. LIFECYCLE stop Stops the cache manager. CREATE createCache , removeCache Create and remove container resources such as caches, counters, schemas, and scripts. MONITOR getStats Allows access to JMX statistics and the metrics endpoint. ALL - Includes all cache manager permissions. Cache permissions Table 3.4. Data Grid 8.1 Permission Function Description READ get , contains Retrieves entries from a cache. WRITE put , putIfAbsent , replace , remove , evict Writes, replaces, removes, evicts data in a cache. EXEC distexec , streams Allows code execution against a cache. LISTEN addListener Registers listeners against a cache. BULK_READ keySet , values , entrySet , query Executes bulk retrieve operations. BULK_WRITE clear , putAll Executes bulk write operations. LIFECYCLE start , stop Starts and stops a cache. ADMIN getVersion , addInterceptor* , removeInterceptor , getInterceptorChain , getEvictionManager , getComponentRegistry , getDistributionManager , getAuthorizationManager , evict , getRpcManager , getCacheConfiguration , getCacheManager , getInvocationContextContainer , setAvailability , getDataContainer , getStats , getXAResource Allows access to underlying components and internal structures. ALL - Includes all cache permissions. ALL_READ - Combines the READ and BULK_READ permissions. ALL_WRITE - Combines the WRITE and BULK_WRITE permissions. Table 3.5. Data Grid 8.2 Permission Function Description READ get , contains Retrieves entries from a cache. WRITE put , putIfAbsent , replace , remove , evict Writes, replaces, removes, evicts data in a cache. EXEC distexec , streams Allows code execution against a cache. LISTEN addListener Registers listeners against a cache. BULK_READ keySet , values , entrySet , query Executes bulk retrieve operations. BULK_WRITE clear , putAll Executes bulk write operations. LIFECYCLE start , stop Starts and stops a cache. ADMIN getVersion , addInterceptor* , removeInterceptor , getInterceptorChain , getEvictionManager , getComponentRegistry , getDistributionManager , getAuthorizationManager , evict , getRpcManager , getCacheConfiguration , getCacheManager , getInvocationContextContainer , setAvailability , getDataContainer , getStats , getXAResource Allows access to underlying components and internal structures. MONITOR getStats Allows access to JMX statistics and the metrics endpoint. ALL - Includes all cache permissions. ALL_READ - Combines the READ and BULK_READ permissions. ALL_WRITE - Combines the WRITE and BULK_WRITE permissions. Cache manager authorization As of Data Grid 8.2, you can include the authorization element in the cache-container security configuration as follows: <infinispan> <cache-container name="secured"> <security> <authorization/> 1 </security> </cache-container> </infinispan> 1 Enables security authorization for the cache manager with default roles and permissions. 
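For embedded Data Grid, the same cache manager authorization can also be enabled programmatically. The following is a minimal sketch, not taken from this guide, that assumes the GlobalConfigurationBuilder API and relies on the default role mapper, roles, and permissions described above:

import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

// Enable authorization on the cache manager with the default roles and permissions,
// mirroring the declarative <authorization/> element in the cache-container configuration.
GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
global.security().authorization().enable();
DefaultCacheManager cacheManager = new DefaultCacheManager(global.build());

Caches created by this cache manager can then inherit the authorization configuration, as described under implicit cache authorization later in this section.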
You can also define global authorization configuration as follows: <infinispan> <cache-container default-cache="secured" name="secured"> <security> <authorization> 1 <identity-role-mapper /> 2 <role name="admin" permissions="ALL" /> 3 <role name="reader" permissions="READ" /> <role name="writer" permissions="WRITE" /> <role name="supervisor" permissions="READ WRITE EXEC"/> </authorization> </security> </cache-container> </infinispan> 1 Requires user permission to control the cache manager lifecycle. 2 Specifies an implementation of PrincipalRoleMapper that maps Principals to roles. 3 Defines a set of roles and associated permissions. Implicit cache authorization Data Grid 8 improves usability by allowing caches to inherit authorization configuration from the cache-container so you do not need to explicitly configure roles and permissions for each cache. <local-cache name="secured"> <security> <authorization/> 1 </security> </local-cache> 1 Uses roles and permissions defined in the cache container. As of Data Grid 8.2, including the authorization element in the configuration uses the default roles and permissions to restrict access to that cache unless you define a set of custom global permissions. Additional resources Data Grid Security Guide | [
".administration() .withFlags(AdminFlag.PERMANENT) 1 .getOrCreateCache(\"myPermanentCache\", \"org.infinispan.DIST_SYNC\");",
"ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSingleFileStore() .location(\"/tmp/myDataStore\") .maxEntries(5000);",
".administration() .withFlags(AdminFlag.VOLATILE) 1 .getOrCreateCache(\"myTemporaryCache\", \"org.infinispan.DIST_SYNC\"); 2",
"[//containers/default]> create cache --template=",
"GET 127.0.0.1:11222/rest/v2/cache-managers/default/cache-configs/templates",
"<infinispan> <cache-container> <distributed-cache name=\"myCache\" mode=\"SYNC\"> <encoding media-type=\"application/x-protostream\"/> </distributed-cache> </cache-container> </infinispan>",
"WARN (main) [org.infinispan.encoding.impl.StorageConfigurationManager] ISPN000599: Configuration for cache 'mycache' does not define the encoding for keys or values. If you use operations that require data conversion or queries, you should configure the cache with a specific MediaType for keys or values.",
"org.infinispan:type=CacheManager,name=\"default\",component=CacheContainerHealth",
"<distributed-cache> <memory /> </distributed-cache>",
"<distributed-cache> <memory storage=\"HEAP\" /> </distributed-cache>",
"<distributed-cache> <memory storage=\"OFF_HEAP\" /> </distributed-cache>",
"<distributed-cache> <!--Configure MediaType for entries with binary formats.--> <encoding media-type=\"application/x-protostream\"/> <memory ... /> </distributed-cache>",
"<distributed-cache> <memory max-count=\"...\" /> </distributed-cache>",
"<distributed-cache> <memory max-size=\"...\" /> </distributed-cache>",
"<distributed-cache> <memory when-full=\"<eviction_strategy>\" /> </distributed-cache>",
"<memory> <object size=\"1000000\" eviction=\"COUNT\" strategy=\"REMOVE\"/> </memory>",
"<memory max-count=\"1MB\" when-full=\"REMOVE\"/>",
"<memory> <object size=\"1000000\" eviction=\"MEMORY\" strategy=\"MANUAL\"/> </memory>",
"<memory max-size=\"1MB\" when-full=\"MANUAL\"/>",
"<memory> <binary size=\"500000000\" eviction=\"MEMORY\" strategy=\"EXCEPTION\"/> </memory>",
"<cache> <encoding media-type=\"application/x-protostream\"/> <memory max-size=\"500 MB\" when-full=\"EXCEPTION\"/> </cache>",
"<memory> <binary size=\"500000000\" eviction=\"COUNT\" strategy=\"MANUAL\"/> </memory>",
"<memory max-count=\"500 MB\" when-full=\"MANUAL\"/>",
"<memory> <off-heap size=\"10000000\" eviction=\"COUNT\"/> </memory>",
"<memory storage=\"OFF_HEAP\" max-count=\"10MB\"/>",
"<memory> <off-heap size=\"1000000000\" eviction=\"MEMORY\"/> </memory>",
"<memory storage=\"OFF_HEAP\" max-size=\"1GB\"/>",
"<expiration lifespan=\"1000\" />",
"<expiration max-idle=\"1000\" interval=\"120000\" />",
"<persistence> <string-keyed-jdbc-store xmlns=\"urn:infinispan:config:store:jdbc:15.0\" shared=\"true\"> </persistence>",
"builder.table() .tableNamePrefix(\"ISPN\") .idColumnName(\"ID_COLUMN\").idColumnType(\"VARCHAR(255)\") .dataColumnName(\"DATA_COLUMN\").dataColumnType(\"VARBINARY(1000)\") .timestampColumnName(\"TIMESTAMP_COLUMN\").timestampColumnType(\"BIGINT\") .segmentColumnName(\"SEGMENT_COLUMN\").segmentColumnType(\"INTEGER\")",
"builder.table() .tableNamePrefix(\"ISPN\") .idColumnName(\"ID_COLUMN\").idColumnType(\"VARCHAR(255)\") .dataColumnName(\"DATA_COLUMN\").dataColumnType(\"BYTEA\") .timestampColumnName(\"TIMESTAMP_COLUMN\").timestampColumnType(\"BIGINT\") .segmentColumnName(\"SEGMENT_COLUMN\").segmentColumnType(\"INTEGER\");",
"<persistence> <soft-index-file-store xmlns=\"urn:infinispan:config:soft-index:12.1\"> <index path=\"testCache/index\" /> <data path=\"testCache/data\" /> </soft-index-file-store> </persistence>",
"<persistence> <file-store> <index path=\"testCache/index\" /> <data path=\"testCache/data\" /> </file-store> </persistence>",
"ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addStore(SoftIndexFileStoreConfigurationBuilder.class) .indexLocation(\"testCache/index\"); .dataLocation(\"testCache/data\")",
"ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSoftIndexFileStore() .indexLocation(\"testCache/index\") .dataLocation(\"testCache/data\");",
"<persistence passivation=\"false\"> <single-file-store shared=\"false\" preload=\"true\" fetch-state=\"true\" read-only=\"false\"/> </persistence>",
"ConfigurationBuilder b = new ConfigurationBuilder(); b.persistence() .addSingleFileStore();",
"<infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:15.0 https://infinispan.org/schemas/infinispan-config-15.0.xsd urn:infinispan:server:15.0 https://infinispan.org/schemas/infinispan-server-15.0.xsd urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.2.xsd\" 1 xmlns=\"urn:infinispan:config:15.0\" xmlns:server=\"urn:infinispan:server:15.0\"> <jgroups> 2 <stack name=\"xsite\" extends=\"udp\"> 3 <relay.RELAY2 site=\"LON\" xmlns=\"urn:org:jgroups\"/> <remote-sites default-stack=\"tcp\"> <remote-site name=\"LON\"/> <remote-site name=\"NYC\"/> </remote-sites> </stack> </jgroups> <cache-container ...> </infinispan>",
"<cache-container> <transport server:security-realm=\"tls-transport\"/> </cache-container>",
"[//containers/default]> user roles grant --roles=deployer <user>",
"<infinispan> <cache-container name=\"secured\"> <security> <authorization/> 1 </security> </cache-container> </infinispan>",
"<infinispan> <cache-container default-cache=\"secured\" name=\"secured\"> <security> <authorization> 1 <identity-role-mapper /> 2 <role name=\"admin\" permissions=\"ALL\" /> 3 <role name=\"reader\" permissions=\"READ\" /> <role name=\"writer\" permissions=\"WRITE\" /> <role name=\"supervisor\" permissions=\"READ WRITE EXEC\"/> </authorization> </security> </cache-container> </infinispan>",
"<local-cache name=\"secured\"> <security> <authorization/> 1 </security> </local-cache>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/migrating_to_data_grid_8/cache-migration |
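As a companion to the declarative eviction examples in section 3.2 above, eviction for embedded caches can also be configured programmatically. The following is an illustrative sketch only, assuming the ConfigurationBuilder API in Data Grid 8.1 or later; the threshold value is an arbitrary placeholder, not a recommendation from this guide:

import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;

// Programmatic counterpart of <memory max-count="..." when-full="REMOVE"/>.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.memory()
       .maxCount(1_000_000)                // maximum number of entries, like the max-count attribute
       .whenFull(EvictionStrategy.REMOVE); // eviction strategy, like the when-full attribute

To bound the data container by memory instead of by number of entries, use maxSize("...") in place of maxCount(...), which corresponds to the max-size attribute.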
Installing Red Hat Developer Hub on Google Kubernetes Engine | Installing Red Hat Developer Hub on Google Kubernetes Engine Red Hat Developer Hub 1.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_google_kubernetes_engine/index |
19.4. volume_key References | 19.4. volume_key References More information on volume_key can be found: in the readme file located at /usr/share/doc/volume_key-*/README on volume_key 's manpage using man volume_key online at http://fedoraproject.org/wiki/Disk_encryption_key_escrow_use_cases | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/volume_key-documentation |
Chapter 78. KafkaClientAuthenticationOAuth schema reference | Chapter 78. KafkaClientAuthenticationOAuth schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationOAuth schema properties To configure OAuth client authentication, set the type property to oauth . OAuth authentication can be configured using one of the following options: Client ID and secret Client ID and refresh token Access token Username and password TLS Client ID and secret You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID and client secret used in authentication. The OAuth client will connect to the OAuth server, authenticate using the client ID and secret and get an access token which it will use to authenticate with the Kafka broker. In the clientSecret property, specify a link to a Secret containing the client secret. An example of OAuth client authentication using client ID and client secret authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret Optionally, scope and audience can be specified if needed. Client ID and refresh token You can configure the address of your OAuth server in the tokenEndpointUri property together with the OAuth client ID and refresh token. The OAuth client will connect to the OAuth server, authenticate using the client ID and refresh token and get an access token which it will use to authenticate with the Kafka broker. In the refreshToken property, specify a link to a Secret containing the refresh token. An example of OAuth client authentication using client ID and refresh token authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token Access token You can configure the access token used for authentication with the Kafka broker directly. In this case, you do not specify the tokenEndpointUri . In the accessToken property, specify a link to a Secret containing the access token. An example of OAuth client authentication using only an access token authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token Username and password OAuth username and password configuration uses the OAuth Resource Owner Password Grant mechanism. The mechanism is deprecated, and is only supported to enable integration in environments where client credentials (ID and secret) cannot be used. You might need to use user accounts if your access management system does not support another approach or user accounts are required for authentication. A typical approach is to create a special user account in your authorization server that represents your client application. You then give the account a long randomly generated password and a very limited set of permissions. For example, the account can only connect to your Kafka cluster, but is not allowed to use any other services or login to the user interface. Consider using a refresh token mechanism first. You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID, username and the password used in authentication. 
The OAuth client will connect to the OAuth server, authenticate using the username, the password, the client ID, and optionally even the client secret to obtain an access token which it will use to authenticate with the Kafka broker. In the passwordSecret property, specify a link to a Secret containing the password. Normally, you also have to configure a clientId using a public OAuth client. If you are using a confidential OAuth client, you also have to configure a clientSecret . An example of OAuth client authentication using username and a password with a public client authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-public-client-id An example of OAuth client authentication using a username and a password with a confidential client authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-confidential-client-id clientSecret: secretName: my-confidential-client-oauth-secret key: client-secret Optionally, scope and audience can be specified if needed. TLS Accessing the OAuth server using the HTTPS protocol does not require any additional configuration as long as the TLS certificates used by it are signed by a trusted certification authority and its hostname is listed in the certificate. If your OAuth server is using certificates which are self-signed or are signed by a certification authority which is not trusted, you can configure a list of trusted certificates in the custom resource. The tlsTrustedCertificates property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format. An example of TLS certificates provided authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt The OAuth client will by default verify that the hostname of your OAuth server matches either the certificate subject or one of the alternative DNS names. If it is not required, you can disable the hostname verification. An example of disabled TLS hostname verification authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true 78.1. KafkaClientAuthenticationOAuth schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationOAuth type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain . It must have the value oauth for the type KafkaClientAuthenticationOAuth . Property Property type Description accessToken GenericSecretSource Link to OpenShift Secret containing the access token which was obtained from the authorization server. accessTokenIsJwt boolean Configure whether access token should be treated as JWT. This should be set to false if the authorization server returns opaque tokens. 
Defaults to true . audience string OAuth audience to use when authenticating against the authorization server. Some authorization servers require the audience to be explicitly set. The possible values depend on how the authorization server is configured. By default, audience is not specified when performing the token endpoint request. clientId string OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. clientSecret GenericSecretSource Link to OpenShift Secret containing the OAuth client secret which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. connectTimeoutSeconds integer The connect timeout in seconds when connecting to authorization server. If not set, the effective connect timeout is 60 seconds. disableTlsHostnameVerification boolean Enable or disable TLS hostname verification. Default value is false . enableMetrics boolean Enable or disable OAuth metrics. Default value is false . httpRetries integer The maximum number of retries to attempt if an initial HTTP request fails. If not set, the default is to not attempt any retries. httpRetryPauseMs integer The pause to take before retrying a failed HTTP request. If not set, the default is to not pause at all but to immediately repeat a request. includeAcceptHeader boolean Whether the Accept header should be set in requests to the authorization servers. The default value is true . maxTokenExpirySeconds integer Set or limit time-to-live of the access tokens to the specified number of seconds. This should be set if the authorization server returns opaque tokens. passwordSecret PasswordSecretSource Reference to the Secret which holds the password. readTimeoutSeconds integer The read timeout in seconds when connecting to authorization server. If not set, the effective read timeout is 60 seconds. refreshToken GenericSecretSource Link to OpenShift Secret containing the refresh token which can be used to obtain access token from the authorization server. scope string OAuth scope to use when authenticating against the authorization server. Some authorization servers require this to be set. The possible values depend on how authorization server is configured. By default scope is not specified when doing the token endpoint request. tlsTrustedCertificates CertSecretSource array Trusted certificates for TLS connection to the OAuth server. tokenEndpointUri string Authorization server token endpoint URI. type string Must be oauth . username string Username used for the authentication. | [
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token",
"authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-public-client-id",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-confidential-client-id clientSecret: secretName: my-confidential-client-oauth-secret key: client-secret",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt",
"authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaclientauthenticationoauth-reference |
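The examples above show only the authentication block itself. The following sketch, which is not part of the schema reference and uses placeholder names, illustrates where that block sits inside a full custom resource, using a KafkaConnect resource with the client ID and secret method:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  # tls trust configuration and other settings are omitted from this sketch
  authentication:
    type: oauth
    tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
    clientId: my-client-id
    clientSecret:
      secretName: my-client-oauth-secret
      key: client-secret

The referenced Secret must exist before the resource is applied; it can be created, for example, with oc create secret generic my-client-oauth-secret --from-literal=client-secret=<client_secret>.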
Chapter 10. Virtual machine templates | Chapter 10. Virtual machine templates 10.1. Creating virtual machine templates 10.1.1. About virtual machine templates Preconfigured Red Hat virtual machine templates are listed in the Virtualization Templates page. These templates are available for different versions of Red Hat Enterprise Linux, Fedora, Microsoft Windows 10, and Microsoft Windows Servers. Each Red Hat virtual machine template is preconfigured with the operating system image, default settings for the operating system, flavor (CPU and memory), and workload type (server). The Templates page displays four types of virtual machine templates: Red Hat Supported templates are fully supported by Red Hat. User Supported templates are Red Hat Supported templates that were cloned and created by users. Red Hat Provided templates have limited support from Red Hat. User Provided templates are Red Hat Provided templates that were cloned and created by users. You can use the filters in the template Catalog to sort the templates by attributes such as boot source availability, operating system, and workload. You cannot edit or delete a Red Hat Supported or Red Hat Provided template. You can clone the template, save it as a custom virtual machine template, and then edit it. You can also create a custom virtual machine template by editing a YAML file example. Important Due to differences in storage behavior, some virtual machine templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for any templates or virtual machines that use data volumes or storage profiles. 10.1.2. About virtual machines and boot sources Virtual machines consist of a virtual machine definition and one or more disks that are backed by data volumes. Virtual machine templates enable you to create virtual machines using predefined virtual machine specifications. Every virtual machine template requires a boot source, which is a fully configured virtual machine disk image including configured drivers. Each virtual machine template contains a virtual machine definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. To use the boot sources feature, install the latest release of OpenShift Virtualization. The namespace openshift-virtualization-os-images enables the feature and is installed with the OpenShift Virtualization Operator. Once the boot source feature is installed, you can create boot sources, attach them to templates, and create virtual machines from the templates. Define a boot source by using a persistent volume claim (PVC) that is populated by uploading a local file, cloning an existing PVC, importing from a registry, or by URL. Attach a boot source to a virtual machine template by using the web console. After the boot source is attached to a virtual machine template, you create any number of fully configured ready-to-use virtual machines from the template. 10.1.3. 
Creating a virtual machine template in the web console You create a virtual machine template by editing a YAML file example in the OpenShift Container Platform web console. Procedure In the web console, click Virtualization Templates in the side menu. Click Create Template . Specify the template parameters by editing the YAML file. Click Create . The template is displayed on the Templates page. Optional: Click Download to download and save the YAML file. 10.1.4. Adding a boot source for a virtual machine template A boot source can be configured for any virtual machine template that you want to use for creating virtual machines or custom templates. When virtual machine templates are configured with a boot source, they are labeled Source available on the Templates page. After you add a boot source to a template, you can create a new virtual machine from the template. There are four methods for selecting and adding a boot source in the web console: Upload local file (creates PVC) URL (creates PVC) Clone (creates PVC) Registry (creates PVC) Prerequisites To add a boot source, you must be logged in as a user with the os-images.kubevirt.io:edit RBAC role or as an administrator. You do not need special privileges to create a virtual machine from a template with a boot source added. To upload a local file, the operating system image file must exist on your local machine. To import via URL, access to the web server with the operating system image is required. For example: the Red Hat Enterprise Linux web page with images. To clone an existing PVC, access to the project with a PVC is required. To import via registry, access to the container registry is required. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. Click the options menu beside a template and select Edit boot source . Click Add disk . In the Add disk window, select Use this disk as a boot source . Enter the disk name and select a Source , for example, Blank (creates PVC) or Use an existing PVC . Enter a value for Persistent Volume Claim size to specify the PVC size that is adequate for the uncompressed image and any additional space that is required. Select a Type , for example, Disk or CD-ROM . Optional: Click Storage class and select the storage class that is used to create the disk. Typically, this storage class is the default storage class that is created for use by all PVCs. Note Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the default storage class. Optional: Clear Apply optimized StorageProfile settings to edit the access mode or volume mode. Select the appropriate method to save your boot source: Click Save and upload if you uploaded a local file. Click Save and import if you imported content from a URL or the registry. Click Save and clone if you cloned an existing PVC. Your custom virtual machine template with a boot source is listed on the Catalog page. You can use this template to create a virtual machine. 10.1.4.1. Virtual machine template fields for adding a boot source The following table describes the fields for Add boot source to template window. 
This window displays when you click Add source for a virtual machine template on the Virtualization Templates page. Name Parameter Description Boot source type Upload local file (creates PVC) Upload a file from your local device. Supported file types include gz, xz, tar, and qcow2. URL (creates PVC) Import content from an image available from an HTTP or HTTPS endpoint. Obtain the download link URL from the web page where the image download is available and enter that URL link in the Import URL field. Example: For a Red Hat Enterprise Linux image, log on to the Red Hat Customer Portal, access the image download page, and copy the download link URL for the KVM guest image. PVC (creates PVC) Use a PVC that is already available in the cluster and clone it. Registry (creates PVC) Specify the bootable operating system container that is located in a registry and accessible from the cluster. Example: kubevirt/cirros-registry-dis-demo. Source provider Optional field. Add descriptive text about the source for the template or the name of the user who created the template. Example: Red Hat. Advanced Storage settings StorageClass The storage class that is used to create the disk. Access mode Access mode of the persistent volume. Supported access modes are Single User (RWO) , Shared Access (RWX) , Read Only (ROX) . If Single User (RWO) is selected, the disk can be mounted as read/write by a single node. If Shared Access (RWX) is selected, the disk can be mounted as read-write by many nodes. The kubevirt-storage-class-defaults config map provides access mode defaults for data volumes. The default value is set according to the best option for each storage class in the cluster. Note Shared Access (RWX) is required for some features, such as live migration of virtual machines between nodes. Volume mode Defines whether the persistent volume uses a formatted file system or raw block state. Supported modes are Block and Filesystem . The kubevirt-storage-class-defaults config map provides volume mode defaults for data volumes. The default value is set according to the best option for each storage class in the cluster. 10.1.5. Additional resources Creating and using boot sources Customizing the storage profile 10.2. Editing virtual machine templates You can edit a virtual machine template in the web console. Note You cannot edit a template provided by the Red Hat Virtualization Operator. If you clone the template, you can edit it. 10.2.1. Editing a virtual machine template in the web console Edit select values of a virtual machine template in the web console by clicking the pencil icon to the relevant field. Other values can be edited using the CLI. You can edit labels and annotations for any templates, including those provided by Red Hat. Other fields are editable for user-customized templates only. Procedure Click Virtualization Templates from the side menu. Optional: Use the Filter drop-down menu to sort the list of virtual machine templates by attributes such as status, template, node, or operating system (OS). Select a virtual machine template to open the Template details page. Click any field that has the pencil icon, which indicates that the field is editable. For example, click the current Boot mode setting, such as BIOS or UEFI, to open the Boot mode window and select an option from the list. Make the relevant changes and click Save . Editing a virtual machine template will not affect virtual machines already created from that template. 10.2.1.1. 
Virtual machine template fields The following table lists the virtual machine template fields that you can edit in the OpenShift Container Platform web console: Table 10.1. Virtual machine template fields Tab Fields or functionality Details Labels Annotations Display name Description Workload profile CPU/Memory Boot mode GPU devices Host devices YAML View, edit, or download the custom resource. Scheduling Node selector Tolerations Affinity rules Dedicated resources Eviction strategy Descheduler setting Network Interfaces Add, edit, or delete a network interface. Disks Add, edit, or delete a disk. Scripts cloud-init settings Parameters (optional) Virtual machine name cloud-user password 10.2.1.2. Adding a network interface to a virtual machine template Use this procedure to add a network interface to a virtual machine template. Procedure Click Virtualization Templates from the side menu. Select a virtual machine template to open the Template details screen. Click the Network Interfaces tab. Click Add Network Interface . In the Add Network Interface window, specify the Name , Model , Network , Type , and MAC Address of the network interface. Click Add . 10.2.1.3. Adding a virtual disk to a virtual machine template Use this procedure to add a virtual disk to a virtual machine template. Procedure Click Virtualization Templates from the side menu. Select a virtual machine template to open the Template details screen. Click the Disks tab and then click Add disk . In the Add disk window, specify the Source , Name , Size , Type , Interface , and Storage Class . Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . 10.2.1.4. Editing CD-ROMs for Templates Use the following procedure to edit CD-ROMs for virtual machine templates. Procedure Click Virtualization Templates from the side menu. Select a virtual machine template to open the Template details screen. Click the Disks tab. Click the Options menu for the CD-ROM that you want to edit and select Edit . In the Edit CD-ROM window, edit the fields: Source , Persistent Volume Claim , Name , Type , and Interface . Click Save . 10.3. Enabling dedicated resources for virtual machine templates Virtual machines can have resources of a node, such as CPU, dedicated to them to improve performance. 10.3.1. About dedicated resources When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. 10.3.2. Prerequisites The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads. 10.3.3. Enabling dedicated resources for a virtual machine template You enable dedicated resources for a virtual machine template in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. 
Select a virtual machine template to open the Template details page. On the Scheduling tab, click the pencil icon beside Dedicated Resources . Select Schedule this workload with dedicated resources (guaranteed policy) . Click Save . 10.4. Deploying a virtual machine template to a custom namespace Red Hat provides preconfigured virtual machine templates that are installed in the openshift namespace. The ssp-operator deploys virtual machine templates to the openshift namespace by default. Templates in the openshift namespace are publicly available to all users. These templates are listed on the Virtualization Templates page for different operating systems. 10.4.1. Creating a custom namespace for templates You can create a custom namespace that is used to deploy virtual machine templates for use by anyone who has permissions to access those templates. To add templates to a custom namespace, edit the HyperConverged custom resource (CR), add commonTemplatesNamespace to the spec, and specify the custom namespace for the virtual machine templates. After the HyperConverged CR is modified, the ssp-operator populates the templates in the custom namespace. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Procedure Use the following command to create your custom namespace: 10.4.2. Adding templates to a custom namespace The ssp-operator deploys virtual machine templates to the openshift namespace by default. Templates in the openshift namespace are publicly available to all users. When a custom namespace is created and templates are added to that namespace, you can modify or delete virtual machine templates in the openshift namespace. To add templates to a custom namespace, edit the HyperConverged custom resource (CR) which contains the ssp-operator . Procedure View the list of virtual machine templates that are available in the openshift namespace. USD oc get templates -n openshift Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged View the list of virtual machine templates that are available in the custom namespace. USD oc get templates -n customnamespace Add the commonTemplatesNamespace attribute and specify the custom namespace. Example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1 1 The custom namespace for deploying templates. Save your changes and exit the editor. The ssp-operator adds virtual machine templates that exist in the default openshift namespace to the custom namespace. 10.4.2.1. Deleting templates from a custom namespace To delete virtual machine templates from a custom namespace, remove the commonTemplatesNamespace attribute from the HyperConverged custom resource (CR) and delete each template from that custom namespace. Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Remove the commonTemplatesNamespace attribute. apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1 1 The commonTemplatesNamespace attribute to be deleted. Delete a specific template from the custom namespace that was removed. USD oc delete templates -n customnamespace <template_name> Verification Verify that the template was deleted from the custom namespace.
USD oc get templates -n customnamespace 10.4.2.2. Additional resources Creating virtual machine templates 10.5. Deleting virtual machine templates You can delete customized virtual machine templates based on Red Hat templates by using the web console. You cannot delete Red Hat templates. 10.5.1. Deleting a virtual machine template in the web console Deleting a virtual machine template permanently removes it from the cluster. Note You can delete customized virtual machine templates. You cannot delete Red Hat-supplied templates. Procedure In the OpenShift Container Platform console, click Virtualization Templates from the side menu. Click the Options menu of a template and select Delete template . Click Delete . | [
"oc create namespace <mycustomnamespace>",
"oc get templates -n openshift",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"oc get templates -n customnamespace",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1",
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1",
"oc delete templates -n customnamespace <template_name>",
"oc get templates -n customnamespace"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/virtualization/virtual-machine-templates |
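The custom-namespace template procedure above relies on interactively editing the HyperConverged CR with oc edit. As a minimal, hedged sketch of the same flow in script form — assuming the default openshift-cnv namespace and an example namespace name vm-templates that does not come from the documentation — a merge patch can set commonTemplatesNamespace non-interactively:

```bash
#!/usr/bin/env bash
# Sketch only: deploy the common templates to a custom namespace without
# opening an editor. "vm-templates" is an illustrative name, not a value
# taken from the procedure above.
set -euo pipefail

CUSTOM_NS="vm-templates"

# Create the namespace that will hold the templates.
oc create namespace "${CUSTOM_NS}"

# Equivalent to adding spec.commonTemplatesNamespace via "oc edit hco".
oc patch hco kubevirt-hyperconverged -n openshift-cnv \
  --type=merge \
  -p "{\"spec\":{\"commonTemplatesNamespace\":\"${CUSTOM_NS}\"}}"

# The ssp-operator repopulates the templates shortly afterwards.
oc get templates -n "${CUSTOM_NS}"
```

If the final listing comes back empty at first, give the ssp-operator a moment to reconcile and run the last command again.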
Chapter 2. New features and enhancements | Chapter 2. New features and enhancements A list of all major enhancements, and new features introduced in this release of Red Hat Trusted Profile Analyzer (RHTPA). The features and enhancements added by this release are: Trusted Profile Analyzer on Red Hat Enterprise Linux With this release, as a Technology Preview, you can deploy RHTPA on Red Hat Enterprise Linux 9 by using an Ansible Playbook. You can customize this deployment solution by using your own PostgreSQL database, OpenID Connect (OIDC) provider, Simple Storage Service (S3), and Simple Queue Service (SQS) services. You can find more information in the RHTPA Deployment Guide . Redesign of the Trusted Profile Analyzer console, and a new CVE impact panel With this release, we designed a new Dashboard homepage that is more intuitive, and gives users more pertinent data at a glance. The Dashboard shows the Common Vulnerabilities and Exposures (CVE) impact on the last 10 software bill of materials (SBOM) uploaded. Along with the impact data, you can also see the date and time, and the number of documents, such as Common Security Advisory Framework (CSAF) advisories, SBOMs and CVEs recently uploaded. New version of the component registry With this release, we updated the Graph for Understanding Artifact Composition (GUAC) component registry to version 0.7.2. This newer GUAC version is easier to support and is more reliable than earlier versions. Currently, there is no upgrade path from RHTPA 1.1 to 1.2. You must do a fresh installation of RHTPA 1.2, and re-upload your documents to use the new features of GUAC 0.7.2. Support for CycloneDX 1.5 and SPDX 2.3 With this release, we now support software bill of materials (SBOM) documents formatted in CycloneDX version 1.5, and SPDX version 2.3. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1.2/html/release_notes/enhancements
Chapter 4. Upgrading SAP NetWeaver System | Chapter 4. Upgrading SAP NetWeaver System 4.1. Upgrading an SAP NetWeaver Non-Cloud or BYOS Cloud RHEL system Follow the Upgrading from RHEL 7 to RHEL 8 guide to upgrade your SAP NetWeaver non-cloud or BYOS cloud RHEL 7.9 system to RHEL 8 minor versions, with the following additional steps: At the end of chapter 3.1. Preparing a RHEL 7 system for the upgrade , remove the line containing kernel.sem from file /etc/sysctl.d/sap.conf . At the end of chapter 6. Verifying the post-upgrade state of the RHEL 8 system , verify that the value of kernel.pid_max is 4194304 according to SAP note 2772999 : # sysctl kernel.pid_max If this is not the case, add the following line to file /etc/sysctl.d/sap.conf: kernel.pid_max = 4194304 and then reload the file with: # sysctl -p /etc/sysctl.d/sap.conf You can run the sap_general_preconfigure and sap_netweaver_preconfigure roles in assert mode to verify if your system is compliant with the SAP notes requirements. These roles are part of the RHEL package RHEL System Roles for SAP or the Ansible collection redhat.sap_install . 4.2. Upgrading an SAP NetWeaver Cloud PAYG RHEL system The upgrade of SAP NetWeaver or other SAP application systems hosted on cloud provider PAYG instances is very similar to the upgrade of SAP HANA systems hosted on cloud provider PAYG instances. All non-HANA specific steps listed earlier in the SAP HANA systems upgrade on cloud provider PAYG instances procedure should be applied to complete the upgrade of SAP NetWeaver or other SAP application systems hosted on cloud provider PAYG instances. The only differences are: The upgrade paths, as the Supported In-place Upgrade Paths section states. The desired target release version is defined by the --target option. For SAP HANA systems, it is either 8.8 or 8.10. For SAP applications, there are two latest EUS/E4S RHEL 8.x minor versions (even numbers usually), which are supported by Leapp for non-HANA systems as per the Upgrading from RHEL 7 to RHEL 8 document. Please use the --target option according to your preferences. For more information, please see leapp --help . The repo channel for standalone SAP NetWeaver hosts on Microsoft Azure PAYG instances. When upgrading RHEL for SAP Applications virtual machine on Microsoft Azure PAYG instances, use --channel eus instead of --channel e4s . In other cases, --channel e4s is always used. After the upgrade with --channel eus , the system will have the following Red Hat repositories: USD yum repolist rhel-8-for-x86_64-appstream-eus-rhui-rpms rhel-8-for-x86_64-baseos-eus-rhui-rpms rhel-8-for-x86_64-sap-netweaver-eus-rhui-rpms The repolist may contain other non-Red Hat repositories, namely custom repositories of cloud providers for RHUI configuration. Note Do not use --channel option if you upgrade to RHEL 8.10, as it is the final minor release of RHEL 8, it is not E4S release, and its support cycle differs. For more information, see Red Hat Enterprise Linux Life Cycle . Please keep in mind that the rhel-8-for-x86_64-sap-solutions-eus-rhui-rpms repository should not be present on RHEL for SAP Applications instances, as per Red Hat Enterprise Linux for SAP Offerings on Microsoft Azure FAQ . At some point, it will be removed by Microsoft Azure via the RHUI client rpm automatic update and does not require any action from users. 
If the automatic RHUI client rpm update has been disabled on your system, for example, by removing the corresponding cron job, the RHUI client rpm can be updated by yum update <package_name> . The in-place upgrade of RHEL 7 with SAP NetWeaver or other SAP applications hosted on cloud providers and using the Red Hat Enterprise Linux for SAP Solutions or Red Hat Enterprise Linux for SAP Applications subscription can be performed only from RHEL 7.9 with normal (non-e4s/eus/... ) repos. RHEL 7.7 or earlier must be updated to RHEL 7.9 first. For special instructions on how to upgrade from RHEL 7.7 or earlier to RHEL 7.9 on cloud providers, refer to How to Perform Update of RHEL for SAP with HA from 7.* to 7.9 on Cloud Providers . As always, run all the upgrade steps, including the preparation and pre-upgrade steps, on a test system first until you have verified that the upgrade can be performed successfully in your production environment. | [
"sysctl kernel.pid_max",
"sysctl -p /etc/sysctl.d/sap.conf",
"yum repolist rhel-8-for-x86_64-appstream-eus-rhui-rpms rhel-8-for-x86_64-baseos-eus-rhui-rpms rhel-8-for-x86_64-sap-netweaver-eus-rhui-rpms"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/upgrading_sap_environments_from_rhel_7_to_rhel_8/asmb_upgrading_netweaver_asmb_upgrading-hana-system |
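To make the SAP NetWeaver steps above concrete, the following hedged sketch combines the kernel.pid_max check from SAP note 2772999 with a representative Leapp invocation for a standalone NetWeaver host on Microsoft Azure PAYG, which uses the EUS channel rather than E4S. The target version 8.8 is only an example; choose the value that the supported upgrade paths allow for your system.

```bash
#!/usr/bin/env bash
# Hedged helper sketch for an SAP NetWeaver host; run as root.
set -euo pipefail

# Ensure kernel.pid_max matches SAP note 2772999 after the upgrade.
if [ "$(sysctl -n kernel.pid_max)" != "4194304" ]; then
    echo "kernel.pid_max = 4194304" >> /etc/sysctl.d/sap.conf
    sysctl -p /etc/sysctl.d/sap.conf
fi

# Example Leapp run for a standalone SAP NetWeaver host on Azure PAYG.
# Adjust --target and --channel to match your landscape and support dates.
leapp preupgrade --target 8.8 --channel eus
leapp upgrade --target 8.8 --channel eus
```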
function::user_char | function::user_char Name function::user_char - Retrieves a char value stored in user space Synopsis Arguments addr the user space address to retrieve the char from Description Returns the char value from a given user space address. Returns zero when user space data is not accessible. | [
"user_char:long(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-char |
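A short usage sketch may help illustrate the tapset entry above. It assumes the syscall tapset exposes the write(2) buffer address as buf_uaddr, which is the conventional variable name; run it as root on a host with SystemTap and the matching kernel debuginfo installed.

```bash
# Print the first byte of every buffer passed to write(2) for five seconds.
timeout 5 stap -e '
probe syscall.write {
  c = user_char(buf_uaddr)
  # user_char() returns zero when the user-space page is not accessible.
  if (c) printf("%s(%d) wrote first byte %d\n", execname(), pid(), c)
}
'
```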
Chapter 4. View OpenShift Data Foundation Topology | Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_google_cloud/viewing-odf-topology_rhodf
Chapter 164. StrimziPodSetSpec schema reference | Chapter 164. StrimziPodSetSpec schema reference Used in: StrimziPodSet Property Property type Description selector LabelSelector Selector is a label query which matches all the pods managed by this StrimziPodSet . Only matchLabels is supported. If matchExpressions is set, it will be ignored. pods Map array The Pods managed by this StrimziPodSet. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-StrimziPodSetSpec-reference |
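Because StrimziPodSet resources are created and reconciled by the Cluster Operator rather than written by hand, a quick way to see the two properties described in the schema above is to inspect an existing resource. This hedged sketch assumes an example Kafka cluster named my-cluster in a namespace called kafka, and that each entry in spec.pods is a full Pod definition carrying metadata.name:

```bash
# Show the matchLabels selector that scopes the pods of this StrimziPodSet.
oc get strimzipodset my-cluster-kafka -n kafka \
  -o jsonpath='{.spec.selector.matchLabels}{"\n"}'

# List the names of the Pod definitions held in the spec.pods array
# (assumes each array entry is a complete Pod with metadata.name).
oc get strimzipodset my-cluster-kafka -n kafka \
  -o jsonpath='{range .spec.pods[*]}{.metadata.name}{"\n"}{end}'
```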
Chapter 5. Installing the single-model serving platform | Chapter 5. Installing the single-model serving platform 5.1. About the single-model serving platform For deploying large models such as large language models (LLMs), OpenShift AI includes a single-model serving platform that is based on the KServe component. To install the single-model serving platform, the following components are required: KServe : A Kubernetes custom resource definition (CRD) that orchestrates model serving for all types of models. KServe includes model-serving runtimes that implement the loading of given types of model servers. KServe also handles the lifecycle of the deployment object, storage access, and networking setup. Red Hat OpenShift Serverless : A cloud-native development model that allows for serverless deployments of models. OpenShift Serverless is based on the open source Knative project. Red Hat OpenShift Service Mesh : A service mesh networking layer that manages traffic flows and enforces access policies. OpenShift Service Mesh is based on the open source Istio project. Note Currently, only OpenShift Service Mesh v2 is supported. For more information, see Supported Configurations . You can install the single-model serving platform manually or in an automated fashion: Automated installation If you have not already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you can configure the Red Hat OpenShift AI Operator to install KServe and configure its dependencies. For more information, see Configuring automated installation of KServe Manual installation If you have already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you cannot configure the Red Hat OpenShift AI Operator to install KServe and configure its dependencies. In this situation, you must install KServe manually. For more information, see Manually installing KServe . 5.2. Configuring automated installation of KServe If you have not already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you can configure the Red Hat OpenShift AI Operator to install KServe and configure its dependencies. Important If you have created a ServiceMeshControlPlane or KNativeServing resource on your cluster, the Red Hat OpenShift AI Operator cannot install KServe and configure its dependencies and the installation does not proceed. In this situation, you must follow the manual installation instructions to install KServe. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Your cluster has a node with 4 CPUs and 16 GB memory. You have downloaded and installed the OpenShift command-line interface (CLI). For more information, see Installing the OpenShift CLI . You have installed the Red Hat OpenShift Service Mesh Operator and dependent Operators. Note To enable automated installation of KServe, install only the required Operators for Red Hat OpenShift Service Mesh. Do not perform any additional configuration or create a ServiceMeshControlPlane resource. You have installed the Red Hat OpenShift Serverless Operator. Note To enable automated installation of KServe, install only the Red Hat OpenShift Serverless Operator. Do not perform any additional configuration or create a KNativeServing resource. You have installed the Red Hat OpenShift AI Operator and created a DataScienceCluster object. 
To add Authorino as an authorization provider so that you can enable token authentication for deployed models, you have installed the Red Hat - Authorino Operator. See Installing the Authorino Operator . Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Install OpenShift Service Mesh as follows: Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, validate that the value of the managementState field for the serviceMesh component is set to Managed , as shown: Note Do not change the istio-system namespace that is specified for the serviceMesh component by default. Other namespace values are not supported. Click Save . Based on the configuration you added to the DSCInitialization object, the Red Hat OpenShift AI Operator installs OpenShift Service Mesh. Install both KServe and OpenShift Serverless as follows: In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab. Click the default-dsc DSC object. Click the YAML tab. In the spec.components section, configure the kserve component as shown. Click Save . The preceding configuration creates an ingress gateway for OpenShift Serverless to receive traffic from OpenShift Service Mesh. In this configuration, observe the following details: The configuration shown uses the default ingress certificate configured for OpenShift to secure incoming traffic to your OpenShift cluster and stores the certificate in the knative-serving-cert secret that is specified in the secretName field. The secretName field can only be set at the time of installation. The default value of the secretName field is knative-serving-cert . Subsequent changes to the certificate secret must be made manually. If you did not use the default secretName value during installation, create a new secret named knative-serving-cert in the istio-system namespace, and then restart the istiod-datascience-smcp-<suffix> pod. You can specify the following certificate types by updating the value of the type field: Provided SelfSigned OpenshiftDefaultIngress To use a self-signed certificate or to provide your own, update the value of the secretName field to specify your secret name and change the value of the type field to SelfSigned or Provided . Note If you provide your own certificate, the certificate must specify the domain name used by the ingress controller of your OpenShift cluster. You can check this value by running the following command: USD oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}' You must set the value of the managementState field to Managed for both the kserve and serving components. Setting kserve.managementState to Managed triggers automated installation of KServe. Setting serving.managementState to Managed triggers automated installation of OpenShift Serverless. However, installation of OpenShift Serverless will not be triggered if kserve.managementState is not also set to Managed . Verification Verify installation of OpenShift Service Mesh as follows: In the web console, click Workloads Pods . From the project list, select istio-system . This is the project in which OpenShift Service Mesh is installed. Confirm that there are running pods for the service mesh control plane, ingress gateway, and egress gateway. 
These pods have the naming patterns shown in the following example: Verify installation of OpenShift Serverless as follows: In the web console, click Workloads Pods . From the project list, select knative-serving . This is the project in which OpenShift Serverless is installed. Confirm that there are numerous running pods in the knative-serving project, including activator, autoscaler, controller, and domain mapping pods, as well as pods for the Knative Istio controller (which controls the integration of OpenShift Serverless and OpenShift Service Mesh). An example is shown. Verify installation of KServe as follows: In the web console, click Workloads Pods . From the project list, select redhat-ods-applications .This is the project in which OpenShift AI components are installed, including KServe. Confirm that the project includes a running pod for the KServe controller manager, similar to the following example: 5.3. Manually installing KServe If you have already installed the Red Hat OpenShift Service Mesh Operator and created a ServiceMeshControlPlane resource or if you have installed the Red Hat OpenShift Serverless Operator and created a KNativeServing resource, the Red Hat OpenShift AI Operator cannot install KServe and configure its dependencies. In this situation, you must install KServe manually. Important The procedures in this section show how to perform a new installation of KServe and its dependencies and are intended as a complete installation and configuration reference. If you have already installed and configured OpenShift Service Mesh or OpenShift Serverless, you might not need to follow all steps. If you are unsure about what updates to apply to your existing configuration to use KServe, contact Red Hat Support. 5.3.1. Installing KServe dependencies Before you install KServe, you must install and configure some dependencies. Specifically, you must create Red Hat OpenShift Service Mesh and Knative Serving instances and then configure secure gateways for Knative Serving. Note Currently, only OpenShift Service Mesh v2 is supported. For more information, see Supported Configurations . 5.3.2. Creating an OpenShift Service Mesh instance The following procedure shows how to create a Red Hat OpenShift Service Mesh instance. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Your cluster has a node with 4 CPUs and 16 GB memory. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have installed the Red Hat OpenShift Service Mesh Operator and dependent Operators. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Create the required namespace for Red Hat OpenShift Service Mesh. You see the following output: Define a ServiceMeshControlPlane object in a YAML file named smcp.yaml with the following contents: For more information about the values in the YAML file, see the Service Mesh control plane configuration reference . Create the service mesh control plane. Verification Verify creation of the service mesh instance as follows: In the OpenShift CLI, enter the following command: The preceding command lists all running pods in the istio-system project. This is the project in which OpenShift Service Mesh is installed. Confirm that there are running pods for the service mesh control plane, ingress gateway, and egress gateway. 
These pods have the following naming patterns: 5.3.3. Creating a Knative Serving instance The following procedure shows how to install Knative Serving and then create an instance. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Your cluster has a node with 4 CPUs and 16 GB memory. You have downloaded and installed the OpenShift command-line interface (CLI). Installing the OpenShift CLI . You have created a Red Hat OpenShift Service Mesh instance. You have installed the Red Hat OpenShift Serverless Operator. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Check whether the required project (that is, namespace ) for Knative Serving already exists. If the project exists, you see output similar to the following example: If the knative-serving project doesn't already exist, create it. You see the following output: Define a ServiceMeshMember object in a YAML file called default-smm.yaml with the following contents: Create the ServiceMeshMember object in the istio-system namespace. You see the following output: Define a KnativeServing object in a YAML file called knativeserving-istio.yaml with the following contents: The preceding file defines a custom resource (CR) for a KnativeServing object. The CR also adds the following actions to each of the activator and autoscaler pods: 1 Injects an Istio sidecar to the pod. This makes the pod part of the service mesh. 2 Enables the Istio sidecar to rewrite the HTTP liveness and readiness probes for the pod. Note If you configure a custom domain for a Knative service, you can use a TLS certificate to secure the mapped service. To do this, you must create a TLS secret, and then update the DomainMapping CR to use the TLS secret that you have created. For more information, see Securing a mapped service using a TLS certificate in the Red Hat OpenShift Serverless documentation. Create the KnativeServing object in the specified knative-serving namespace. You see the following output: Verification Review the default ServiceMeshMemberRoll object in the istio-system namespace. In the description of the ServiceMeshMemberRoll object, locate the Status.Members field and confirm that it includes the knative-serving namespace. Verify creation of the Knative Serving instance as follows: In the OpenShift CLI, enter the following command: The preceding command lists all running pods in the knative-serving project. This is the project in which you created the Knative Serving instance. Confirm that there are numerous running pods in the knative-serving project, including activator, autoscaler, controller, and domain mapping pods, as well as pods for the Knative Istio controller, which controls the integration of OpenShift Serverless and OpenShift Service Mesh. An example is shown. 5.3.4. Creating secure gateways for Knative Serving To secure traffic between your Knative Serving instance and the service mesh, you must create secure gateways for your Knative Serving instance. The following procedure shows how to use OpenSSL version 3 or later to generate a wildcard certificate and key and then use them to create local and ingress gateways for Knative Serving. Important If you have your own wildcard certificate and key to specify when configuring the gateways, you can skip to step 11 of this procedure. Prerequisites You have cluster administrator privileges for your OpenShift cluster. 
You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have created a Red Hat OpenShift Service Mesh instance. You have created a Knative Serving instance. If you intend to generate a wildcard certificate and key, you have downloaded and installed OpenSSL version 3 or later. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Important If you have your own wildcard certificate and key to specify when configuring the gateways, skip to step 11 of this procedure. Set environment variables to define base directories for generation of a wildcard certificate and key for the gateways. Set an environment variable to define the common name used by the ingress controller of your OpenShift cluster. Set an environment variable to define the domain name used by the ingress controller of your OpenShift cluster. Create the required base directories for the certificate generation, based on the environment variables that you previously set. Create the OpenSSL configuration for generation of a wildcard certificate. Generate a root certificate. Generate a wildcard certificate signed by the root certificate. Verify the wildcard certificate. Export the wildcard key and certificate that were created by the script to new environment variables. Optional: To export your own wildcard key and certificate to new environment variables, enter the following commands: Note In the certificate that you provide, you must specify the domain name used by the ingress controller of your OpenShift cluster. You can check this value by running the following command: USD oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}' Create a TLS secret in the istio-system namespace using the environment variables that you set for the wildcard certificate and key. Create a gateways.yaml YAML file with the following contents: 1 Defines a service in the istio-system namespace for the Knative local gateway. 2 Defines an ingress gateway in the knative-serving namespace . The gateway uses the TLS secret you created earlier in this procedure. The ingress gateway handles external traffic to Knative. 3 Defines a local gateway for Knative in the knative-serving namespace. Apply the gateways.yaml file to create the defined resources. You see the following output: Verification Review the gateways that you created. Confirm that you see the local and ingress gateways that you created in the knative-serving namespace, as shown in the following example: 5.3.5. Installing KServe To complete manual installation of KServe, you must install the Red Hat OpenShift AI Operator. Then, you can configure the Operator to install KServe. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Your cluster has a node with 4 CPUs and 16 GB memory. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have created a Red Hat OpenShift Service Mesh instance. You have created a Knative Serving instance. You have created secure gateways for Knative Serving. You have installed the Red Hat OpenShift AI Operator and created a DataScienceCluster object. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. 
For installation of KServe, configure the OpenShift Service Mesh component as follows: Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, add and configure the serviceMesh component as shown: Click Save . For installation of KServe, configure the KServe and OpenShift Serverless components as follows: In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab. Click the default-dsc DSC object. Click the YAML tab. In the spec.components section, configure the kserve component as shown: Within the kserve component, add the serving component, and configure it as shown: Click Save . 5.3.6. Configuring persistent volume claims (PVC) on KServe Enable persistent volume claims (PVC) on your inference service so you can provision persistent storage. For more information about PVC, see Understanding persistent storage . To enable PVC, from the OpenShift AI dashboard, select the Project drop-down and click knative-serving . Then, follow the steps in Enabling PVC support . Verification Verify that the inference service allows PVC as follows: In the OpenShift web console, change into the Administrator perspective. Click Home Search . In Resources , search for InferenceService . Click the name of the inference service. Click the YAML tab. Confirm that volumeMounts appears, similar to the following output: 5.3.7. Disabling KServe dependencies If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. Prerequisites You have used the OpenShift command-line interface (CLI) or web console to disable the KServe component. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Disable the OpenShift Service Mesh component as follows: Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, add the serviceMesh component (if it is not already present) and configure the managementState field as shown: Click Save . Verification In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. The Operator details page opens. In the Conditions section, confirm that there is no ReconcileComplete condition with a status value of Unknown . 5.4. Adding an authorization provider for the single-model serving platform You can add Authorino as an authorization provider for the single-model serving platform. Adding an authorization provider allows you to enable token authentication for models that you deploy on the platform, which ensures that only authorized parties can make inference requests to the models. The method that you use to add Authorino as an authorization provider depends on how you install the single-model serving platform. The installation options for the platform are described as follows: Automated installation If you have not already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you can configure the Red Hat OpenShift AI Operator to install KServe and its dependencies. You can include Authorino as part of the automated installation process. For more information about automated installation, including Authorino, see Configuring automated installation of KServe .
Manual installation If you have already created a ServiceMeshControlPlane or KNativeServing resource on your OpenShift cluster, you cannot configure the Red Hat OpenShift AI Operator to install KServe and its dependencies. In this situation, you must install KServe manually. You must also manually configure Authorino. For more information about manual installation, including Authorino, see Manually installing KServe . 5.4.1. Manually adding an authorization provider You can add Authorino as an authorization provider for the single-model serving platform. Adding an authorization provider allows you to enable token authentication for models that you deploy on the platform, which ensures that only authorized parties can make inference requests to the models. To manually add Authorino as an authorization provider, you must install the Red Hat - Authorino Operator, create an Authorino instance, and then configure the OpenShift Service Mesh and KServe components to use the instance. Important To manually add an authorization provider, you must make configuration updates to your OpenShift Service Mesh instance. To ensure that your OpenShift Service Mesh instance remains in a supported state, make only the updates shown in this section. Prerequisites You have reviewed the options for adding Authorino as an authorization provider and identified manual installation as the appropriate option. See Adding an authorization provider . You have manually installed KServe and its dependencies, including OpenShift Service Mesh. See Manually installing KServe . When you manually installed KServe, you set the value of the managementState field for the serviceMesh component to Unmanaged . This setting is required for manually adding Authorino. See Installing KServe . 5.4.2. Installing the Red Hat Authorino Operator Before you can add Autorino as an authorization provider, you must install the Red Hat - Authorino Operator on your OpenShift cluster. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators OperatorHub . On the OperatorHub page, in the Filter by keyword field, type Red Hat - Authorino . Click the Red Hat - Authorino Operator. On the Red Hat - Authorino Operator page, review the Operator information and then click Install . On the Install Operator page, keep the default values for Update channel , Version , Installation mode , Installed Namespace and Update Approval . Click Install . Verification In the OpenShift web console, click Operators Installed Operators and confirm that the Red Hat - Authorino Operator shows one of the following statuses: Installing - installation is in progress; wait for this to change to Succeeded . This might take several minutes. Succeeded - installation is successful. 5.4.3. Creating an Authorino instance When you have installed the Red Hat - Authorino Operator on your OpenShift cluster, you must create an Authorino instance. Prerequisites You have installed the Red Hat - Authorino Operator. You have privileges to add resources to the project in which your OpenShift Service Mesh instance was created. See Creating an OpenShift Service Mesh instance . For more information about OpenShift permissions, see Using RBAC to define and apply permissions . Procedure Open a new terminal window. Log in to the OpenShift command-line interface (CLI) as follows: Create a namespace to install the Authorino instance. 
Note The automated installation process creates a namespace called redhat-ods-applications-auth-provider for the Authorino instance. Consider using the same namespace name for the manual installation. To enroll the new namespace for the Authorino instance in your existing OpenShift Service Mesh instance, create a new YAML file with the following contents: Save the YAML file. Create the ServiceMeshMember resource on your cluster. To configure an Authorino instance, create a new YAML file as shown in the following example: Save the YAML file. Create the Authorino resource on your cluster. Patch the Authorino deployment to inject an Istio sidecar, which makes the Authorino instance part of your OpenShift Service Mesh instance. Verification Confirm that the Authorino instance is running as follows: Check the pods (and containers) that are running in the namespace that you created for the Authorino instance, as shown in the following example: Confirm that the output resembles the following example: As shown in the example, there is a single running pod for the Authorino instance. The pod has containers for Authorino and for the Istio sidecar that you injected. 5.4.4. Configuring an OpenShift Service Mesh instance to use Authorino When you have created an Authorino instance, you must configure your OpenShift Service Mesh instance to use Authorino as an authorization provider. Important To ensure that your OpenShift Service Mesh instance remains in a supported state, make only the configuration updates shown in the following procedure. Prerequisites You have created an Authorino instance and enrolled the namespace for the Authorino instance in your OpenShift Service Mesh instance. You have privileges to modify the OpenShift Service Mesh instance. See Creating an OpenShift Service Mesh instance . Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a user that has privileges to update the OpenShift Service Mesh instance, log in to the OpenShift CLI as shown in the following example: Create a new YAML file with the following contents: Save the YAML file. Use the oc patch command to apply the YAML file to your OpenShift Service Mesh instance. Important You can apply the configuration shown as a patch only if you have not already specified other extension providers in your OpenShift Service Mesh instance. If you have already specified other extension providers, you must manually edit your ServiceMeshControlPlane resource to add the configuration. Verification Verify that your Authorino instance has been added as an extension provider in your OpenShift Service Mesh configuration as follows: Inspect the ConfigMap object for your OpenShift Service Mesh instance: Confirm that you see output similar to the following example, which shows that the Authorino instance has been successfully added as an extension provider. 5.4.5. Configuring authorization for KServe To configure the single-model serving platform to use Authorino, you must create a global AuthorizationPolicy resource that is applied to the KServe predictor pods that are created when you deploy a model. In addition, to account for the multiple network hops that occur when you make an inference request to a model, you must create an EnvoyFilter resource that continually resets the HTTP host header to the one initially included in the inference request. Prerequisites You have created an Authorino instance and configured your OpenShift Service Mesh to use it. 
You have privileges to update the KServe deployment on your cluster. You have privileges to add resources to the project in which your OpenShift Service Mesh instance was created. See Creating an OpenShift Service Mesh instance . Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a user that has privileges to update the KServe deployment, log in to the OpenShift CLI as shown in the following example: Create a new YAML file with the following contents: 1 The name that you specify must match the name of the extension provider that you added to your OpenShift Service Mesh instance. Save the YAML file. Create the AuthorizationPolicy resource in the namespace for your OpenShift Service Mesh instance. Create another new YAML file with the following contents: The EnvoyFilter resource shown continually resets the HTTP host header to the one initially included in any inference request. Create the EnvoyFilter resource in the namespace for your OpenShift Service Mesh instance. Verification Check that the AuthorizationPolicy resource was successfully created. Confirm that you see output similar to the following example: Check that the EnvoyFilter resource was successfully created. Confirm that you see output similar to the following example: | [
"spec: applicationsNamespace: redhat-ods-applications monitoring: managementState: Managed namespace: redhat-ods-monitoring serviceMesh: controlPlane: metricsCollection: Istio name: data-science-smcp namespace: istio-system managementState: Managed",
"spec: components: kserve: managementState: Managed serving: ingressGateway: certificate: secretName: knative-serving-cert type: OpenshiftDefaultIngress managementState: Managed name: knative-serving",
"NAME READY STATUS RESTARTS AGE istio-egressgateway-7c46668687-fzsqj 1/1 Running 0 22h istio-ingressgateway-77f94d8f85-fhsp9 1/1 Running 0 22h istiod-data-science-smcp-cc8cfd9b8-2rkg4 1/1 Running 0 22h",
"NAME READY STATUS RESTARTS AGE activator-7586f6f744-nvdlb 2/2 Running 0 22h activator-7586f6f744-sd77w 2/2 Running 0 22h autoscaler-764fdf5d45-p2v98 2/2 Running 0 22h autoscaler-764fdf5d45-x7dc6 2/2 Running 0 22h autoscaler-hpa-7c7c4cd96d-2lkzg 1/1 Running 0 22h autoscaler-hpa-7c7c4cd96d-gks9j 1/1 Running 0 22h controller-5fdfc9567c-6cj9d 1/1 Running 0 22h controller-5fdfc9567c-bf5x7 1/1 Running 0 22h domain-mapping-56ccd85968-2hjvp 1/1 Running 0 22h domain-mapping-56ccd85968-lg6mw 1/1 Running 0 22h domainmapping-webhook-769b88695c-gp2hk 1/1 Running 0 22h domainmapping-webhook-769b88695c-npn8g 1/1 Running 0 22h net-istio-controller-7dfc6f668c-jb4xk 1/1 Running 0 22h net-istio-controller-7dfc6f668c-jxs5p 1/1 Running 0 22h net-istio-webhook-66d8f75d6f-bgd5r 1/1 Running 0 22h net-istio-webhook-66d8f75d6f-hld75 1/1 Running 0 22h webhook-7d49878bc4-8xjbr 1/1 Running 0 22h webhook-7d49878bc4-s4xx4 1/1 Running 0 22h",
"NAME READY STATUS RESTARTS AGE kserve-controller-manager-7fbb7bccd4-t4c5g 1/1 Running 0 22h odh-model-controller-6c4759cc9b-cftmk 1/1 Running 0 129m odh-model-controller-6c4759cc9b-ngj8b 1/1 Running 0 129m odh-model-controller-6c4759cc9b-vnhq5 1/1 Running 0 129m",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"oc create ns istio-system",
"namespace/istio-system created",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: minimal namespace: istio-system spec: tracing: type: None addons: grafana: enabled: false kiali: name: kiali enabled: false prometheus: enabled: false jaeger: name: jaeger security: dataPlane: mtls: true identity: type: ThirdParty techPreview: meshConfig: defaultConfig: terminationDrainDuration: 35s gateways: ingress: service: metadata: labels: knative: ingressgateway proxy: networking: trafficControl: inbound: excludedPorts: - 8444 - 8022",
"oc apply -f smcp.yaml",
"oc get pods -n istio-system",
"NAME READY STATUS RESTARTS AGE istio-egressgateway-7c46668687-fzsqj 1/1 Running 0 22h istio-ingressgateway-77f94d8f85-fhsp9 1/1 Running 0 22h istiod-data-science-smcp-cc8cfd9b8-2rkg4 1/1 Running 0 22h",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"oc get ns knative-serving",
"NAME STATUS AGE knative-serving Active 4d20h",
"oc create ns knative-serving",
"namespace/knative-serving created",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: knative-serving spec: controlPlaneRef: namespace: istio-system name: minimal",
"oc apply -f default-smm.yaml",
"servicemeshmember.maistra.io/default created",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/default-enable-http2: \"true\" spec: workloads: - name: net-istio-controller env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'true' - annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2 name: activator - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: autoscaler ingress: istio: enabled: true config: features: kubernetes.podspec-affinity: enabled kubernetes.podspec-nodeselector: enabled kubernetes.podspec-tolerations: enabled",
"oc apply -f knativeserving-istio.yaml",
"knativeserving.operator.knative.dev/knative-serving created",
"oc describe smmr default -n istio-system",
"oc get pods -n knative-serving",
"NAME READY STATUS RESTARTS AGE activator-7586f6f744-nvdlb 2/2 Running 0 22h activator-7586f6f744-sd77w 2/2 Running 0 22h autoscaler-764fdf5d45-p2v98 2/2 Running 0 22h autoscaler-764fdf5d45-x7dc6 2/2 Running 0 22h autoscaler-hpa-7c7c4cd96d-2lkzg 1/1 Running 0 22h autoscaler-hpa-7c7c4cd96d-gks9j 1/1 Running 0 22h controller-5fdfc9567c-6cj9d 1/1 Running 0 22h controller-5fdfc9567c-bf5x7 1/1 Running 0 22h domain-mapping-56ccd85968-2hjvp 1/1 Running 0 22h domain-mapping-56ccd85968-lg6mw 1/1 Running 0 22h domainmapping-webhook-769b88695c-gp2hk 1/1 Running 0 22h domainmapping-webhook-769b88695c-npn8g 1/1 Running 0 22h net-istio-controller-7dfc6f668c-jb4xk 1/1 Running 0 22h net-istio-controller-7dfc6f668c-jxs5p 1/1 Running 0 22h net-istio-webhook-66d8f75d6f-bgd5r 1/1 Running 0 22h net-istio-webhook-66d8f75d6f-hld75 1/1 Running 0 22h webhook-7d49878bc4-8xjbr 1/1 Running 0 22h webhook-7d49878bc4-s4xx4 1/1 Running 0 22h",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"export BASE_DIR=/tmp/kserve export BASE_CERT_DIR=USD{BASE_DIR}/certs",
"export COMMON_NAME=USD(oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}' | awk -F'.' '{print USD(NF-1)\".\"USDNF}')",
"export DOMAIN_NAME=USD(oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}')",
"mkdir USD{BASE_DIR} mkdir USD{BASE_CERT_DIR}",
"cat <<EOF> USD{BASE_DIR}/openssl-san.config [ req ] distinguished_name = req [ san ] subjectAltName = DNS:*.USD{DOMAIN_NAME} EOF",
"openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:2048 -subj \"/O=Example Inc./CN=USD{COMMON_NAME}\" -keyout USD{BASE_CERT_DIR}/root.key -out USD{BASE_CERT_DIR}/root.crt",
"openssl req -x509 -newkey rsa:2048 -sha256 -days 3560 -nodes -subj \"/CN=USD{COMMON_NAME}/O=Example Inc.\" -extensions san -config USD{BASE_DIR}/openssl-san.config -CA USD{BASE_CERT_DIR}/root.crt -CAkey USD{BASE_CERT_DIR}/root.key -keyout USD{BASE_CERT_DIR}/wildcard.key -out USD{BASE_CERT_DIR}/wildcard.crt openssl x509 -in USD{BASE_CERT_DIR}/wildcard.crt -text",
"openssl verify -CAfile USD{BASE_CERT_DIR}/root.crt USD{BASE_CERT_DIR}/wildcard.crt",
"export TARGET_CUSTOM_CERT=USD{BASE_CERT_DIR}/wildcard.crt export TARGET_CUSTOM_KEY=USD{BASE_CERT_DIR}/wildcard.key",
"export TARGET_CUSTOM_CERT= <path_to_certificate> export TARGET_CUSTOM_KEY= <path_to_key>",
"oc create secret tls wildcard-certs --cert=USD{TARGET_CUSTOM_CERT} --key=USD{TARGET_CUSTOM_KEY} -n istio-system",
"apiVersion: v1 kind: Service 1 metadata: labels: experimental.istio.io/disable-gateway-port-translation: \"true\" name: knative-local-gateway namespace: istio-system spec: ports: - name: http2 port: 80 protocol: TCP targetPort: 8081 selector: knative: ingressgateway type: ClusterIP --- apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: knative-ingress-gateway 2 namespace: knative-serving spec: selector: knative: ingressgateway servers: - hosts: - '*' port: name: https number: 443 protocol: HTTPS tls: credentialName: wildcard-certs mode: SIMPLE --- apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: knative-local-gateway 3 namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 8081 name: https protocol: HTTPS tls: mode: ISTIO_MUTUAL hosts: - \"*\"",
"oc apply -f gateways.yaml",
"service/knative-local-gateway created gateway.networking.istio.io/knative-ingress-gateway created gateway.networking.istio.io/knative-local-gateway created",
"oc get gateway --all-namespaces",
"NAMESPACE NAME AGE knative-serving knative-ingress-gateway 69s knative-serving knative-local-gateway 2m",
"spec: serviceMesh: managementState: Unmanaged",
"spec: components: kserve: managementState: Managed",
"spec: components: kserve: managementState: Managed serving: managementState: Unmanaged",
"apiVersion: \"serving.kserve.io/v1beta1\" kind: \"InferenceService\" metadata: name: \"sklearn-iris\" spec: predictor: model: runtime: kserve-mlserver modelFormat: name: sklearn storageUri: \"gs://kfserving-examples/models/sklearn/1.0/model\" volumeMounts: - name: my-dynamic-volume mountPath: /tmp/data volumes: - name: my-dynamic-volume persistentVolumeClaim: claimName: my-dynamic-pvc",
"spec: serviceMesh: managementState: Removed",
"oc login <openshift_cluster_url> -u <username> -p <password>",
"oc new-project <namespace_for_authorino_instance>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMember metadata: name: default namespace: <namespace_for_authorino_instance> spec: controlPlaneRef: namespace: <namespace_for_service_mesh_instance> name: <name_of_service_mesh_instance>",
"oc create -f <file_name> .yaml",
"apiVersion: operator.authorino.kuadrant.io/v1beta1 kind: Authorino metadata: name: authorino namespace: <namespace_for_authorino_instance> spec: authConfigLabelSelectors: security.opendatahub.io/authorization-group=default clusterWide: true listener: tls: enabled: false oidcServer: tls: enabled: false",
"oc create -f <file_name> .yaml",
"oc patch deployment <name_of_authorino_instance> -n <namespace_for_authorino_instance> -p '{\"spec\": {\"template\":{\"metadata\":{\"labels\":{\"sidecar.istio.io/inject\":\"true\"}}}} }'",
"oc get pods -n redhat-ods-applications-auth-provider -o=\"custom-columns=NAME:.metadata.name,STATUS:.status.phase,CONTAINERS:.spec.containers[*].name\"",
"NAME STATUS CONTAINERS authorino-6bc64bd667-kn28z Running authorino,istio-proxy",
"oc login <openshift_cluster_url> -u <username> -p <password>",
"spec: techPreview: meshConfig: extensionProviders: - name: redhat-ods-applications-auth-provider envoyExtAuthzGrpc: service: <name_of_authorino_instance> -authorino-authorization. <namespace_for_authorino_instance> .svc.cluster.local port: 50051",
"oc patch smcp <name_of_service_mesh_instance> --type merge -n <namespace_for_service_mesh_instance> --patch-file <file_name> .yaml",
"oc get configmap istio- <name_of_service_mesh_instance> -n <namespace_for_service_mesh_instance> --output=jsonpath= {.data.mesh}",
"defaultConfig: discoveryAddress: istiod-data-science-smcp.istio-system.svc:15012 proxyMetadata: ISTIO_META_DNS_AUTO_ALLOCATE: \"true\" ISTIO_META_DNS_CAPTURE: \"true\" PROXY_XDS_VIA_AGENT: \"true\" terminationDrainDuration: 35s tracing: {} dnsRefreshRate: 300s enablePrometheusMerge: true extensionProviders: - envoyExtAuthzGrpc: port: 50051 service: authorino-authorino-authorization.opendatahub-auth-provider.svc.cluster.local name: opendatahub-auth-provider ingressControllerMode: \"OFF\" rootNamespace: istio-system trustDomain: null%",
"oc login <openshift_cluster_url> -u <username> -p <password>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: kserve-predictor spec: action: CUSTOM provider: name: redhat-ods-applications-auth-provider 1 rules: - to: - operation: notPaths: - /healthz - /debug/pprof/ - /metrics - /wait-for-drain selector: matchLabels: component: predictor",
"oc create -n <namespace_for_service_mesh_instance> -f <file_name> .yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: activator-host-header spec: priority: 20 workloadSelector: labels: component: predictor configPatches: - applyTo: HTTP_FILTER match: listener: filterChain: filter: name: envoy.filters.network.http_connection_manager patch: operation: INSERT_BEFORE value: name: envoy.filters.http.lua typed_config: '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua inlineCode: | function envoy_on_request(request_handle) local headers = request_handle:headers() if not headers then return end local original_host = headers:get(\"k-original-host\") if original_host then port_seperator = string.find(original_host, \":\", 7) if port_seperator then original_host = string.sub(original_host, 0, port_seperator-1) end headers:replace('host', original_host) end end",
"oc create -n <namespace_for_service_mesh_instance> -f <file_name> .yaml",
"oc get authorizationpolicies -n <namespace_for_service_mesh_instance>",
"NAME AGE kserve-predictor 28h",
"oc get envoyfilter -n <namespace_for_service_mesh_instance>",
"NAME AGE activator-host-header 28h"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed_in_a_disconnected_environment/installing-the-single-model-serving-platform_component-install |
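Because the KServe chapter above spreads its verification steps across three namespaces, a single hedged script can be convenient. The namespaces and object names below match the defaults used in the chapter (istio-system, knative-serving, redhat-ods-applications, default-dsc); the kserve-controller-manager label selector is an assumption, so drop the -l argument if it returns nothing.

```bash
#!/usr/bin/env bash
# Sketch: roll the chapter's verification checks into one pass.
set -euo pipefail

echo "== Service Mesh control plane pods =="
oc get pods -n istio-system

echo "== Knative Serving pods =="
oc get pods -n knative-serving

echo "== KServe controller pods (label selector is an assumption) =="
oc get pods -n redhat-ods-applications -l control-plane=kserve-controller-manager

echo "== managementState of kserve / serving in the DataScienceCluster =="
oc get datasciencecluster default-dsc \
  -o jsonpath='{.spec.components.kserve.managementState}{" / "}{.spec.components.kserve.serving.managementState}{"\n"}'
```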
5.90. gnome-power-manager | 5.90. gnome-power-manager 5.90.1. RHBA-2012:0935 - gnome-power-manager bug fix update Updated gnome-power-manager packages that fix one bug are now available for Red Hat Enterprise Linux 6. GNOME Power Manager uses the information and facilities provided by UPower displaying icons and handling user callbacks in an interactive GNOME session. Bug Fix BZ# 676866 After resuming the system or re-enabling the display, an icon could appear in the notification area with an erroneous tooltip that read "Session active, not inhibited, screen idle. If you see this test, your display server is broken and you should notify your distributor." and included a URL to an external web page. This error message was incorrect, had no effect on the system and could be safely ignored. In addition, linking to an external URL from the notification and status area is unwanted. To prevent this, the icon is no longer used for debugging idle problems. All users of gnome-power-manager are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/gnome-power-manager |
Chapter 1. Installing the Hot Rod C++ client | Chapter 1. Installing the Hot Rod C++ client Install the Hot Rod C++ client on your host system as a dynamic library. 1.1. C++ compiler requirements Operating system Required compiler Red Hat Enterprise Linux (RHEL) 8 C++ 11 compiler (GCC 8.5.0) RHEL 9 C++ 11 compiler (GCC 11.3.1) Microsoft Windows 7 x64 C++ 11 compiler (Visual Studio 14 2015 Win64, Microsoft Visual C++ 2013 Redistributable Package for the x64 platform) 1.2. Installing Hot Rod C++ clients on Red Hat Enterprise Linux (RHEL) Data Grid provides an RPM distribution of the Hot Rod C++ client for RHEL. Procedure Enable the repository for the Hot Rod C++ client on RHEL. RHEL version Repository RHEL 8 jb-datagrid-8.4-for-rhel-8-x86_64-rpms RHEL 9 jb-datagrid-8.4-for-rhel-9-x86_64-rpms Install the Hot Rod C++ client. Additional resources Enabling or disabling a repository using Red Hat Subscription Management (Red Hat Knowledgebase) Red Hat Package Browser 1.3. Installing Hot Rod C++ clients on Microsoft Windows Data Grid provides an archived version of the Hot Rod C++ client for installation on Windows. Procedure Download the ZIP archive for the Hot Rod C++ client from the Data Grid Software Downloads . Extract the ZIP archive to your file system. | [
"yum install jdg-cpp-client"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_cpp_client_guide/hr_client_installation |
Chapter 7. Migrating Existing Environments from Synchronization to Trust | Chapter 7. Migrating Existing Environments from Synchronization to Trust Synchronization and trust are two possible approaches to indirect integration. Synchronization is generally discouraged, and Red Hat recommends to use the approach based on Active Directory (AD) trust instead. See Section 1.3, "Indirect Integration" for details. This chapter describes how to migrate an existing synchronization-based setup to AD trust. The following migrating options are available in IdM: Section 7.1, "Migrate from Synchronization to Trust Automatically Using ipa-winsync-migrate " Section 7.2, "Migrate from Synchronization to Trust Manually Using ID Views" 7.1. Migrate from Synchronization to Trust Automatically Using ipa-winsync-migrate Important The ipa-winsync-migrate utility is only available on systems running Red Hat Enterprise Linux 7.2 or later. 7.1.1. How Migration Using ipa-winsync-migrate Works The ipa-winsync-migrate utility migrates all synchronized users from an AD forest, while preserving the existing configuration in the Winsync environment and transferring it into the AD trust. For each AD user created by the Winsync agreement, ipa-winsync-migrate creates an ID override in the Default Trust View (see Section 8.1, "Active Directory Default Trust View" ). After the migration completes: The ID overrides for the AD users have the following attributes copied from the original entry in Winsync: Login name ( uid ) UID number ( uidnumber ) GID number ( gidnumber ) Home directory ( homedirectory ) GECOS entry ( gecos ) The user accounts in the AD trust keep their original configuration in IdM, which includes: POSIX attributes User groups Role-based access control rules Host-based access control rules SELinux membership sudo rules The new AD users are added as members of an external IdM group. The original Winsync replication agreement, the original synchronized user accounts, and all local copies of the user accounts are removed. Note The user must make sure before calling ipa-winsync-migrate that there is no entry on the AD side with the same name as the IdM administrator ("admin" by default). Otherwise ipa-winsync-migrate will remove the local copy of the "admin" user account, meaning that it will delete IdM admin user. 7.1.2. How to Migrate Using ipa-winsync-migrate Before you begin: Back up your IdM setup using the ipa-backup utility. See Backing Up and Restoring Identity Management in the Linux Domain Identity, Authentication, and Policy Guide . Reason: The migration affects a significant part of the IdM configuration and many user accounts. Creating a backup enables you to restore your original setup if necessary. To migrate: Create a trust with the synchronized domain. See Chapter 5, Creating Cross-forest Trusts with Active Directory and Identity Management . Run ipa-winsync-migrate and specify the AD realm and the host name of the AD domain controller: If a conflict occurs in the overrides created by ipa-winsync-migrate , information about the conflict is displayed, but the migration continues. Uninstall the Password Sync service from the AD server. This removes the synchronization agreement from the AD domain controllers. See the ipa-winsync-migrate (1) man page for more details about the utility. | [
"ipa-winsync-migrate --realm example.com --server ad.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/migrate-sync-trust |
Chapter 3. Creating build inputs | Chapter 3. Creating build inputs Use the following sections for an overview of build inputs, instructions on how to use inputs to provide source content for builds to operate on, and how to use build environments and create secrets. 3.1. Build inputs A build input provides source content for builds to operate on. You can use the following build inputs to provide sources in OpenShift Container Platform, listed in order of precedence: Inline Dockerfile definitions Content extracted from existing images Git repositories Binary (Local) inputs Input secrets External artifacts You can combine multiple inputs in a single build. However, as the inline Dockerfile takes precedence, it can overwrite any other file named Dockerfile provided by another input. Binary (local) input and Git repositories are mutually exclusive inputs. You can use input secrets when you do not want certain resources or credentials used during a build to be available in the final application image produced by the build, or want to consume a value that is defined in a secret resource. External artifacts can be used to pull in additional files that are not available as one of the other build input types. When you run a build: A working directory is constructed and all input content is placed in the working directory. For example, the input Git repository is cloned into the working directory, and files specified from input images are copied into the working directory using the target path. The build process changes directories into the contextDir , if one is defined. The inline Dockerfile, if any, is written to the current directory. The content from the current directory is provided to the build process for reference by the Dockerfile, custom builder logic, or assemble script. This means any input content that resides outside the contextDir is ignored by the build. The following example of a source definition includes multiple input types and an explanation of how they are combined. For more details on how each input type is defined, see the specific sections for each input type. source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: "master" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: "app/dir" 3 dockerfile: "FROM centos:7\nRUN yum install -y httpd" 4 1 The repository to be cloned into the working directory for the build. 2 /usr/lib/somefile.jar from myinputimage is stored in <workingdir>/app/dir/injected/dir . 3 The working directory for the build becomes <original_workingdir>/app/dir . 4 A Dockerfile with this content is created in <original_workingdir>/app/dir , overwriting any existing file with that name. 3.2. Dockerfile source When you supply a dockerfile value, the content of this field is written to disk as a file named dockerfile . This is done after other input sources are processed, so if the input source repository contains a Dockerfile in the root directory, it is overwritten with this content. The source definition is part of the spec section in the BuildConfig : source: dockerfile: "FROM centos:7\nRUN yum install -y httpd" 1 1 The dockerfile field contains an inline Dockerfile that is built. Additional resources The typical use for this field is to provide a Dockerfile to a docker strategy build. 3.3. Image source You can add additional files to the build process with images. 
Input images are referenced in the same way the From and To image targets are defined. This means both container images and image stream tags can be referenced. In conjunction with the image, you must provide one or more path pairs to indicate the path of the files or directories to copy the image and the destination to place them in the build context. The source path can be any absolute path within the image specified. The destination must be a relative directory path. At build time, the image is loaded and the indicated files and directories are copied into the context directory of the build process. This is the same directory into which the source repository content is cloned. If the source path ends in /. then the content of the directory is copied, but the directory itself is not created at the destination. Image inputs are specified in the source definition of the BuildConfig : source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: "master" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar 1 An array of one or more input images and files. 2 A reference to the image containing the files to be copied. 3 An array of source/destination paths. 4 The directory relative to the build root where the build process can access the file. 5 The location of the file to be copied out of the referenced image. 6 An optional secret provided if credentials are needed to access the input image. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. Images that require pull secrets When using an input image that requires a pull secret, you can link the pull secret to the service account used by the build. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the input image. To link a pull secret to the service account used by the build, run: USD oc secrets link builder dockerhub Note This feature is not supported for builds using the custom strategy. Images on mirrored registries that require pull secrets When using an input image from a mirrored registry, if you get a build error: failed to pull image message, you can resolve the error by using either of the following methods: Create an input secret that contains the authentication credentials for the builder image's repository and all known mirrors. In this case, create a pull secret for credentials to the image registry and its mirrors. Use the input secret as the pull secret on the BuildConfig object. 3.4. Git source When specified, source code is fetched from the supplied location. If you supply an inline Dockerfile, it overwrites the Dockerfile in the contextDir of the Git repository. The source definition is part of the spec section in the BuildConfig : source: git: 1 uri: "https://github.com/openshift/ruby-hello-world" ref: "master" contextDir: "app/dir" 2 dockerfile: "FROM openshift/ruby-22-centos7\nUSER example" 3 1 The git field contains the Uniform Resource Identifier (URI) to the remote Git repository of the source code. 
You must specify the value of the ref field to check out a specific Git reference. A valid ref can be a SHA1 tag or a branch name. The default value of the ref field is master . 2 The contextDir field allows you to override the default location inside the source code repository where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field. 3 If the optional dockerfile field is provided, it should be a string containing a Dockerfile that overwrites any Dockerfile that may exist in the source repository. If the ref field denotes a pull request, the system uses a git fetch operation and then checkout FETCH_HEAD . When no ref value is provided, OpenShift Container Platform performs a shallow clone ( --depth=1 ). In this case, only the files associated with the most recent commit on the default branch (typically master ) are downloaded. This results in repositories downloading faster, but without the full commit history. To perform a full git clone of the default branch of a specified repository, set ref to the name of the default branch (for example main ). Warning Git clone operations that go through a proxy that is performing man in the middle (MITM) TLS hijacking or reencrypting of the proxied connection do not work. 3.4.1. Using a proxy If your Git repository can only be accessed using a proxy, you can define the proxy to use in the source section of the build configuration. You can configure both an HTTP and HTTPS proxy to use. Both fields are optional. Domains for which no proxying should be performed can also be specified in the NoProxy field. Note Your source URI must use the HTTP or HTTPS protocol for this to work. source: git: uri: "https://github.com/openshift/ruby-hello-world" ref: "master" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com Note For Pipeline strategy builds, given the current restrictions with the Git plugin for Jenkins, any Git operations through the Git plugin do not leverage the HTTP or HTTPS proxy defined in the BuildConfig . The Git plugin only uses the proxy configured in the Jenkins UI at the Plugin Manager panel. This proxy is then used for all git interactions within Jenkins, across all jobs. Additional resources You can find instructions on how to configure proxies through the Jenkins UI at JenkinsBehindProxy . 3.4.2. Source Clone Secrets Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access it would not normally have access to, such as private repositories or repositories with self-signed or untrusted SSL certificates. The following source clone secret configurations are supported: A .gitconfig file Basic authentication SSH key authentication Trusted certificate authorities Note You can also use combinations of these configurations to meet your specific needs. 3.4.2.1. Automatically adding a source clone secret to a build configuration When a BuildConfig is created, OpenShift Container Platform can automatically populate its source clone secret reference. This behavior allows the resulting builds to automatically use the credentials stored in the referenced secret to authenticate to a remote Git repository, without requiring further configuration. 
To use this functionality, a secret containing the Git repository credentials must exist in the namespace in which the BuildConfig is later created. This secrets must include one or more annotations prefixed with build.openshift.io/source-secret-match-uri- . The value of each of these annotations is a Uniform Resource Identifier (URI) pattern, which is defined as follows. When a BuildConfig is created without a source clone secret reference and its Git source URI matches a URI pattern in a secret annotation, OpenShift Container Platform automatically inserts a reference to that secret in the BuildConfig . Prerequisites A URI pattern must consist of: A valid scheme: *:// , git:// , http:// , https:// or ssh:// A host: *` or a valid hostname or IP address optionally preceded by *. A path: /* or / followed by any characters optionally including * characters In all of the above, a * character is interpreted as a wildcard. Important URI patterns must match Git source URIs which are conformant to RFC3986 . Do not include a username (or password) component in a URI pattern. For example, if you use ssh://[email protected]:7999/ATLASSIAN jira.git for a git repository URL, the source secret must be specified as ssh://bitbucket.atlassian.com:7999/* (and not ssh://[email protected]:7999/* ). USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*' Procedure If multiple secrets match the Git URI of a particular BuildConfig , OpenShift Container Platform selects the secret with the longest match. This allows for basic overriding, as in the following example. The following fragment shows two partial source clone secrets, the first matching any server in the domain mycorp.com accessed by HTTPS, and the second overriding access to servers mydev1.mycorp.com and mydev2.mycorp.com : kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: ... --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data: ... Add a build.openshift.io/source-secret-match-uri- annotation to a pre-existing secret using: USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*' 3.4.2.2. Manually adding a source clone secret Source clone secrets can be added manually to a build configuration by adding a sourceSecret field to the source section inside the BuildConfig and setting it to the name of the secret that you created. In this example, it is the basicsecret . apiVersion: "build.openshift.io/v1" kind: "BuildConfig" metadata: name: "sample-build" spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" source: git: uri: "https://github.com/user/app.git" sourceSecret: name: "basicsecret" strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "python-33-centos7:latest" Procedure You can also use the oc set build-secret command to set the source clone secret on an existing build configuration. To set the source clone secret on an existing build configuration, enter the following command: USD oc set build-secret --source bc/sample-build basicsecret 3.4.2.3. 
Creating a secret from a .gitconfig file If the cloning of your application is dependent on a .gitconfig file, then you can create a secret that contains it. Add it to the builder service account and then your BuildConfig . Procedure To create a secret from a .gitconfig file: USD oc create secret generic <secret_name> --from-file=<path/to/.gitconfig> Note SSL verification can be turned off if sslVerify=false is set for the http section in your .gitconfig file: [http] sslVerify=false 3.4.2.4. Creating a secret from a .gitconfig file for secured Git If your Git server is secured with two-way SSL and user name with password, you must add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Prerequisites You must have Git credentials. Procedure Add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Add the client.crt , cacert.crt , and client.key files to the /var/run/secrets/openshift.io/source/ folder in the application source code. In the .gitconfig file for the server, add the [http] section shown in the following example: # cat .gitconfig Example output [user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt Create the secret: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ 1 --from-literal=password=<password> \ 2 --from-file=.gitconfig=.gitconfig \ --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt \ --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt \ --from-file=client.key=/var/run/secrets/openshift.io/source/client.key 1 The user's Git user name. 2 The password for this user. Important To avoid having to enter your password again, be sure to specify the source-to-image (S2I) image in your builds. However, if you cannot clone the repository, you must still specify your user name and password to promote the build. Additional resources /var/run/secrets/openshift.io/source/ folder in the application source code. 3.4.2.5. Creating a secret from source code basic authentication Basic authentication requires either a combination of --username and --password , or a token to authenticate against the software configuration management (SCM) server. Prerequisites User name and password to access the private repository. Procedure Create the secret first before using the --username and --password to access the private repository: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --type=kubernetes.io/basic-auth Create a basic authentication secret with a token: USD oc create secret generic <secret_name> \ --from-literal=password=<token> \ --type=kubernetes.io/basic-auth 3.4.2.6. Creating a secret from source code SSH key authentication SSH key based authentication requires a private SSH key. The repository keys are usually located in the USDHOME/.ssh/ directory, and are named id_dsa.pub , id_ecdsa.pub , id_ed25519.pub , or id_rsa.pub by default. Procedure Generate SSH key credentials: USD ssh-keygen -t ed25519 -C "[email protected]" Note Creating a passphrase for the SSH key prevents OpenShift Container Platform from building. When prompted for a passphrase, leave it blank. 
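If you prefer to skip the interactive prompt entirely, you can supply an empty passphrase on the command line. This is a minimal sketch, assuming the standard OpenSSH client; the key pair is written to the default location under USDHOME/.ssh/:
USD ssh-keygen -t ed25519 -N '' -C "[email protected]"
The -N '' option sets an empty passphrase, which is what OpenShift Container Platform builds require.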
Two files are created: the public key and a corresponding private key (one of id_dsa , id_ecdsa , id_ed25519 , or id_rsa ). With both of these in place, consult your source control management (SCM) system's manual on how to upload the public key. The private key is used to access your private repository. Before using the SSH key to access the private repository, create the secret: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/known_hosts> \ 1 --type=kubernetes.io/ssh-auth 1 Optional: Adding this field enables strict server host key check. Warning Skipping the known_hosts file while creating the secret makes the build vulnerable to a potential man-in-the-middle (MITM) attack. Note Ensure that the known_hosts file includes an entry for the host of your source code. 3.4.2.7. Creating a secret from source code trusted certificate authorities The set of Transport Layer Security (TLS) certificate authorities (CA) that are trusted during a Git clone operation are built into the OpenShift Container Platform infrastructure images. If your Git server uses a self-signed certificate or one signed by an authority not trusted by the image, you can create a secret that contains the certificate or disable TLS verification. If you create a secret for the CA certificate, OpenShift Container Platform uses it to access your Git server during the Git clone operation. Using this method is significantly more secure than disabling Git SSL verification, which accepts any TLS certificate that is presented. Procedure Create a secret with a CA certificate file. If your CA uses Intermediate Certificate Authorities, combine the certificates for all CAs in a ca.crt file. Enter the following command: USD cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt Create the secret by entering the following command: USD oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1 1 You must use the key name ca.crt . 3.4.2.8. Source secret combinations You can combine the different methods for creating source clone secrets for your specific needs. 3.4.2.8.1. Creating a SSH-based authentication secret with a .gitconfig file You can combine the different methods for creating source clone secrets for your specific needs, such as a SSH-based authentication secret with a .gitconfig file. Prerequisites SSH authentication A .gitconfig file Procedure To create a SSH-based authentication secret with a .gitconfig file, enter the following command: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/.gitconfig> \ --type=kubernetes.io/ssh-auth 3.4.2.8.2. Creating a secret that combines a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a .gitconfig file and certificate authority (CA) certificate. Prerequisites A .gitconfig file CA certificate Procedure To create a secret that combines a .gitconfig file and CA certificate, enter the following command: USD oc create secret generic <secret_name> \ --from-file=ca.crt=<path/to/certificate> \ --from-file=<path/to/.gitconfig> 3.4.2.8.3. Creating a basic authentication secret with a CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and certificate authority (CA) certificate. 
Prerequisites Basic authentication credentials CA certificate Procedure To create a basic authentication secret with a CA certificate, enter the following command: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 3.4.2.8.4. Creating a basic authentication secret with a Git configuration file You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and a .gitconfig file. Prerequisites Basic authentication credentials A .gitconfig file Procedure To create a basic authentication secret with a .gitconfig file, enter the following command: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --type=kubernetes.io/basic-auth 3.4.2.8.5. Creating a basic authentication secret with a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication, .gitconfig file, and certificate authority (CA) certificate. Prerequisites Basic authentication credentials A .gitconfig file CA certificate Procedure To create a basic authentication secret with a .gitconfig file and CA certificate, enter the following command: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 3.5. Binary (local) source Streaming content from a local file system to the builder is called a Binary type build. The corresponding value of BuildConfig.spec.source.type is Binary for these builds. This source type is unique in that it is leveraged solely based on your use of the oc start-build . Note Binary type builds require content to be streamed from the local file system, so automatically triggering a binary type build, like an image change trigger, is not possible. This is because the binary files cannot be provided. Similarly, you cannot launch binary type builds from the web console. To utilize binary builds, invoke oc start-build with one of these options: --from-file : The contents of the file you specify are sent as a binary stream to the builder. You can also specify a URL to a file. Then, the builder stores the data in a file with the same name at the top of the build context. --from-dir and --from-repo : The contents are archived and sent as a binary stream to the builder. Then, the builder extracts the contents of the archive within the build context directory. With --from-dir , you can also specify a URL to an archive, which is extracted. --from-archive : The archive you specify is sent to the builder, where it is extracted within the build context directory. This option behaves the same as --from-dir ; an archive is created on your host first, whenever the argument to these options is a directory. In each of the previously listed cases: If your BuildConfig already has a Binary source type defined, it is effectively ignored and replaced by what the client sends. If your BuildConfig has a Git source type defined, it is dynamically disabled, since Binary and Git are mutually exclusive, and the data in the binary stream provided to the builder takes precedence. 
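For reference, the following invocations sketch how each option is typically used; the build configuration name sample-build and the file names are placeholders:
USD oc start-build sample-build --from-dir=. --follow
USD oc start-build sample-build --from-file=./app.jar
USD oc start-build sample-build --from-archive=./src.tar.gz
The first command archives the current directory and streams it to the builder, the second sends a single file that is stored at the top of the build context, and the third sends an archive that is extracted into the build context directory.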
Instead of a file name, you can pass a URL with HTTP or HTTPS schema to --from-file and --from-archive . When using --from-file with a URL, the name of the file in the builder image is determined by the Content-Disposition header sent by the web server, or the last component of the URL path if the header is not present. No form of authentication is supported and it is not possible to use custom TLS certificate or disable certificate validation. When using oc new-build --binary=true , the command ensures that the restrictions associated with binary builds are enforced. The resulting BuildConfig has a source type of Binary , meaning that the only valid way to run a build for this BuildConfig is to use oc start-build with one of the --from options to provide the requisite binary data. The Dockerfile and contextDir source options have special meaning with binary builds. Dockerfile can be used with any binary build source. If Dockerfile is used and the binary stream is an archive, its contents serve as a replacement Dockerfile to any Dockerfile in the archive. If Dockerfile is used with the --from-file argument, and the file argument is named Dockerfile, the value from Dockerfile replaces the value from the binary stream. In the case of the binary stream encapsulating extracted archive content, the value of the contextDir field is interpreted as a subdirectory within the archive, and, if valid, the builder changes into that subdirectory before executing the build. 3.6. Input secrets and config maps Important To prevent the contents of input secrets and config maps from appearing in build output container images, use build volumes in your Docker build and source-to-image build strategies. In some scenarios, build operations require credentials or other configuration data to access dependent resources, but it is undesirable for that information to be placed in source control. You can define input secrets and input config maps for this purpose. For example, when building a Java application with Maven, you can set up a private mirror of Maven Central or JCenter that is accessed by private keys. To download libraries from that private mirror, you have to supply the following: A settings.xml file configured with the mirror's URL and connection settings. A private key referenced in the settings file, such as ~/.ssh/id_rsa . For security reasons, you do not want to expose your credentials in the application image. This example describes a Java application, but you can use the same approach for adding SSL certificates into the /etc/ssl/certs directory, API keys or tokens, license files, and more. 3.6.1. What is a secret? The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. YAML Secret Object Definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary. 
3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry are then moved to the data map automatically. This field is write-only. The value is only be returned by the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. 3.6.1.1. Properties of secrets Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. 3.6.1.2. Types of Secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/service-account-token . Uses a service account token. kubernetes.io/dockercfg . Uses the .dockercfg file for required Docker credentials. kubernetes.io/dockerconfigjson . Uses the .docker/config.json file for required Docker credentials. kubernetes.io/basic-auth . Use with basic authentication. kubernetes.io/ssh-auth . Use with SSH key authentication. kubernetes.io/tls . Use with TLS certificate authorities. Specify type= Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret, allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. 3.6.1.3. Updates to secrets When you modify the value of a secret, the value used by an already running pod does not dynamically change. To change a secret, you must delete the original pod and create a new pod, in some cases with an identical PodSpec . Updating a secret follows the same workflow as deploying a new container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 3.6.2. Creating secrets You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file using a secret volume. 
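As an illustration of the second step, you can link the secret to the service account that your pods run under. This is a sketch only; default and test-secret are placeholder names, and the --for flag selects whether the secret is used for image pulls, volume mounts, or both:
USD oc secrets link default test-secret --for=pull,mount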
Procedure To create a secret object from a JSON or YAML file, enter the following command: USD oc create -f <filename> For example, you can create a secret from your local .docker/config.json file: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This command generates a JSON specification of the secret named dockerhub and creates the object. YAML Opaque Secret Object Definition apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. Docker Configuration JSON File Secret Object Definition apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a docker configuration JSON file. 2 The output of a base64-encoded docker configuration JSON file. 3.6.3. Using secrets After creating secrets, you can create a pod to reference your secret, get logs, and delete the pod. Procedure Create the pod to reference your secret by entering the following command: USD oc create -f <your_yaml_file>.yaml Get the logs by entering the following command: USD oc logs secret-example-pod Delete the pod by entering the following command: USD oc delete pod secret-example-pod Additional resources Example YAML files with secret data: YAML file of a secret that will create four files apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB 1 File contains decoded values. 2 File contains decoded values. 3 File contains the provided string. 4 File contains the provided data. YAML file of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never YAML file of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never YAML file of a BuildConfig object that populates environment variables with secret data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username 3.6.4. Adding input secrets and config maps To provide credentials and other configuration data to a build without placing them in source control, you can define input secrets and input config maps. In some scenarios, build operations require credentials or other configuration data to access dependent resources. To make that information available without placing it in source control, you can define input secrets and input config maps. 
Procedure To add an input secret, config maps, or both to an existing BuildConfig object: If the ConfigMap object does not exist, create it by entering the following command: USD oc create configmap settings-mvn \ --from-file=settings.xml=<path/to/settings.xml> This creates a new config map named settings-mvn , which contains the plain text content of the settings.xml file. Tip You can alternatively apply the following YAML to create the config map: apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings> If the Secret object does not exist, create it by entering the following command: USD oc create secret generic secret-mvn \ --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> \ --type=kubernetes.io/ssh-auth This creates a new secret named secret-mvn , which contains the base64 encoded content of the id_rsa private key. Tip You can alternatively apply the following YAML to create the input secret: apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded Add the config map and secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn To include the secret and config map in a new BuildConfig object, enter the following command: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn" \ --build-config-map "settings-mvn" During the build, the build process copies the settings.xml and id_rsa files into the directory where the source code is located. In OpenShift Container Platform S2I builder images, this is the image working directory, which is set using the WORKDIR instruction in the Dockerfile . If you want to specify another directory, add a destinationDir to the definition: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: ".m2" secrets: - secret: name: secret-mvn destinationDir: ".ssh" You can also specify the destination directory when creating a new BuildConfig object by entering the following command: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn:.ssh" \ --build-config-map "settings-mvn:.m2" In both cases, the settings.xml file is added to the ./.m2 directory of the build environment, and the id_rsa key is added to the ./.ssh directory. 3.6.5. Source-to-image strategy When using a Source strategy, all defined input secrets are copied to their respective destinationDir . If you left destinationDir empty, then the secrets are placed in the working directory of the builder image. The same rule is used when a destinationDir is a relative path. The secrets are placed in the paths that are relative to the working directory of the image. The final directory in the destinationDir path is created if it does not exist in the builder image. All preceding directories in the destinationDir must exist, or an error will occur. Note Input secrets are added as world-writable, have 0666 permissions, and are truncated to size zero after executing the assemble script. This means that the secret files exist in the resulting image, but they are empty for security reasons. 
Input config maps are not truncated after the assemble script completes. 3.6.6. Docker strategy When using a docker strategy, you can add all defined input secrets into your container image using the ADD and COPY instructions in your Dockerfile. If you do not specify the destinationDir for a secret, then the files are copied into the same directory in which the Dockerfile is located. If you specify a relative path as destinationDir , then the secrets are copied into that directory, relative to your Dockerfile location. This makes the secret files available to the Docker build operation as part of the context directory used during the build. Example of a Dockerfile referencing secret and config map data Important Users normally remove their input secrets from the final application image so that the secrets are not present in the container running from that image. However, the secrets still exist in the image itself in the layer where they were added. This removal is part of the Dockerfile itself. To prevent the contents of input secrets and config maps from appearing in the build output container images and avoid this removal process altogether, use build volumes in your Docker build strategy instead. 3.6.7. Custom strategy When using a Custom strategy, all the defined input secrets and config maps are available in the builder container in the /var/run/secrets/openshift.io/build directory. The custom build image must use these secrets and config maps appropriately. With the Custom strategy, you can define secrets as described in Custom strategy options. There is no technical difference between existing strategy secrets and the input secrets. However, your builder image can distinguish between them and use them differently, based on your build use case. The input secrets are always mounted into the /var/run/secrets/openshift.io/build directory, or your builder can parse the USDBUILD environment variable, which includes the full build object. Important If a pull secret for the registry exists in both the namespace and the node, builds default to using the pull secret in the namespace. 3.7. External artifacts It is not recommended to store binary files in a source repository. Therefore, you must define a build which pulls additional files, such as Java .jar dependencies, during the build process. How this is done depends on the build strategy you are using. For a Source build strategy, you must put appropriate shell commands into the assemble script: .s2i/bin/assemble File #!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar .s2i/bin/run File #!/bin/sh exec java -jar app.jar For a Docker build strategy, you must modify the Dockerfile and invoke shell commands with the RUN instruction : Excerpt of Dockerfile FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ "java", "-jar", "app.jar" ] In practice, you may want to use an environment variable for the file location so that the specific file to be downloaded can be customized using an environment variable defined on the BuildConfig , rather than updating the Dockerfile or assemble script. You can choose between different methods of defining environment variables: Using the .s2i/environment file (only for a Source build strategy) Setting the variables in the BuildConfig object Providing the variables explicitly using the oc start-build --env command (only for builds that are triggered manually) 3.8. 
Using docker credentials for private registries You can supply builds with a . docker/config.json file with valid credentials for private container registries. This allows you to push the output image into a private container image registry or pull a builder image from the private container image registry that requires authentication. You can supply credentials for multiple repositories within the same registry, each with credentials specific to that registry path. Note For the OpenShift Container Platform container image registry, this is not required because secrets are generated automatically for you by OpenShift Container Platform. The .docker/config.json file is found in your home directory by default and has the following format: auths: index.docker.io/v1/: 1 auth: "YWRfbGzhcGU6R2labnRib21ifTE=" 2 email: "[email protected]" 3 docker.io/my-namespace/my-user/my-image: 4 auth: "GzhYWRGU6R2fbclabnRgbkSp="" email: "[email protected]" docker.io/my-namespace: 5 auth: "GzhYWRGU6R2deesfrRgbkSp="" email: "[email protected]" 1 URL of the registry. 2 Encrypted password. 3 Email address for the login. 4 URL and credentials for a specific image in a namespace. 5 URL and credentials for a registry namespace. You can define multiple container image registries or define multiple repositories in the same registry. Alternatively, you can also add authentication entries to this file by running the docker login command. The file will be created if it does not exist. Kubernetes provides Secret objects, which can be used to store configuration and passwords. Prerequisites You must have a .docker/config.json file. Procedure Create the secret from your local .docker/config.json file by entering the following command: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This generates a JSON specification of the secret named dockerhub and creates the object. Add a pushSecret field into the output section of the BuildConfig and set it to the name of the secret that you created, which in the example is dockerhub : spec: output: to: kind: "DockerImage" name: "private.registry.com/org/private-image:latest" pushSecret: name: "dockerhub" You can use the oc set build-secret command to set the push secret on the build configuration: USD oc set build-secret --push bc/sample-build dockerhub You can also link the push secret to the service account used by the build instead of specifying the pushSecret field. By default, builds use the builder service account. The push secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's output image. USD oc secrets link builder dockerhub Pull the builder container image from a private container image registry by specifying the pullSecret field, which is part of the build strategy definition: strategy: sourceStrategy: from: kind: "DockerImage" name: "docker.io/user/private_repository" pullSecret: name: "dockerhub" You can use the oc set build-secret command to set the pull secret on the build configuration: USD oc set build-secret --pull bc/sample-build dockerhub Note This example uses pullSecret in a Source build, but it is also applicable in Docker and Custom builds. You can also link the pull secret to the service account used by the build instead of specifying the pullSecret field. By default, builds use the builder service account. 
The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's input image. To link the pull secret to the service account used by the build instead of specifying the pullSecret field, enter the following command: USD oc secrets link builder dockerhub Note You must specify a from image in the BuildConfig spec to take advantage of this feature. Docker strategy builds generated by oc new-build or oc new-app may not do this in some situations. 3.9. Build environments As with pod environment variables, build environment variables can be defined in terms of references to other resources or variables using the Downward API. There are some exceptions, which are noted. You can also manage environment variables defined in the BuildConfig with the oc set env command. Note Referencing container resources using valueFrom in build environment variables is not supported as the references are resolved before the container is created. 3.9.1. Using build fields as environment variables You can inject information about the build object by setting the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value. Note Jenkins Pipeline strategy does not support valueFrom syntax for environment variables. Procedure Set the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value: env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name 3.9.2. Using secrets as environment variables You can make key values from secrets available as environment variables using the valueFrom syntax. Important This method shows the secrets as plain text in the output of the build pod console. To avoid this, use input secrets and config maps instead. Procedure To use a secret as an environment variable, set the valueFrom syntax: apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret Additional resources Input secrets and config maps 3.10. Service serving certificate secrets Service serving certificate secrets are intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Procedure To secure communication to your service, have the cluster generate a signed serving certificate/key pair into a secret in your namespace. Set the service.beta.openshift.io/serving-cert-secret-name annotation on your service with the value set to the name you want to use for your secret. Then, your PodSpec can mount that secret. When it is available, your pod runs. The certificate is good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. 
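As a sketch of the annotation step described above, with placeholder names for the service and secret, you can annotate the service and then confirm that the generated secret exists:
USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-secret-name=<service_name>-tls
USD oc get secret <service_name>-tls
Once the secret appears, your PodSpec can mount it and consume tls.crt and tls.key as described above.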
Other pods can trust cluster-created certificates, which are only signed for internal DNS names, by using the certificate authority (CA) bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 3.11. Secrets restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers. As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. imagePullSecrets use service accounts for the automatic injection of the secret into all pods in a namespaces. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to an object of type Secret . Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that would exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. | [
"source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4",
"source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1",
"source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar",
"oc secrets link builder dockerhub",
"source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3",
"source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'",
"kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'",
"apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"",
"oc set build-secret --source bc/sample-build basicsecret",
"oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>",
"[http] sslVerify=false",
"cat .gitconfig",
"[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt",
"oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt --from-file=client.key=/var/run/secrets/openshift.io/source/client.key",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth",
"ssh-keygen -t ed25519 -C \"[email protected]\"",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth",
"cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt",
"oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth",
"oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"oc create -f <filename>",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <your_yaml_file>.yaml",
"oc logs secret-example-pod",
"oc delete pod secret-example-pod",
"apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username",
"oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>",
"apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>",
"oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth",
"apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"",
"FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]",
"#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar",
"#!/bin/sh exec java -jar app.jar",
"FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]",
"auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"",
"oc set build-secret --push bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"",
"oc set build-secret --pull bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/builds_using_buildconfig/creating-build-inputs |
Chapter 8. Customizing networks for the Red Hat OpenStack Platform environment You can customize the undercloud and overcloud physical networks for your Red Hat OpenStack Platform (RHOSP) environment. 8.1. Customizing undercloud networks You can customize the undercloud network configuration to install the undercloud with specific networking functionality. You can also configure the undercloud and the provisioning network to use IPv6 instead of IPv4 if you have IPv6 nodes and infrastructure. 8.1.1. Configuring undercloud network interfaces Include custom network configuration in the undercloud.conf file to install the undercloud with specific networking functionality. For example, some interfaces might not have DHCP. In this case, you must disable DHCP for these interfaces in the undercloud.conf file so that os-net-config can apply the configuration during the undercloud installation process. Procedure Log in to the undercloud host. Create a new file undercloud-os-net-config.yaml and include the network configuration that you require. In the addresses section, include the local_ip , such as 172.20.0.1/26 . If TLS is enabled in the undercloud, you must also include the undercloud_public_host , such as 172.20.0.2/32 , and the undercloud_admin_host , such as 172.20.0.3/32 . Here is an example: To create a network bond for a specific interface, use the following sample: Include the path to the undercloud-os-net-config.yaml file in the net_config_override parameter in the undercloud.conf file: Note Director uses the file that you include in the net_config_override parameter as the template to generate the /etc/os-net-config/config.yaml file. os-net-config manages the interfaces that you define in the template, so you must perform all undercloud network interface customization in this file. Install the undercloud. Verification After the undercloud installation completes successfully, verify that the /etc/os-net-config/config.yaml file contains the relevant configuration: 8.1.2. Configuring the undercloud for bare metal provisioning over IPv6 If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack Platform onto IPv6 nodes. However, there are some considerations: Dual stack IPv4/6 is not available. Tempest validations might not perform correctly. IPv4 to IPv6 migration is not available during upgrades. Modify the undercloud.conf file to enable IPv6 provisioning in Red Hat OpenStack Platform. Prerequisites An IPv6 address on the undercloud. For more information, see Configuring an IPv6 address on the undercloud in the IPv6 networking for the overcloud guide. Procedure Open your undercloud.conf file. Specify the IPv6 address mode as either stateless or stateful: Replace <address_mode> with dhcpv6-stateless or dhcpv6-stateful , based on the mode that your NIC supports. Note When you use the stateful address mode, the firmware, chain loaders, and operating systems might use different algorithms to generate an ID that the DHCP server tracks. DHCPv6 does not track addresses by MAC, and does not provide the same address back if the identifier value from the requester changes but the MAC address remains the same. Therefore, when you use stateful DHCPv6, you must also complete the following step to configure the network interface.
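As a minimal sketch only, the address mode is set in undercloud.conf ; the section placement and the stateless choice shown here are assumptions based on typical director configurations, so adjust them to match your environment:
[DEFAULT]
# IPv6 address mode for the provisioning network:
# dhcpv6-stateless or dhcpv6-stateful, depending on NIC support
ipv6_address_mode = dhcpv6-stateless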
If you configured your undercloud to use stateful DHCPv6, specify the network interface to use for bare metal nodes: Set the default network interface for bare metal nodes: Specify whether or not the undercloud should create a router on the provisioning network: Replace <true/false> with true to enable routed networks and prevent the undercloud from creating a router on the provisioning network. Set it to true if an external data center router is attached to the provisioning network. When set to true , the data center router must provide router advertisements. Also, the M and O flag settings of the data center router must be consistent with the ipv6_address_mode setting. Replace <true/false> with false to disable routed networks and create a router on the provisioning network. Set it to false if no external data center routers are attached to the provisioning network. Configure the local IP address, and the IP address for the director Admin API and Public API endpoints over SSL/TLS: Replace <ipv6_address> with the IPv6 address of the undercloud. Optional: Configure the provisioning network that director uses to manage instances: Replace <ipv6_address> with the IPv6 address of the network to use for managing instances when not using the default provisioning network. Replace <ipv6_prefix> with the IP address prefix of the network to use for managing instances when not using the default provisioning network. Configure the DHCP allocation range for provisioning nodes: Replace <ipv6_address_dhcp_start> with the IPv6 address of the start of the network range to use for the overcloud nodes. Replace <ipv6_address_dhcp_end> with the IPv6 address of the end of the network range to use for the overcloud nodes. Optional: Configure the gateway for forwarding traffic to the external network: Replace <ipv6_gateway_address> with the IPv6 address of the gateway when not using the default gateway. Configure the DHCP range to use during the inspection process: Replace <ipv6_address_inspection_start> with the IPv6 address of the start of the network range to use during the inspection process. Replace <ipv6_address_inspection_end> with the IPv6 address of the end of the network range to use during the inspection process. Note This range must not overlap with the range defined by dhcp_start and dhcp_end , but must be in the same IP subnet. Configure an IPv6 nameserver for the subnet: Replace <ipv6_dns> with the DNS nameservers specific to the subnet. Use the virt-customize tool to modify the overcloud image to disable the cloud-init network configuration. For more information, see the Red Hat Knowledgebase solution Modifying the Red Hat Linux OpenStack Platform Overcloud Image with virt-customize . 8.2. Customizing overcloud networks You can customize the configuration of the physical network for your overcloud. For example, you can create configuration files for the network interface controllers (NICs) by using the NIC template file in Jinja2 ansible format, j2 . 8.2.1. Defining custom network interface templates You can create a set of custom network interface templates to define the NIC layout for each node in your overcloud environment. The overcloud core template collection contains a set of default NIC layouts for different use cases. You can create a custom NIC template by using a Jinja2 format file with a .j2.yaml extension. Director converts the Jinja2 files to YAML format during deployment.
You can then set the network_config property in the overcloud-baremetal-deploy.yaml node definition file to your custom NIC template to provision the networks for a specific node. For more information, see Provisioning bare metal nodes for the overcloud . 8.2.1.1. Creating a custom NIC template Create a NIC template to customise the NIC layout for each node in your overcloud environment. Procedure Copy the sample network configuration template you require from /usr/share/ansible/roles/tripleo_network_config/templates/ to your environment file directory: Replace <sample_NIC_template> with the name of the sample NIC template that you want to copy, for example, single_nic_vlans/single_nic_vlans.j2 . Replace <NIC_template> with the name of your custom NIC template file, for example, single_nic_vlans.j2 . Update the network configuration in your custom NIC template to match the requirements for your overcloud network environment. For information about the properties you can use to configure your NIC template, see Network interface configuration options . For an example NIC template, see Example custom network interfaces . Create or update an existing environment file to enable your custom NIC configuration templates: If your overcloud uses the default internal load balancing, add the following configuration to your environment file to assign predictable virtual IPs for Redis and OVNDBs: Replace <vip_address> with an IP address from outside the allocation pool ranges. 8.2.1.2. Network interface configuration options Use the following tables to understand the available options for configuring network interfaces. interface Defines a single network interface. The network interface name uses either the actual interface name ( eth0 , eth1 , enp0s25 ) or a set of numbered interfaces ( nic1 , nic2 , nic3 ). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces such as nic1 and nic2 , instead of named interfaces such as eth0 and eno2 . For example, one host might have interfaces em1 and em2 , while another has eno1 and eno2 , but you can refer to the NICs of both hosts as nic1 and nic2 . The order of numbered interfaces corresponds to the order of named network interface types: ethX interfaces, such as eth0 , eth1 , etc. These are usually onboard interfaces. enoX interfaces, such as eno0 , eno1 , etc. These are usually onboard interfaces. enX interfaces, sorted alpha numerically, such as enp3s0 , enp3s1 , ens3 , etc. These are usually add-on interfaces. The numbered NIC scheme includes only live interfaces, for example, if the interfaces have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host. Table 8.1. interface options Option Default Description name Name of the interface. The network interface name uses either the actual interface name ( eth0 , eth1 , enp0s25 ) or a set of numbered interfaces ( nic1 , nic2 , nic3 ). use_dhcp False Use DHCP to get an IP address. use_dhcpv6 False Use DHCP to get a v6 IP address. addresses A list of IP addresses assigned to the interface. routes A list of routes assigned to the interface. For more information, see routes . mtu 1500 The maximum transmission unit (MTU) of the connection. primary False Defines the interface as the primary interface. persist_mapping False Write the device alias configuration instead of the system names. 
dhclient_args None Arguments that you want to pass to the DHCP client. dns_servers None List of DNS servers that you want to use for the interface. ethtool_opts Set this option to "rx-flow-hash udp4 sdfn" to improve throughput when you use VXLAN on certain NICs. vlan Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section. For example: Table 8.2. vlan options Option Default Description vlan_id The VLAN ID. device The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device. use_dhcp False Use DHCP to get an IP address. use_dhcpv6 False Use DHCP to get a v6 IP address. addresses A list of IP addresses assigned to the VLAN. routes A list of routes assigned to the VLAN. For more information, see routes . mtu 1500 The maximum transmission unit (MTU) of the connection. primary False Defines the VLAN as the primary interface. persist_mapping False Write the device alias configuration instead of the system names. dhclient_args None Arguments that you want to pass to the DHCP client. dns_servers None List of DNS servers that you want to use for the VLAN. ovs_bond Defines a bond in Open vSwitch to join two or more interfaces together. This helps with redundancy and increases bandwidth. For example: Table 8.3. ovs_bond options Option Default Description name Name of the bond. use_dhcp False Use DHCP to get an IP address. use_dhcpv6 False Use DHCP to get a v6 IP address. addresses A list of IP addresses assigned to the bond. routes A list of routes assigned to the bond. For more information, see routes . mtu 1500 The maximum transmission unit (MTU) of the connection. primary False Defines the interface as the primary interface. members A sequence of interface objects that you want to use in the bond. ovs_options A set of options to pass to OVS when creating the bond. ovs_extra A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond. defroute True Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6 . persist_mapping False Write the device alias configuration instead of the system names. dhclient_args None Arguments that you want to pass to the DHCP client. dns_servers None List of DNS servers that you want to use for the bond. ovs_bridge Defines a bridge in Open vSwitch, which connects multiple interface , ovs_bond , and vlan objects together. The network interface type, ovs_bridge , takes a parameter name . Note If you have multiple bridges, you must use distinct bridge names other than accepting the default name of bridge_name . If you do not use distinct names, then during the converge phase, two network bonds are placed on the same bridge. If you are defining an OVS bridge for the external tripleo network, then retain the values bridge_name and interface_name as your deployment framework automatically replaces these values with an external bridge name and an external interface name, respectively. For example: Note The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If the OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, then connectivity to the neutron server is lost whenever you upgrade OVS, or the OVS bridge is restarted by the admin user or process. This causes some downtime. 
If downtime is not acceptable in these circumstances, then you must place the Control group networks on a separate interface or bond rather than on an OVS bridge: You can achieve a minimal setting when you put the Internal API network on a VLAN on the provisioning interface and the OVS bridge on a second interface. To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond (Linux bridge). If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs. Table 8.4. ovs_bridge options Option Default Description name Name of the bridge. use_dhcp False Use DHCP to get an IP address. use_dhcpv6 False Use DHCP to get a v6 IP address. addresses A list of IP addresses assigned to the bridge. routes A list of routes assigned to the bridge. For more information, see routes . mtu 1500 The maximum transmission unit (MTU) of the connection. members A sequence of interface, VLAN, and bond objects that you want to use in the bridge. ovs_options A set of options to pass to OVS when creating the bridge. ovs_extra A set of options to to set as the OVS_EXTRA parameter in the network configuration file of the bridge. defroute True Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6 . persist_mapping False Write the device alias configuration instead of the system names. dhclient_args None Arguments that you want to pass to the DHCP client. dns_servers None List of DNS servers that you want to use for the bridge. linux_bond Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options parameter. For example: Table 8.5. linux_bond options Option Default Description name Name of the bond. use_dhcp False Use DHCP to get an IP address. use_dhcpv6 False Use DHCP to get a v6 IP address. addresses A list of IP addresses assigned to the bond. routes A list of routes assigned to the bond. See routes . mtu 1500 The maximum transmission unit (MTU) of the connection. primary False Defines the interface as the primary interface. members A sequence of interface objects that you want to use in the bond. bonding_options A set of options when creating the bond. defroute True Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6 . persist_mapping False Write the device alias configuration instead of the system names. dhclient_args None Arguments that you want to pass to the DHCP client. dns_servers None List of DNS servers that you want to use for the bond. linux_bridge Defines a Linux bridge, which connects multiple interface , linux_bond , and vlan objects together. The external bridge also uses two special values for parameters: bridge_name , which is replaced with the external bridge name. interface_name , which is replaced with the external interface. For example: Table 8.6. linux_bridge options Option Default Description name Name of the bridge. use_dhcp False Use DHCP to get an IP address. use_dhcpv6 False Use DHCP to get a v6 IP address. addresses A list of IP addresses assigned to the bridge. routes A list of routes assigned to the bridge. For more information, see routes . mtu 1500 The maximum transmission unit (MTU) of the connection. members A sequence of interface, VLAN, and bond objects that you want to use in the bridge. 
defroute True Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6 . persist_mapping False Write the device alias configuration instead of the system names. dhclient_args None Arguments that you want to pass to the DHCP client. dns_servers None List of DNS servers that you want to use for the bridge. routes Defines a list of routes to apply to a network interface, VLAN, bridge, or bond. For example: Option Default Description ip_netmask None IP and netmask of the destination network. default False Sets this route to a default route. Equivalent to setting ip_netmask: 0.0.0.0/0 . next_hop None The IP address of the router used to reach the destination network. 8.2.1.3. Example custom network interfaces The following examples illustrate how to customize network interface templates. Separate control group and OVS bridge example The following example Controller node NIC template configures the control group separate from the OVS bridge. The template uses five network interfaces and assigns a number of tagged VLAN devices to the numbered interfaces. The template creates the OVS bridges on nic4 and nic5 . Multiple NICs example The following example uses a second NIC to connect to an infrastructure network with DHCP addresses and another NIC for the bond. 8.2.1.4. Customizing NIC mappings for pre-provisioned nodes If you are using pre-provisioned nodes, you can specify the os-net-config mappings for specific nodes by using one of the following methods: Configure the NetConfigDataLookup heat parameter in an environment file, and run the openstack overcloud node provision command without --network-config . Configure the net_config_data_lookup property in your node definition file, overcloud-baremetal-deploy.yaml , and run the openstack overcloud node provision command with --network-config . Note If you are not using pre-provisioned nodes, you must configure the NIC mappings in your node definition file. For more information on configuring the net_config_data_lookup property, see Bare-metal node provisioning attributes . You can assign aliases to the physical interfaces on each node to pre-determine which physical NIC maps to specific aliases, such as nic1 or nic2 , and you can map a MAC address to a specified alias. You can map specific nodes by using the MAC address or DMI keyword, or you can map a group of nodes by using a DMI keyword. The following examples configure three nodes and two node groups with aliases to the physical interfaces. The resulting configuration is applied by os-net-config . On each node, you can see the applied configuration in the interface_mapping section of the /etc/os-net-config/mapping.yaml file. Example 1: Configuring the NetConfigDataLookup parameter in os-net-config-mappings.yaml 1 Maps node1 to the specified MAC address, and assigns nic1 as the alias for the MAC address on this node. 2 Maps node3 to the node with the system UUID "A8C85861-1B16-4803-8689-AFC62984F8F6", and assigns nic1 as the alias for em3 interface on this node. 3 The dmiString parameter must be set to a valid string keyword. For a list of the valid string keywords, see the DMIDECODE(8) man page. 4 Maps all the nodes in nodegroup1 to nodes with the product name "PowerEdge R630", and assigns nic1 , nic2 , and nic3 as the alias for the named interfaces on these nodes. Note Normally, os-net-config registers only the interfaces that are already connected in an UP state. 
However, if you hardcode interfaces with a custom mapping file, the interface is registered even if it is in a DOWN state. Example 2: Configuring the net_config_data_lookup property in overcloud-baremetal-deploy.yaml - specific nodes Example 3: Configuring the net_config_data_lookup property in overcloud-baremetal-deploy.yaml - all nodes for a role 8.2.2. Composable networks You can create custom composable networks if you want to host specific network traffic on different networks. Director provides a default network topology with network isolation enabled. You can find this configuration in the /usr/share/openstack-tripleo-heat-templates/network-data-samples/default-network-isolation.yaml . The overcloud uses the following pre-defined set of network segments by default: Internal API Storage Storage management Tenant External You can use composable networks to add networks for various services. For example, if you have a network that is dedicated to NFS traffic, you can present it to multiple roles. Director supports the creation of custom networks during the deployment and update phases. You can use these additional networks for bare metal nodes, system management, or to create separate networks for different roles. You can also use them to create multiple sets of networks for split deployments where traffic is routed between networks. 8.2.2.1. Adding a composable network Use composable networks to add networks for various services. For example, if you have a network that is dedicated to storage backup traffic, you can present the network to multiple roles. Procedure List the available sample network configuration files: Copy the sample network configuration file you require from /usr/share/openstack-tripleo-heat-templates/network-data-samples to your environment file directory: Edit your network_data.yaml configuration file and add a section for your new network: Configure any other network attributes for your environment. For more information about the properties you can use to configure network attributes, see Network definition file configuration options . If you are deploying Red Hat Ceph Storage and using NFS, ensure that you include an isolated StorageNFS network. The following example is present in these files: /usr/share/openstack-tripleo-heat-templates/network-data-samples/ganesha.yaml /usr/share/openstack-tripleo-heat-templates/network-data-samples/ganesha-ipv6.yaml Customize these network settings, including the VLAN ID and the subnet ranges. If IPv4 or IPv6 is not necessary, you can omit the corresponding subnet: Example: This network will be shared by the overcloud deployment and a Networking service (neutron) provider network that is set up post-overcloud deployment for consumers like the Compute service (nova) VMs to use to mount shares. Leave a sizable range outside the allocation pool specified in this example for use in the allocation pool for the subnet definition of the overcloud Networking service StorageNFS provider network. When you add an extra composable network that contains a virtual IP, and want to map some API services to this network, use the CloudName{network.name} definition to set the DNS name for the API endpoint: Example: Copy the sample network VIP definition template you require from /usr/share/openstack-tripleo-heat-templates/network-data-samples to your environment file directory. The following example copies the vip-data-default-network-isolation.yaml to a local environment file named vip_data.yaml : Edit your vip_data.yaml configuration file. 
The virtual IP data is a list of virtual IP address definitions, each containing the name of the network where the IP address is allocated: Replace <vip_address> with the required virtual IP address. For more information about the properties you can use to configure network VIP attributes in your VIP definition file, see Network VIP attribute properties . Copy a sample network configuration template. Jinja2 templates are used to define NIC configuration templates. Browse the examples provided in the /usr/share/ansible/roles/tripleo_network_config/templates/ directory, if one of the examples matches your requirements, use it. If the examples do not match your requirements, copy a sample configuration file, and modify it for your needs: Edit your single_nic_vlans.j2 configuration file: Set the network_config template in the overcloud-baremetal-deploy.yaml configuration file: If you are provisioning a StorageNFS network for using a CephFS-NFS back end with the Shared File Systems service (manila), edit the Controller or ControllerStorageNfs sections instead of the network_config section because the StorageNFS network and its VIP are connected to the Controller nodes: Provision the overcloud networks. This action generates an output file which will be used an an environment file when deploying the overcloud: Replace <networks_definition_file> with the name of your networks definition file, for example, network_data.yaml or the name of your StorageNFS network file, for example, network_data_ganesha.yaml . Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-networks-deployed.yaml . Provision the network VIPs and generate the vip-deployed-environment.yaml file. You use this file when you deploy the overcloud: Replace <stack> with the name of the stack for which the network VIPs are provisioned. If not specified, the default is overcloud. Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-vip-deployed.yaml . 8.2.2.2. Including a composable network in a role You can assign composable networks to the overcloud roles defined in your environment. For example, you might include a custom StorageBackup network with your Ceph Storage nodes, or you might include a custom StorageNFS network for using CephFS-NFS with the Shared File Systems service (manila). If you used the ControllerStorageNfs role that is included by default in director, then a StorageNFS network is already added for you. Procedure If you do not already have a custom roles_data.yaml file, copy the default to your home directory: Edit the custom roles_data.yaml file. Include the network name in the networks list for the role that you want to add the network to. In this example, you add the StorageBackup network to the Ceph Storage role: In this example, you add the StorageNFS network to the Controller node: After you add custom networks to their respective roles, save the file. When you run the openstack overcloud deploy command, include the custom roles_data.yaml file using the -r option. Without the -r option, the deployment command uses the default set of roles with their respective assigned networks. 8.2.2.3. Assigning OpenStack services to composable networks Each OpenStack service is assigned to a default network type in the resource registry. 
These services are bound to IP addresses within the network type's assigned network. Although the OpenStack services are divided among these networks, the number of actual physical networks can differ as defined in the network environment file. You can reassign OpenStack services to different network types by defining a new network map in an environment file, for example, /home/stack/templates/service-reassignments.yaml . The ServiceNetMap parameter determines the network types that you want to use for each service. For example, you can reassign the Storage Management network services to the Storage Backup Network by modifying the highlighted sections: Changing these parameters to storage_backup places these services on the Storage Backup network instead of the Storage Management network. This means that you must define a set of parameter_defaults only for the Storage Backup network and not the Storage Management network. Director merges your custom ServiceNetMap parameter definitions into a pre-defined list of defaults that it obtains from ServiceNetMapDefaults and overrides the defaults. Director returns the full list, including customizations, to ServiceNetMap , which is used to configure network assignments for various services. For example, GaneshaNetwork is the default service network for the NFS Gateway for CephFS-NFS. This network defaults to storage_nfs while falling back to external or ctlplane networks. If you are using a different network instead of the default isolated StorageNFS network, you must update the default network by using a ServiceNetMap parameter definition. Example: Replace <manila_nfs_network> with the name of your custom network. Service mappings apply to networks that use vip: true in the network_data.yaml file for nodes that use Pacemaker. The overcloud load balancer redirects traffic from the VIPs to the specific service endpoints. Note You can find a full list of default services in the ServiceNetMapDefaults parameter in the /usr/share/openstack-tripleo-heat-templates/network/service_net_map.j2.yaml file. 8.2.2.4. Enabling custom composable networks Use one of the default NIC templates to enable custom composable networks. In this example, use the Single NIC with VLANs template, ( custom_single_nic_vlans ). Procedure Source the stackrc undercloud credentials file: Provision the overcloud networks: Provision the network VIPs: Provision the overcloud nodes: Construct your openstack overcloud deploy command, specifying the configuration files and templates in the required order, for example: This example command deploys the composable networks, including your additional custom networks, across nodes in your overcloud. 8.2.2.5. Renaming the default networks You can use the network_data.yaml file to modify the user-visible names of the default networks: InternalApi External Storage StorageMgmt Tenant To change these names, do not modify the name field. Instead, change the name_lower field to the new name for the network and update the ServiceNetMap with the new name. Procedure In your network_data.yaml file, enter new names in the name_lower parameter for each network that you want to rename: Include the default value of the name_lower parameter in the service_net_map_replace parameter: 8.2.3. Additional overcloud network configuration This chapter follows on from the concepts and procedures outlined in Section 8.2.1, "Defining custom network interface templates" and provides some additional information to help configure parts of your overcloud network. 8.2.3.1. 
Configuring routes and default routes You can set the default route of a host in one of two ways. If the interface uses DHCP and the DHCP server offers a gateway address, the system uses a default route for that gateway. Otherwise, you can set a default route on an interface with a static IP. Although the Linux kernel supports multiple default gateways, it uses only the gateway with the lowest metric. If there are multiple DHCP interfaces, this can result in an unpredictable default gateway. In this case, it is recommended to set defroute: false for interfaces other than the interface that uses the default route. For example, you might want a DHCP interface ( nic3 ) to be the default route. Use the following YAML snippet to disable the default route on another DHCP interface ( nic2 ): Note The defroute parameter applies only to routes obtained through DHCP. To set a static route on an interface with a static IP, specify a route to the subnet. For example, you can set a route to the 10.1.2.0/24 subnet through the gateway at 172.17.0.1 on the Internal API network: 8.2.3.2. Configuring policy-based routing To configure unlimited access from different networks on Controller nodes, configure policy-based routing. Policy-based routing uses route tables where, on a host with multiple interfaces, you can send traffic through a particular interface depending on the source address. You can route packets that come from different sources to different networks, even if the destinations are the same. For example, you can configure a route to send traffic to the Internal API network, based on the source address of the packet, even when the default route is for the External network. You can also define specific route rules for each interface. Red Hat OpenStack Platform uses the os-net-config tool to configure network properties for your overcloud nodes. The os-net-config tool manages the following network routing on Controller nodes: Routing tables in the /etc/iproute2/rt_tables file IPv4 rules in the /etc/sysconfig/network-scripts/rule-{ifname} file IPv6 rules in the /etc/sysconfig/network-scripts/rule6-{ifname} file Routing table specific routes in the /etc/sysconfig/network-scripts/route-{ifname} Prerequisites You have installed the undercloud successfully. For more information, see Installing director on the undercloud in the Installing and managing Red Hat OpenStack Platform with director guide. Procedure Create the interface entries in a custom NIC template from the /home/stack/templates/custom-nics directory, define a route for the interface, and define rules that are relevant to your deployment: Include your custom NIC configuration and network environment files in the deployment command, along with any other environment files relevant to your deployment: Verification Enter the following commands on a Controller node to verify that the routing configuration is functioning correctly: 8.2.3.3. Configuring jumbo frames The Maximum Transmission Unit (MTU) setting determines the maximum amount of data transmitted with a single Ethernet frame. Using a larger value results in less overhead because each frame adds data in the form of a header. The default value is 1500 and using a higher value requires the configuration of the switch port to support jumbo frames. Most switches support an MTU of at least 9000, but many are configured for 1500 by default. The MTU of a VLAN cannot exceed the MTU of the physical interface. Ensure that you include the MTU value on the bond or interface. 
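As an illustration only, an MTU of 9000 can be declared on both the bond and its member interfaces in a NIC template; the interface names and the 9000 value are examples, not values taken from this guide:
- type: linux_bond
  name: bond1
  mtu: 9000
  bonding_options: "mode=802.3ad"
  members:
  - type: interface
    name: nic3
    mtu: 9000
    primary: true
  - type: interface
    name: nic4
    mtu: 9000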
The Storage, Storage Management, Internal API, and Tenant networks can all benefit from jumbo frames. You can alter the value of the mtu in the jinja2 template or in the network_data.yaml file. If you set the value in the network_data.yaml file it is rendered during deployment. Warning Routers typically cannot forward jumbo frames across Layer 3 boundaries. To avoid connectivity issues, do not change the default MTU for the Provisioning interface, External interface, and any Floating IP interfaces. 1 mtu value updated directly in the jinja2 template. 2 mtu value is taken from the network_data.yaml file during deployment. 8.2.3.4. Configuring ML2/OVN northbound path MTU discovery for jumbo frame fragmentation If a VM on your internal network sends jumbo frames to an external network, and the maximum transmission unit (MTU) of the internal network exceeds the MTU of the external network, a northbound frame can easily exceed the capacity of the external network. ML2/OVS automatically handles this oversized packet issue, and ML2/OVN handles it automatically for TCP packets. But to ensure proper handling of oversized northbound UDP packets in a deployment that uses the ML2/OVN mechanism driver, you need to perform additional configuration steps. These steps configure ML2/OVN routers to return ICMP "fragmentation needed" packets to the sending VM, where the sending application can break the payload into smaller packets. Note In east/west traffic, a RHOSP ML2/OVN deployment does not support fragmentation of packets that are larger than the smallest MTU on the east/west path. For example: VM1 is on Network1 with an MTU of 1300. VM2 is on Network2 with an MTU of 1200. A ping in either direction between VM1 and VM2 with a size of 1171 or less succeeds. A ping with a size greater than 1171 results in 100 percent packet loss. With no identified customer requirements for this type of fragmentation, Red Hat has no plans to add support. Procedure Set the following value in the [ovn] section of ml2_conf.ini: 8.2.3.5. Configuring the native VLAN on a trunked interface If a trunked interface or bond has a network on the native VLAN, the IP addresses are assigned directly to the bridge and there is no VLAN interface. The following example configures a bonded interface where the External network is on the native VLAN: Note When you move the address or route statements onto the bridge, remove the corresponding VLAN interface from the bridge. Make the changes to all applicable roles. The External network is only on the controllers, so only the controller template requires a change. The Storage network is attached to all roles, so if the Storage network is on the default VLAN, all roles require modifications. 8.2.3.6. Increasing the maximum number of connections that netfilter tracks The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) uses netfilter connection tracking to build stateful firewalls and to provide network address translation (NAT) on virtual networks. There are some situations that can cause the kernel space to reach the maximum connection limit and result in errors such as nf_conntrack: table full, dropping packet. You can increase the limit for connection tracking (conntrack) and avoid these types of errors. You can increase the conntrack limit for one or more roles, or across all the nodes, in your RHOSP deployment. Prerequisites A successful RHOSP undercloud installation. Procedure Log in to the undercloud host as the stack user. 
Source the undercloud credentials file: Create a custom YAML environment file. Example Your environment file must contain the keywords parameter_defaults and ExtraSysctlSettings . Enter a new value for the maximum number of connections that netfilter can track in the variable, net.nf_conntrack_max . Example In this example, you can set the conntrack limit across all hosts in your RHOSP deployment: Use the <role>Parameter parameter to set the conntrack limit for a specific role: Replace <role> with the name of the role. For example, use ControllerParameters to set the conntrack limit for the Controller role, or ComputeParameters to set the conntrack limit for the Compute role. Replace <simultaneous_connections> with the quantity of simultaneous connections that you want to allow. Example In this example, you can set the conntrack limit for only the Controller role in your RHOSP deployment: Note The default value for net.nf_conntrack_max is 500000 connections. The maximum value is: 4294967295 . Run the deployment command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Example Additional resources Environment files Including environment files in overcloud creation 8.2.4. Network interface bonding You can use various bonding options in your custom network configuration. 8.2.4.1. Network interface bonding for overcloud nodes You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput. Red Hat OpenStack Platform supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds. Table 8.7. Supported interface bonding types Bond type Type value Allowed bridge types Allowed members OVS kernel bonds ovs_bond ovs_bridge interface OVS-DPDK bonds ovs_dpdk_bond ovs_user_bridge ovs_dpdk_port Linux kernel bonds linux_bond ovs_bridge or linux_bridge interface Important Do not combine ovs_bridge and ovs_user_bridge on the same node. 8.2.4.2. Creating Open vSwitch (OVS) bonds You create OVS bonds in your network interface templates. For example, you can create a bond as part of an OVS user space bridge: In this example, you create the bond from two DPDK ports. The ovs_options parameter contains the bonding options. You can configure a bonding options in a network environment file with the BondInterfaceOvsOptions parameter: 8.2.4.3. Open vSwitch (OVS) bonding options You can set various Open vSwitch (OVS) bonding options with the ovs_options heat parameter in your NIC template files. The active-backup, balance-tlb, balance-alb and balance-slb modes do not require any specific configuration of the switch. bond_mode=balance-slb Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the balance-slb bonding option, there is no configuration required on the remote switch. The Networking service (neutron) assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. A simple hashing algorithm based on source MAC address and VLAN number is used, with periodic rebalancing as traffic patterns change. 
The balance-slb mode is similar to mode 2 bonds used by the Linux bonding driver, although unlike mode 2, balance-slb does not require any specific configuration of the switch. You can use the balance-slb mode to provide load balancing even when the switch is not configured to use LACP. bond_mode=active-backup When you configure a bond using active-backup bond mode, the Networking service keeps one NIC in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing. lacp=[active | passive | off] Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup . other-config:lacp-fallback-ab=true Set active-backup as the bond mode if LACP fails. other_config:lacp-time=[fast | slow] Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow. other_config:bond-detect-mode=[miimon | carrier] Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier. other_config:bond-miimon-interval=100 If using miimon, set the heartbeat interval (milliseconds). bond_updelay=1000 Set the interval (milliseconds) that a link must be up before it is activated, to prevent flapping. other_config:bond-rebalance-interval=10000 Set the interval (milliseconds) at which flows are rebalanced between bond members. Set this value to zero to disable flow rebalancing between bond members. 8.2.4.4. Using Link Aggregation Control Protocol (LACP) with Open vSwitch (OVS) bonding modes You can use bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance. Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options. Important On control and storage networks, Red Hat recommends that you use Linux bonds with VLAN and LACP, because OVS bonds carry the potential for control plane disruption that can occur when OVS or the neutron agent is restarted for updates, hot fixes, and other events. The Linux bond/LACP/VLAN configuration provides NIC management without the OVS disruption potential. Table 8.8. LACP options for OVS kernel and OVS-DPDK bond modes Objective OVS bond mode Compatible LACP options Notes High availability (active-passive) active-backup active , passive , or off Increased throughput (active-active) balance-slb active , passive , or off Performance is affected by extra parsing per packet. There is a potential for vhost-user lock contention. balance-tcp active or passive As with balance-slb, performance is affected by extra parsing per packet and there is a potential for vhost-user lock contention. LACP must be configured and enabled. Set lb-output-action=true . For example: 8.2.4.5. Creating Linux bonds You create Linux bonds in your network interface templates. For example, you can create a Linux bond that bonds two interfaces: The bonding_options parameter sets the specific bonding options for the Linux bond. mode Sets the bonding mode, which in the example is 802.3ad or LACP mode.
For more information about Linux bonding modes, see "Upstream Switch Configuration Depending on the Bonding Modes" in the Red Hat Enterprise Linux 9 Configuring and Managing Networking guide. lacp_rate Defines whether LACP packets are sent every 1 second, or every 30 seconds. updelay Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages. miimon The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver. Use the following additional examples as guides to configure your own Linux bonds: Linux bond set to active-backup mode with one VLAN: Linux bond on OVS bridge. Bond set to 802.3ad LACP mode with one VLAN: Important You must set up min_viable_mtu_ctlplane before you can use it. Copy /usr/share/ansible/roles/tripleo_network_config/templates/2_linux_bonds_vlans.j2 to your templates directory and modify it for your needs. For more information, see Composable networks , and refer to the steps that pertain to the network configuration template. 8.2.5. Updating the format of your network configuration files The format of the network configuration yaml files has changed in Red Hat OpenStack Platform (RHOSP) 17.0. The structure of the network configuration file network_data.yaml has changed, and the NIC template file format has changed from yaml file format to Jinja2 ansible format, j2 . You can convert your existing network configuration file in your current deployment to the RHOSP 17+ format by using the following conversion tools: convert_v1_net_data.py convert_heat_nic_config_to_ansible_j2.py You can also manually convert your existing NIC template files. The files you need to convert include the following: network_data.yaml Controller NIC templates Compute NIC templates Any other custom network files 8.2.5.1. Updating the format of your network configuration file The format of the network configuration yaml file has changed in Red Hat OpenStack Platform (RHOSP) 17.0. You can convert your existing network configuration file in your current deployment to the RHOSP 17+ format by using the convert_v1_net_data.py conversion tool. Procedure Download the conversion tool: /usr/share/openstack-tripleo-heat-templates/tools/convert_v1_net_data.py Convert your RHOSP 16+ network configuration file to the RHOSP 17+ format: Replace <network_config> with the name of the existing configuration file that you want to convert, for example, network_data.yaml . 8.2.5.2. Automatically converting NIC templates to Jinja2 Ansible format The NIC template file format has changed from yaml file format to Jinja2 Ansible format, j2 , in Red Hat OpenStack Platform (RHOSP) 17.0. You can convert your existing NIC template files in your current deployment to the Jinja2 format by using the convert_heat_nic_config_to_ansible_j2.py conversion tool. You can also manually convert your existing NIC template files. For more information, see Manually converting NIC templates to Jinja2 Ansible format . The files you need to convert include the following: Controller NIC templates Compute NIC templates Any other custom network files Procedure Log in to the undercloud as the stack user. Source the stackrc file: Copy the conversion tool to your current directory on the undercloud: Convert your Compute and Controller NIC template files, and any other custom network files, to the Jinja2 Ansible format: Replace <overcloud> with the name or UUID of the overcloud stack. 
If --stack is not specified, the stack defaults to overcloud . Note You can use the --stack option only on your RHOSP 16 deployment because it requires the Orchestration service (heat) to be running on the undercloud node. Starting with RHOSP 17, RHOSP deployments use ephemeral heat, which runs the Orchestration service in a container. If the Orchestration service is not available, or you have no stack, then use the --standalone option instead of --stack . Replace <network_config.yaml> with the name of the configuration file that describes the network deployment, for example, network_data.yaml . Replace <network_template> with the name of the network configuration file you want to convert. Repeat this command until you have converted all your custom network configuration files. The convert_heat_nic_config_to_ansible_j2.py script generates a .j2 file for each yaml file you pass to it for conversion. Inspect each generated .j2 file to ensure the configuration is correct and complete for your environment, and manually address any comments generated by the tool that highlight where the configuration could not be converted. For more information about manually converting the NIC configuration to Jinja2 format, see Heat parameter to Ansible variable mappings . Configure the *NetworkConfigTemplate parameters in your network-environment.yaml file to point to the generated .j2 files: Delete the resource_registry mappings from your network-environment.yaml file for the old network configuration files: 8.2.5.3. Manually converting NIC templates to Jinja2 Ansible format The NIC template file format has changed from yaml file format to Jinja2 Ansible format, j2 , in Red Hat OpenStack Platform (RHOSP) 17.0. You can manually convert your existing NIC template files. You can also convert your existing NIC template files in your current deployment to the Jinja2 format by using the convert_heat_nic_config_to_ansible_j2.py conversion tool. For more information, see Automatically converting NIC templates to Jinja2 ansible format . The files you need to convert include the following: Controller NIC templates Compute NIC templates Any other custom network files Procedure Create a Jinja2 template. You can create a new template by copying an example template from the /usr/share/ansible/roles/tripleo_network_config/templates/ directory on the undercloud node. Replace the heat intrinsic functions with Jinja2 filters. For example, use the following filter to calculate the min_viable_mtu : Use Ansible variables to configure the network properties for your deployment. You can configure each individual network manually, or programatically configure each network by iterating over role_networks : To manually configure each network, replace each get_param function with the equivalent Ansible variable. For example, if your current deployment configures vlan_id by using get_param: InternalApiNetworkVlanID , then add the following configuration to your template: Table 8.9. Example network property mapping from heat parameters to Ansible vars yaml file format Jinja2 ansible format, j2 To programatically configure each network, add a Jinja2 for-loop structure to your template that retrieves the available networks by their role name by using role_networks . Example For a full list of the mappings from the heat parameter to the Ansible vars equivalent, see Heat parameter to Ansible variable mappings . 
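A minimal sketch of such a loop is shown below. It uses the lookup pattern described in the variable-mapping section later in this chapter; the exact set of per-network properties depends on your template, and the _cidr suffix shown here is an assumption, so treat this as an outline rather than a complete template:
{% for network in role_networks %}
- type: vlan
  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
  addresses:
  - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
{% endfor %}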
Configure the *NetworkConfigTemplate parameters in your network-environment.yaml file to point to the generated .j2 files: Delete the resource_registry mappings from your network-environment.yaml file for the old network configuration files: 8.2.5.4. Heat parameter to Ansible variable mappings The NIC template file format has changed from yaml file format to Jinja2 ansible format, j2 , in Red Hat OpenStack Platform (RHOSP) 17.x. To manually convert your existing NIC template files to Jinja2 ansible format, you can map your heat parameters to Ansible variables to configure the network properties for pre-provisioned nodes in your deployment. You can also map your heat parameters to Ansible variables if you run openstack overcloud node provision without specifying the --network-config optional argument. For example, if your current deployment configures vlan_id by using get_param: InternalApiNetworkVlanID , then replace it with the following configuration in your new Jinja2 template: Note If you provision your nodes by running openstack overcloud node provision with the --network-config optional argument, you must configure the network properties for your deploying by using the parameters in overcloud-baremetal-deploy.yaml . For more information, see Heat parameter to provisioning definition file mappings . The following table lists the available mappings from the heat parameter to the Ansible vars equivalent. Table 8.10. Mappings from heat parameters to Ansible vars Heat parameter Ansible vars BondInterfaceOvsOptions {{ bond_interface_ovs_options }} ControlPlaneIp {{ ctlplane_ip }} ControlPlaneDefaultRoute {{ ctlplane_gateway_ip }} ControlPlaneMtu {{ ctlplane_mtu }} ControlPlaneStaticRoutes {{ ctlplane_host_routes }} ControlPlaneSubnetCidr {{ ctlplane_subnet_cidr }} DnsSearchDomains {{ dns_search_domains }} DnsServers {{ ctlplane_dns_nameservers }} Note This Ansible variable is populated with the IP address configured in undercloud.conf for DEFAULT/undercloud_nameservers and %SUBNET_SECTION%/dns_nameservers . The configuration of %SUBNET_SECTION%/dns_nameservers overrides the configuration of DEFAULT/undercloud_nameservers , so that you can use different DNS servers for the undercloud and the overcloud, and different DNS servers for nodes on different provisioning subnets. NumDpdkInterfaceRxQueues {{ num_dpdk_interface_rx_queues }} Configuring a heat parameter that is not listed in the table To configure a heat parameter that is not listed in the table, you must configure the parameter as a {{role.name}}ExtraGroupVars . After you have configured the parameter as a {{role.name}}ExtraGroupVars parameter, you can then use it in your new template. For example, to configure the StorageSupernet parameter, add the following configuration to your network configuration file: You can then add {{ storage_supernet }} to your Jinja2 template. Warning This process will not work if the --network-config option is used with node provisioning. Users requiring custom vars should not use the --network-config option. Instead, after creating the Heat stack, apply the node network configuration to the config-download ansible run. Converting the Ansible variable syntax to programmatically configure each network When you use a Jinja2 for-loop structure to retrieve the available networks by their role name by iterating over role_networks , you need to retrieve the lower case name for each network role to prepend to each property. 
Use the following structure to convert the Ansible vars from the above table to the required syntax: {{ lookup('vars', networks_lower[network] ~ '_<property>') }} Replace <property> with the property that you are setting, for example, ip , vlan_id , or mtu . For example, to populate the value for each NetworkVlanID dynamically, replace {{ <network_name>_vlan_id }} with the following configuration: 8.2.5.5. Heat parameter to provisioning definition file mappings If you provision your nodes by running the openstack overcloud node provision command with the --network-config optional argument, you must configure the network properties for your deployment by using the parameters in the node definition file overcloud-baremetal-deploy.yaml . If your deployment uses pre-provisioned nodes, you can map your heat parameters to Ansible variables to configure the network properties. You can also map your heat parameters to Ansible variables if you run openstack overcloud node provision without specifying the --network-config optional argument. For more information about configuring network properties by using Ansible variables, see Heat parameter to Ansible variable mappings . The following table lists the available mappings from the heat parameter to the network_config property equivalent in the node definition file overcloud-baremetal-deploy.yaml . Table 8.11. Mappings from heat parameters to node definition file overcloud-baremetal-deploy.yaml Heat parameter network_config property BondInterfaceOvsOptions bond_interface_ovs_options DnsSearchDomains dns_search_domains NetConfigDataLookup net_config_data_lookup NeutronPhysicalBridge physical_bridge_name NeutronPublicInterface public_interface_name NumDpdkInterfaceRxQueues num_dpdk_interface_rx_queues {{role.name}}NetworkConfigUpdate network_config_update The following table lists the available mappings from the heat parameter to the property equivalent in the networks definition file network_data.yaml . Table 8.12. Mappings from heat parameters to networks definition file network_data.yaml Heat parameter IPv4 network_data.yaml property IPv6 network_data.yaml property <network_name>IpSubnet <network_name>NetworkVlanID <network_name>Mtu <network_name>InterfaceDefaultRoute <network_name>InterfaceRoutes 8.2.5.6. Changes to the network data schema The network data schema was updated in Red Hat OpenStack Platform (RHOSP) 17. The main differences between the network data schema used in RHOSP 16 and earlier, and network data schema used in RHOSP 17 and later, are as follows: The base subnet has been moved to the subnets map. This aligns the configuration for non-routed and routed deployments, such as spine-leaf networking. The enabled option is no longer used to ignore disabled networks. Instead, you must remove disabled networks from the configuration file. The compat_name option is no longer required as the heat resource that used it has been removed. The following parameters are no longer valid at the network level: ip_subnet , gateway_ip , allocation_pools , routes , ipv6_subnet , gateway_ipv6 , ipv6_allocation_pools , and routes_ipv6 . These parameters are still used at the subnet level. A new parameter, physical_network , has been introduced, that is used to create ironic ports in metalsmith . New parameters network_type and segmentation_id replace {{network.name}}NetValueSpecs used to set the network type to vlan . 
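As a rough before-and-after sketch of the first of these changes, with illustrative network names and addresses only: a network that older templates defined with ip_subnet and allocation_pools directly at the network level now nests those keys under a named subnet in the subnets map.

RHOSP 16 and earlier:

    - name: Storage
      vip: true
      name_lower: storage
      ip_subnet: 172.16.1.0/24
      allocation_pools:
        - start: 172.16.1.4
          end: 172.16.1.250

RHOSP 17 and later:

    - name: Storage
      vip: true
      name_lower: storage
      subnets:
        storage_subnet:
          ip_subnet: 172.16.1.0/24
          allocation_pools:
            - start: 172.16.1.4
              end: 172.16.1.250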
The following parameters have been deprecated in RHOSP 17: {{network.name}}NetCidr {{network.name}}SubnetName {{network.name}}Network {{network.name}}AllocationPools {{network.name}}Routes {{network.name}}SubnetCidr_{{subnet}} {{network.name}}AllocationPools_{{subnet}} {{network.name}}Routes_{{subnet}} | [
"network_config: - name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: - 192.168.122.1 domain: lab.example.com ovs_extra: - \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: - ip_netmask: 172.20.0.1/26 - ip_netmask: 172.20.0.2/32 - ip_netmask: 172.20.0.3/32 members: - type: interface name: nic2",
"network_config: - name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: - 192.168.122.1 domain: lab.example.com ovs_extra: - \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: - ip_netmask: 172.20.0.1/26 - ip_netmask: 172.20.0.2/32 - ip_netmask: 172.20.0.3/32 members: - name: bond-ctlplane type: linux_bond use_dhcp: false bonding_options: \"mode=active-backup\" mtu: 1500 members: - type: interface name: nic2 - type: interface name: nic3",
"[DEFAULT] net_config_override=undercloud-os-net-config.yaml",
"network_config: - name: br-ctlplane type: ovs_bridge use_dhcp: false dns_servers: - 192.168.122.1 domain: lab.example.com ovs_extra: - \"br-set-external-id br-ctlplane bridge-id br-ctlplane\" addresses: - ip_netmask: 172.20.0.1/26 - ip_netmask: 172.20.0.2/32 - ip_netmask: 172.20.0.3/32 members: - type: interface name: nic2",
"[DEFAULT] ipv6_address_mode = <address_mode>",
"[DEFAULT] ipv6_address_mode = dhcpv6-stateful ironic_enabled_network_interfaces = neutron,flat",
"[DEFAULT] ironic_default_network_interface = neutron",
"[DEFAULT] enable_routed_networks: <true/false>",
"[DEFAULT] local_ip = <ipv6_address> undercloud_admin_host = <ipv6_address> undercloud_public_host = <ipv6_address>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end> gateway = <ipv6_gateway_address>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end> gateway = <ipv6_gateway_address> inspection_iprange = <ipv6_address_inspection_start>,<ipv6_address_inspection_end>",
"[ctlplane-subnet] cidr = <ipv6_address>/<ipv6_prefix> dhcp_start = <ipv6_address_dhcp_start> dhcp_end = <ipv6_address_dhcp_end> gateway = <ipv6_gateway_address> inspection_iprange = <ipv6_address_inspection_start>,<ipv6_address_inspection_end> dns_nameservers = <ipv6_dns>",
"cp /usr/share/ansible/roles/tripleo_network_config/templates/<sample_NIC_template> /home/stack/templates/<NIC_template>",
"parameter_defaults: ControllerNetworkConfigTemplate: '/home/stack/templates/single_nic_vlans.j2' CephStorageNetworkConfigTemplate: '/home/stack/templates/single_nic_vlans_storage.j2' ComputeNetworkConfigTemplate: '/home/stack/templates/single_nic_vlans.j2'",
"parameter_defaults: RedisVirtualFixedIPs: [{'ip_address':'<vip_address>'}] OVNDBsVirtualFixedIPs: [{'ip_address':'<vip_address>'}]",
"- type: interface name: nic2",
"- type: vlan device: nic{{ loop.index + 1 }} mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}",
"members: - type: ovs_bond name: bond1 mtu: {{ min_viable_mtu }} ovs_options: {{ bond_interface_ovs_options }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu }}",
"- type: ovs_bridge name: br-bond dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} members: - type: ovs_bond name: bond1 mtu: {{ min_viable_mtu }} ovs_options: {{ bound_interface_ovs_options }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu }}",
"- type: linux_bond name: bond1 mtu: {{ min_viable_mtu }} bonding_options: \"mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4\" members: type: interface name: ens1f0 mtu: {{ min_viable_mtu }} primary: true type: interface name: ens1f1 mtu: {{ min_viable_mtu }}",
"- type: linux_bridge name: bridge_name mtu: get_attr: [MinViableMtu, value] use_dhcp: false dns_servers: get_param: DnsServers domain: get_param: DnsSearchDomains addresses: - ip_netmask: list_join: - / - - get_param: ControlPlaneIp - get_param: ControlPlaneSubnetCidr routes: list_concat_unique: - get_param: ControlPlaneStaticRoutes",
"- type: linux_bridge name: bridge_name routes: {{ [ctlplane_host_routes] | flatten | unique }}",
"network_config: - type: interface name: nic1 mtu: {{ ctlplane_mtu }} use_dhcp: false addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: {{ ctlplane_host_routes }} - type: linux_bond name: bond_api mtu: {{ min_viable_mtu_ctlplane }} use_dhcp: false bonding_options: {{ bond_interface_ovs_options }} dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu_ctlplane }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu_ctlplane }} {% for network in role_networks if not network.startswith('Tenant') %} - type: vlan device: bond_api mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} - type: ovs_bridge name: {{ neutron_physical_bridge_name }} dns_servers: {{ ctlplane_dns_nameservers }} members: - type: linux_bond name: bond-data mtu: {{ min_viable_mtu_dataplane }} bonding_options: {{ bond_interface_ovs_options }} members: - type: interface name: nic4 mtu: {{ min_viable_mtu_dataplane }} primary: true - type: interface name: nic5 mtu: {{ min_viable_mtu_dataplane }} {% for network in role_networks if network.startswith('Tenant') %} - type: vlan device: bond-data mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}",
"network_config: # Add a DHCP infrastructure network to nic2 - type: interface name: nic2 mtu: {{ tenant_mtu }} use_dhcp: true primary: true - type: vlan mtu: {{ tenant_mtu }} vlan_id: {{ tenant_vlan_id }} addresses: - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }} routes: {{ [tenant_host_routes] | flatten | unique }} - type: ovs_bridge name: br-bond mtu: {{ external_mtu }} dns_servers: {{ ctlplane_dns_nameservers }} use_dhcp: false members: - type: interface name: nic10 mtu: {{ external_mtu }} use_dhcp: false primary: true - type: vlan mtu: {{ external_mtu }} vlan_id: {{ external_vlan_id }} addresses: - ip_netmask: {{ external_ip }}/{{ external_cidr }} routes: {{ [external_host_routes, [{'default': True, 'next_hop': external_gateway_ip}]] | flatten | unique }}",
"NetConfigDataLookup: node1: 1 nic1: \"00:c8:7c:e6:f0:2e\" node2: nic1: \"00:18:7d:99:0c:b6\" node3: 2 dmiString: \"system-uuid\" 3 id: 'A8C85861-1B16-4803-8689-AFC62984F8F6' nic1: em3 # Dell PowerEdge nodegroup1: 4 dmiString: \"system-product-name\" id: \"PowerEdge R630\" nic1: em3 nic2: em1 nic3: em2 # Cisco UCS B200-M4\" nodegroup2: dmiString: \"system-product-name\" id: \"UCSB-B200-M4\" nic1: enp7s0 nic2: enp6s0",
"- name: Controller count: 3 defaults: network_config: net_config_data_lookup: node1: nic1: \"00:c8:7c:e6:f0:2e\" node2: nic1: \"00:18:7d:99:0c:b6\" node3: dmiString: \"system-uuid\" id: 'A8C85861-1B16-4803-8689-AFC62984F8F6' nic1: em3 # Dell PowerEdge nodegroup1: dmiString: \"system-product-name\" id: \"PowerEdge R630\" nic1: em3 nic2: em1 nic3: em2 # Cisco UCS B200-M4\" nodegroup2: dmiString: \"system-product-name\" id: \"UCSB-B200-M4\" nic1: enp7s0 nic2: enp6s0",
"- name: Controller count: 3 defaults: network_config: template: templates/net_config_bridge.j2 default_route_network: - external instances: - hostname: overcloud-controller-0 network_config: <name/groupname>: nic1: 'XX:XX:XX:XX:XX:XX' nic2: 'YY:YY:YY:YY:YY:YY' nic3: 'ens1f0'",
"ll /usr/share/openstack-tripleo-heat-templates/network-data-samples/ -rw-r--r--. 1 root root 1554 May 11 23:04 default-network-isolation-ipv6.yaml -rw-r--r--. 1 root root 1181 May 11 23:04 default-network-isolation.yaml -rw-r--r--. 1 root root 1126 May 11 23:04 ganesha-ipv6.yaml -rw-r--r--. 1 root root 1100 May 11 23:04 ganesha.yaml -rw-r--r--. 1 root root 3556 May 11 23:04 legacy-routed-networks-ipv6.yaml -rw-r--r--. 1 root root 2929 May 11 23:04 legacy-routed-networks.yaml -rw-r--r--. 1 root root 383 May 11 23:04 management-ipv6.yaml -rw-r--r--. 1 root root 290 May 11 23:04 management.yaml -rw-r--r--. 1 root root 136 May 11 23:04 no-networks.yaml -rw-r--r--. 1 root root 2725 May 11 23:04 routed-networks-ipv6.yaml -rw-r--r--. 1 root root 2033 May 11 23:04 routed-networks.yaml -rw-r--r--. 1 root root 943 May 11 23:04 vip-data-default-network-isolation.yaml -rw-r--r--. 1 root root 848 May 11 23:04 vip-data-fixed-ip.yaml -rw-r--r--. 1 root root 1050 May 11 23:04 vip-data-routed-networks.yaml",
"cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/default-network-isolation.yaml /home/stack/templates/network_data.yaml",
"- name: StorageBackup vip: false name_lower: storage_backup subnets: storage_backup_subnet: ip_subnet: 172.16.6.0/24 allocation_pools: - start: 172.16.6.4 - end: 172.16.6.250 gateway_ip: 172.16.6.1",
"- name: StorageNFS enabled: true vip: true name_lower: storage_nfs subnets: storage_nfs_subnet: vlan: 70 ip_subnet: 172.17.0.0/20 allocation_pools: - start: 172.17.0.4 - end: 172.17.0.250 storage_nfs_ipv6_subnet: ipv6_subnet: fd00:fd00:fd00:7000::/64 ipv6_allocation_pools: - start: fd00:fd00:fd00:7000::4 - end: fd00:fd00:fd00:7000::fffe",
"CloudName{{network.name}}",
"parameter_defaults: CloudNameOcProvisioning: baremetal-vip.example.com",
"cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/vip-data-default-network-isolation.yaml /home/stack/templates/vip_data.yaml",
"- network: storage_mgmt dns_name: overcloud - network: internal_api dns_name: overcloud - network: storage dns_name: overcloud - network: external dns_name: overcloud ip_address: <vip_address> - network: ctlplane dns_name: overcloud - network: storage_nfs dns_name: overcloud ip_address: <vip_address>",
"cp /usr/share/ansible/roles/tripleo_network_config/templates/single_nic_vlans/single_nic_vlans.j2 /home/stack/templates/",
"--- {% set mtu_list = [ctlplane_mtu] %} {% for network in role_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in role_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %}",
"- name: CephStorage count: 3 defaults: networks: - network: storage - network: storage_mgmt - network: storage_backup network_config: template: /home/stack/templates/single_nic_vlans.j2",
"- name: ControllerStorageNfs count: 3 hostname_format: controller-%index% instances: - hostname: controller-0 name: controller-0 - hostname: controller-1 name: controller-1 - hostname: controller-2 name: controller-2 defaults: profile: control network_config: template: /home/stack/templates/single_nic_vlans.j2 networks: - network: ctlplane vif: true - network: external - network: internal_api - network: storage - network: storage_mgmt - network: tenant - network: storage_nfs",
"(undercloud)USD openstack overcloud network provision --output <deployment_file> /home/stack/templates/<networks_definition_file>.yaml",
"(overcloud)USD openstack overcloud network vip provision --stack <stack> --output <deployment_file> /home/stack/templates/vip_data.yaml",
"cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml /home/stack/templates/roles_data.yaml",
"- name: CephStorage description: | Ceph OSD Storage node role networks: Storage subnet: storage_subnet StorageMgmt subnet: storage_mgmt_subnet StorageBackup subnet: storage_backup_subnet",
"- name: Controller description: | Controller role that has all the controller services loaded, handles Database, Messaging and Network functions, and additionally runs a ganesha service as a CephFS to NFS gateway. The gateway serves NFS exports via a VIP on a new isolated StorageNFS network. # ganesha service should always be deployed in HA configuration. CountDefault: 3 tags: - primary - controller networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet Tenant: subnet: tenant_subnet StorageNFS: subnet: storage_nfs_subnet",
"parameter_defaults: ServiceNetMap: SwiftStorageNetwork: storage_backup CephClusterNetwork: storage_backup",
"parameter_defaults: ServiceNetMap: GaneshaNetwork: <manila_nfs_network>",
"source ~/stackrc",
"openstack overcloud network provision --output overcloud-networks-deployed.yaml custom_network_data.yaml",
"openstack overcloud network vip provision --stack overcloud --output overcloud-networks-vips-deployed.yaml custom_vip_data.yaml",
"openstack overcloud node provision --stack overcloud --output overcloud-baremetal-deployed.yaml overcloud-baremetal-deploy.yaml",
"openstack overcloud deploy --templates --networks-file network_data_v2.yaml -e overcloud-networks-deployed.yaml -e overcloud-networks-vips-deployed.yaml -e overcloud-baremetal-deployed.yaml -e custom-net-single-nic-with-vlans.yaml",
"- name: InternalApi name_lower: MyCustomInternalApi",
"- name: InternalApi name_lower: MyCustomInternalApi service_net_map_replace: internal_api",
"No default route on this DHCP interface - type: interface name: nic2 use_dhcp: true defroute: false Instead use this DHCP interface as the default route - type: interface name: nic3 use_dhcp: true",
"- type: vlan device: bond1 vlan_id: 9 addresses: - ip_netmask: 172.17.0.100/16 routes: - ip_netmask: 10.1.2.0/24 next_hop: 172.17.0.1",
"network_config: - type: interface name: em1 use_dhcp: false addresses: - ip_netmask: {{ external_ip }}/{{ external_cidr}} routes: - default: true next_hop: {{ external_gateway_ip }} - ip_netmask: {{ external_ip }}/{{ external_cidr}} next_hop: {{ external_gateway_ip }} table: 2 route_options: metric 100 rules: - rule: \"iif em1 table 200\" comment: \"Route incoming traffic to em1 with table 200\" - rule: \"from 192.0.2.0/24 table 200\" comment: \"Route all traffic from 192.0.2.0/24 with table 200\" - rule: \"add blackhole from 172.19.40.0/24 table 200\" - rule: \"add unreachable iif em1 from 192.168.1.0/24\"",
"openstack overcloud deploy --templates -e /home/stack/templates/<custom-nic-template> -e <OTHER_ENVIRONMENT_FILES>",
"cat /etc/iproute2/rt_tables ip route ip rule",
"--- {% set mtu_list = [ctlplane_mtu] %} {% for network in role_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: bridge_name mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: {{ [ctlplane_host_routes] | flatten | unique }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} primary: true - type: vlan mtu: 9000 1 vlan_id: {{ storage_vlan_id }} addresses: - ip_netmask: {{ storage_ip }}/{{ storage_cidr }} routes: {{ [storage_host_routes] | flatten | unique }} - type: vlan mtu: {{ storage_mgmt_mtu }} 2 vlan_id: {{ storage_mgmt_vlan_id }} addresses: - ip_netmask: {{ storage_mgmt_ip }}/{{ storage_mgmt_cidr }} routes: {{ [storage_mgmt_host_routes] | flatten | unique }} - type: vlan mtu: {{ internal_api_mtu }} vlan_id: {{ internal_api_vlan_id }} addresses: - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }} routes: {{ [internal_api_host_routes] | flatten | unique }} - type: vlan mtu: {{ tenant_mtu }} vlan_id: {{ tenant_vlan_id }} addresses: - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }} routes: {{ [tenant_host_routes] | flatten | unique }} - type: vlan mtu: {{ external_mtu }} vlan_id: {{ external_vlan_id }} addresses: - ip_netmask: {{ external_ip }}/{{ external_cidr }} routes: {{ [external_host_routes, [{'default': True, 'next_hop': external_gateway_ip}]] | flatten | unique }}",
"ovn_emit_need_to_frag = True",
"network_config: - type: ovs_bridge name: br-ex addresses: - ip_netmask: {{ external_ip }}/{{ external_cidr }} routes: {{ external_host_routes }} members: - type: ovs_bond name: bond1 ovs_options: {{ bond_interface_ovs_options }} members: - type: interface name: nic3 primary: true - type: interface name: nic4",
"source ~/stackrc",
"vi /home/stack/templates/custom-environment.yaml",
"parameter_defaults: ExtraSysctlSettings: net.nf_conntrack_max: value: 500000",
"parameter_defaults: <role>Parameters: ExtraSysctlSettings: net.nf_conntrack_max: value: <simultaneous_connections>",
"parameter_defaults: ControllerParameters: ExtraSysctlSettings: net.nf_conntrack_max: value: 500000",
"openstack overcloud deploy --templates -e /home/stack/templates/custom-environment.yaml",
"- type: ovs_user_bridge name: br-dpdk0 members: - type: ovs_dpdk_bond name: dpdkbond0 rx_queue: {{ num_dpdk_interface_rx_queues }} members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic5",
"parameter_defaults: BondInterfaceOvsOptions: \"bond_mode=active-backup\"",
"ovs-vsctl set port <bond port> other_config:lb-output-action=true",
"- type: linux_bond name: bond_api mtu: {{ min_viable_mtu_ctlplane }} use_dhcp: false bonding_options: {{ bond_interface_ovs_options }} dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu_ctlplane }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu_ctlplane }}",
". - type: linux_bond name: bond_api mtu: {{ min_viable_mtu_ctlplane }} use_dhcp: false bonding_options: \"mode=active-backup\" dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu_ctlplane }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu_ctlplane }} - type: vlan mtu: {{ internal_api_mtu }} vlan_id: {{ internal_api_vlan_id }} addresses: - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }} routes: {{ internal_api_host_routes }}",
"- type: linux_bond name: bond_tenant mtu: {{ min_viable_mtu_ctlplane }} bonding_options: \"mode=802.3ad updelay=1000 miimon=100\" use_dhcp: false dns_servers: {{ ctlplane_dns_nameserver }} domain: {{ dns_search_domains }} members: - type: interface name: p1p1 mtu: {{ min_viable_mtu_ctlplane }} - type: interface name: p1p2 mtu: {{ min_viable_mtu_ctlplane }} - type: vlan mtu: {{ tenant_mtu }} vlan_id: {{ tenant_vlan_id }} addresses: - ip_netmask: {{ tenant_ip }}/{{ tenant_cidr }} routes: {{ tenant_host_routes }}",
"python3 convert_v1_net_data.py <network_config>.yaml",
"[stack@director ~]USD source ~/stackrc",
"cp /usr/share/openstack-tripleo-heat-templates/tools/convert_heat_nic_config_to_ansible_j2.py .",
"python3 convert_heat_nic_config_to_ansible_j2.py [--stack <overcloud> | --standalone] --networks_file <network_config.yaml> <network_template>.yaml",
"parameter_defaults: ControllerNetworkConfigTemplate: '/home/stack/templates/custom-nics/controller.j2' ComputeNetworkConfigTemplate: '/home/stack/templates/custom-nics/compute.j2'",
"resource_registry: OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml",
"{% set mtu_list = [ctlplane_mtu] %} {% for network in role_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %}",
"vlan_id: {{ internal_api_vlan_id }}",
"- type: vlan device: nic2 vlan_id: get_param: InternalApiNetworkVlanID addresses: - ip_netmask: get_param: InternalApiIpSubnet",
"- type: vlan device: nic2 vlan_id: {{ internal_api_vlan_id }} addresses: - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }}",
"{% for network in role_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {%- endfor %}",
"parameter_defaults: ControllerNetworkConfigTemplate: '/home/stack/templates/custom-nics/controller.j2' ComputeNetworkConfigTemplate: '/home/stack/templates/custom-nics/compute.j2'",
"resource_registry: OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml",
"vlan_id: {{ internal_api_vlan_id }}",
"parameter_defaults: ControllerExtraGroupVars: storage_supernet: 172.16.0.0/16",
"{{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}`",
"- name: <network_name> subnets: subnet01: ip_subnet: 172.16.1.0/24",
"- name: <network_name> subnets: subnet01: ipv6_subnet: 2001:db8:a::/64",
"- name: <network_name> subnets: subnet01: vlan: <vlan_id>",
"- name: <network_name> subnets: subnet01: vlan: <vlan_id>",
"- name: <network_name> mtu:",
"- name: <network_name> mtu:",
"- name: <network_name> subnets: subnet01: ip_subnet: 172.16.16.0/24 gateway_ip: 172.16.16.1",
"- name: <network_name> subnets: subnet01: ipv6_subnet: 2001:db8:a::/64 gateway_ipv6: 2001:db8:a::1",
"- name: <network_name> subnets: subnet01: routes: - destination: 172.18.0.0/24 nexthop: 172.18.1.254",
"- name: <network_name> subnets: subnet01: routes_ipv6: - destination: 2001:db8:b::/64 nexthop: 2001:db8:a::1"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/customizing_your_red_hat_openstack_platform_deployment/assembly_customizing-networks-for-the-RHOSP-environment |
Chapter 5. Deploying AMQ Streams using installation artifacts | Chapter 5. Deploying AMQ Streams using installation artifacts As an alternative to using the OperatorHub to deploy AMQ Streams using the AMQ Streams Operator, you can use the installation artifacts. Having prepared your environment for a deployment of AMQ Streams , this section shows: How to create the Kafka cluster Optional procedures to deploy other Kafka components according to your requirements: Kafka Connect Kafka MirrorMaker Kafka Bridge The procedures assume an OpenShift cluster is available and running. AMQ Streams is based on AMQ Streams Strimzi 0.22.x. This section describes the procedures to deploy AMQ Streams on OpenShift 4.6 and later. Note To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs. 5.1. Create the Kafka cluster In order to create your Kafka cluster, you deploy the Cluster Operator to manage the Kafka cluster, then deploy the Kafka cluster. When deploying the Kafka cluster using the Kafka resource, you can deploy the Topic Operator and User Operator at the same time. Alternatively, if you are using a non-AMQ Streams Kafka cluster, you can deploy the Topic Operator and User Operator as standalone components. Deploying a Kafka cluster with the Topic Operator and User Operator Perform these deployment steps if you want to use the Topic Operator and User Operator with a Kafka cluster managed by AMQ Streams. Deploy the Cluster Operator Use the Cluster Operator to deploy the: Kafka cluster Topic Operator User Operator Deploying a standalone Topic Operator and User Operator Perform these deployment steps if you want to use the Topic Operator and User Operator with a Kafka cluster that is not managed by AMQ Streams. Deploy the standalone Topic Operator Deploy the standalone User Operator 5.1.1. Deploying the Cluster Operator The Cluster Operator is responsible for deploying and managing Apache Kafka clusters within an OpenShift cluster. The procedures in this section show: How to deploy the Cluster Operator to watch : A single namespace Multiple namespaces All namespaces Alternative deployment options: 5.1.1.1. Watch options for a Cluster Operator deployment When the Cluster Operator is running, it starts to watch for updates of Kafka resources. You can choose to deploy the Cluster Operator to watch Kafka resources from: A single namespace (the same namespace containing the Cluster Operator) Multiple namespaces All namespaces Note AMQ Streams provides example YAML files to make the deployment process easier. The Cluster Operator watches for changes to the following resources: Kafka for the Kafka cluster. KafkaConnect for the Kafka Connect cluster. KafkaConnectS2I for the Kafka Connect cluster with Source2Image support. KafkaConnector for creating and managing connectors in a Kafka Connect cluster. KafkaMirrorMaker for the Kafka MirrorMaker instance. KafkaBridge for the Kafka Bridge instance When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as StatefulSets, Services and ConfigMaps. Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource. 
Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption. When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources. 5.1.1.2. Deploying the Cluster Operator to watch a single namespace This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources in a single namespace in your OpenShift cluster. Prerequisites This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions , ClusterRoles and ClusterRoleBindings . Use of Role Base Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin . Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Deploy the Cluster Operator: oc create -f install/cluster-operator -n my-cluster-operator-namespace Verify that the Cluster Operator was successfully deployed: oc get deployments 5.1.1.3. Deploying the Cluster Operator to watch multiple namespaces This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across multiple namespaces in your OpenShift cluster. Prerequisites This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions , ClusterRoles and ClusterRoleBindings . Use of Role Base Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin . Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to add a list of all the namespaces the Cluster Operator will watch to the STRIMZI_NAMESPACE environment variable. For example, in this procedure the Cluster Operator will watch the namespaces watched-namespace-1 , watched-namespace-2 , watched-namespace-3 . apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3 For each namespace listed, install the RoleBindings . 
In this example, we replace watched-namespace in these commands with the namespaces listed in the step, repeating them for watched-namespace-1 , watched-namespace-2 , watched-namespace-3 : oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n watched-namespace oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n watched-namespace oc create -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n watched-namespace Deploy the Cluster Operator: oc create -f install/cluster-operator -n my-cluster-operator-namespace Verify that the Cluster Operator was successfully deployed: oc get deployments 5.1.1.4. Deploying the Cluster Operator to watch all namespaces This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across all namespaces in your OpenShift cluster. When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created. Prerequisites This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions , ClusterRoles and ClusterRoleBindings . Use of Role Base Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin . Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace . On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to set the value of the STRIMZI_NAMESPACE environment variable to * . apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: # ... serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: "*" # ... Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator. oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator oc create clusterrolebinding strimzi-cluster-operator-topic-operator-delegation --clusterrole=strimzi-topic-operator --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator Replace my-cluster-operator-namespace with the namespace you want to install the Cluster Operator into. Deploy the Cluster Operator to your OpenShift cluster. oc create -f install/cluster-operator -n my-cluster-operator-namespace Verify that the Cluster Operator was successfully deployed: oc get deployments 5.1.2. Deploying Kafka Apache Kafka is an open-source distributed publish-subscribe messaging system for fault-tolerant real-time data feeds. 
The procedures in this section show: How to use the Cluster Operator to deploy: An ephemeral or persistent Kafka cluster The Topic Operator and User Operator by configuring the Kafka custom resource: Topic Operator User Operator Alternative standalone deployment procedures for the Topic Operator and User Operator: Deploy the standalone Topic Operator Deploy the standalone User Operator When installing Kafka, AMQ Streams also installs a ZooKeeper cluster and adds the necessary configuration to connect Kafka with ZooKeeper. 5.1.2.1. Deploying the Kafka cluster This procedure shows how to deploy a Kafka cluster to your OpenShift using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a Kafka resource. AMQ Streams provides example YAMLs files for deployment in examples/kafka/ : kafka-persistent.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes. kafka-jbod.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes). kafka-persistent-single.yaml Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node. kafka-ephemeral.yaml Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes. kafka-ephemeral-single.yaml Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node. In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment: Ephemeral cluster In general, an ephemeral (or temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down. Persistent cluster A persistent Kafka cluster uses PersistentVolumes to store ZooKeeper and Kafka data. The PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume . For example, it can use Amazon EBS volumes in Amazon AWS deployments without any changes in the YAML files. The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning. The example YAML files specify the latest supported Kafka version, and configuration for its supported log message format version and inter-broker protocol version. Updates to these properties are required when upgrading Kafka . The example clusters are named my-cluster by default. The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file. Default cluster name and specified Kafka versions apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 2.7.0 #... config: #... log.message.format.version: 2.7 inter.broker.protocol.version: 2.7 # ... For more information about configuring the Kafka resource, see Kafka cluster configuration in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Create and deploy an ephemeral or persistent cluster. For development or testing, you might prefer to use an ephemeral cluster. You can use a persistent cluster in any situation. 
To create and deploy an ephemeral cluster: oc apply -f examples/kafka/kafka-ephemeral.yaml To create and deploy a persistent cluster: oc apply -f examples/kafka/kafka-persistent.yaml Verify that the Kafka cluster was successfully deployed: oc get deployments 5.1.2.2. Deploying the Topic Operator using the Cluster Operator This procedure describes how to deploy the Topic Operator using the Cluster Operator. You configure the entityOperator property of the Kafka resource to include the topicOperator . If you want to use the Topic Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the Topic Operator as a standalone component . For more information about configuring the entityOperator and topicOperator properties, see Configuring the Entity Operator in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include topicOperator : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the Topic Operator spec using the properties described in EntityTopicOperatorSpec schema reference . Use an empty object ( {} ) if you want all properties to use their default values. Create or update the resource: Use oc apply : oc apply -f <your-file> 5.1.2.3. Deploying the User Operator using the Cluster Operator This procedure describes how to deploy the User Operator using the Cluster Operator. You configure the entityOperator property of the Kafka resource to include the userOperator . If you want to use the User Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the User Operator as a standalone component . For more information about configuring the entityOperator and userOperator properties, see Configuring the Entity Operator in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include userOperator : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the User Operator spec using the properties described in EntityUserOperatorSpec schema reference in the Using AMQ Streams on OpenShift guide. Use an empty object ( {} ) if you want all properties to use their default values. Create or update the resource: oc apply -f <your-file> 5.1.3. Alternative standalone deployment options for AMQ Streams Operators When deploying a Kafka cluster using the Cluster Operator, you can also deploy the Topic Operator and User Operator. Alternatively, you can perform a standalone deployment. A standalone deployment means the Topic Operator and User Operator can operate with a Kafka cluster that is not managed by AMQ Streams. 5.1.3.1. Deploying the standalone Topic Operator This procedure shows how to deploy the Topic Operator as a standalone component. A standalone deployment requires configuration of environment variables, and is more complicated than deploying the Topic Operator using the Cluster Operator . However, a standalone deployment is more flexible as the Topic Operator can operate with any Kafka cluster, not necessarily one deployed by the Cluster Operator. Prerequisites You need an existing Kafka cluster for the Topic Operator to connect to. 
Procedure Edit the Deployment.spec.template.spec.containers[0].env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml file by setting: STRIMZI_KAFKA_BOOTSTRAP_SERVERS to list the bootstrap brokers in your Kafka cluster, given as a comma-separated list of hostname : port pairs. STRIMZI_ZOOKEEPER_CONNECT to list the ZooKeeper nodes, given as a comma-separated list of hostname : port pairs. This should be the same ZooKeeper cluster that your Kafka cluster is using. STRIMZI_NAMESPACE to the OpenShift namespace in which you want the operator to watch for KafkaTopic resources. STRIMZI_RESOURCE_LABELS to the label selector used to identify the KafkaTopic resources managed by the operator. STRIMZI_FULL_RECONCILIATION_INTERVAL_MS to specify the interval between periodic reconciliations, in milliseconds. STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS to specify the number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential back-off. Consider increasing this value when topic creation could take more time due to the number of partitions or replicas. Default 6 . STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS to the ZooKeeper session timeout, in milliseconds. For example, 10000 . Default 20000 (20 seconds). STRIMZI_TOPICS_PATH to the Zookeeper node path where the Topic Operator stores its metadata. Default /strimzi/topics . STRIMZI_TLS_ENABLED to enable TLS support for encrypting the communication with Kafka brokers. Default true . STRIMZI_TRUSTSTORE_LOCATION to the path to the truststore containing certificates for enabling TLS based communication. Mandatory only if TLS is enabled through STRIMZI_TLS_ENABLED . STRIMZI_TRUSTSTORE_PASSWORD to the password for accessing the truststore defined by STRIMZI_TRUSTSTORE_LOCATION . Mandatory only if TLS is enabled through STRIMZI_TLS_ENABLED . STRIMZI_KEYSTORE_LOCATION to the path to the keystore containing private keys for enabling TLS based communication. Mandatory only if TLS is enabled through STRIMZI_TLS_ENABLED . STRIMZI_KEYSTORE_PASSWORD to the password for accessing the keystore defined by STRIMZI_KEYSTORE_LOCATION . Mandatory only if TLS is enabled through STRIMZI_TLS_ENABLED . STRIMZI_LOG_LEVEL to the level for printing logging messages. The value can be set to: ERROR , WARNING , INFO , DEBUG , and TRACE . Default INFO . STRIMZI_JAVA_OPTS (optional) to the Java options used for the JVM running the Topic Operator. An example is -Xmx=512M -Xms=256M . STRIMZI_JAVA_SYSTEM_PROPERTIES (optional) to list the -D options which are set to the Topic Operator. An example is -Djavax.net.debug=verbose -DpropertyName=value . Deploy the Topic Operator: oc create -f install/topic-operator Verify that the Topic Operator has been deployed successfully: oc describe deployment strimzi-topic-operator The Topic Operator is deployed when the Replicas: entry shows 1 available . Note You may experience a delay with the deployment if you have a slow connection to the OpenShift cluster and the images have not been downloaded before. 5.1.3.2. Deploying the standalone User Operator This procedure shows how to deploy the User Operator as a standalone component. A standalone deployment requires configuration of environment variables, and is more complicated than deploying the User Operator using the Cluster Operator . However, a standalone deployment is more flexible as the User Operator can operate with any Kafka cluster, not necessarily one deployed by the Cluster Operator. 
Prerequisites You need an existing Kafka cluster for the User Operator to connect to. Procedure Edit the following Deployment.spec.template.spec.containers[0].env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml file by setting: STRIMZI_KAFKA_BOOTSTRAP_SERVERS to list the Kafka brokers, given as a comma-separated list of hostname : port pairs. STRIMZI_ZOOKEEPER_CONNECT to list the ZooKeeper nodes, given as a comma-separated list of hostname : port pairs. This must be the same ZooKeeper cluster that your Kafka cluster is using. Connecting to ZooKeeper nodes with TLS encryption is not supported. STRIMZI_NAMESPACE to the OpenShift namespace in which you want the operator to watch for KafkaUser resources. STRIMZI_LABELS to the label selector used to identify the KafkaUser resources managed by the operator. STRIMZI_FULL_RECONCILIATION_INTERVAL_MS to specify the interval between periodic reconciliations, in milliseconds. STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS to the ZooKeeper session timeout, in milliseconds. For example, 10000 . Default 20000 (20 seconds). STRIMZI_CA_CERT_NAME to point to an OpenShift Secret that contains the public key of the Certificate Authority for signing new user certificates for TLS client authentication. The Secret must contain the public key of the Certificate Authority under the key ca.crt . STRIMZI_CA_KEY_NAME to point to an OpenShift Secret that contains the private key of the Certificate Authority for signing new user certificates for TLS client authentication. The Secret must contain the private key of the Certificate Authority under the key ca.key . STRIMZI_CLUSTER_CA_CERT_SECRET_NAME to point to an OpenShift Secret containing the public key of the Certificate Authority used for signing Kafka brokers certificates for enabling TLS-based communication. The Secret must contain the public key of the Certificate Authority under the key ca.crt . This environment variable is optional and should be set only if the communication with the Kafka cluster is TLS based. STRIMZI_EO_KEY_SECRET_NAME to point to an OpenShift Secret containing the private key and related certificate for TLS client authentication against the Kafka cluster. The Secret must contain the keystore with the private key and certificate under the key entity-operator.p12 , and the related password under the key entity-operator.password . This environment variable is optional and should be set only if TLS client authentication is needed when the communication with the Kafka cluster is TLS based. STRIMZI_CA_VALIDITY the validity period for the Certificate Authority. Default is 365 days. STRIMZI_CA_RENEWAL the renewal period for the Certificate Authority. STRIMZI_LOG_LEVEL to the level for printing logging messages. The value can be set to: ERROR , WARNING , INFO , DEBUG , and TRACE . Default INFO . STRIMZI_GC_LOG_ENABLED to enable garbage collection (GC) logging. Default true . Default is 30 days to initiate certificate renewal before the old certificates expire. STRIMZI_JAVA_OPTS (optional) to the Java options used for the JVM running User Operator. An example is -Xmx=512M -Xms=256M . STRIMZI_JAVA_SYSTEM_PROPERTIES (optional) to list the -D options which are set to the User Operator. An example is -Djavax.net.debug=verbose -DpropertyName=value . 
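The following is a minimal sketch of what the env section of install/user-operator/05-Deployment-strimzi-user-operator.yaml might look like after editing a few of these variables; the bootstrap address, ZooKeeper address, namespace, and label selector are placeholders for values from your own cluster:

    env:
      - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS
        value: my-kafka-bootstrap:9092
      - name: STRIMZI_ZOOKEEPER_CONNECT
        value: my-zookeeper-client:2181
      - name: STRIMZI_NAMESPACE
        value: my-kafka-namespace
      - name: STRIMZI_LABELS
        value: "strimzi.io/cluster=my-cluster"
      - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
        value: "120000"
      - name: STRIMZI_LOG_LEVEL
        value: INFO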
Deploy the User Operator: oc create -f install/user-operator Verify that the User Operator has been deployed successfully: oc describe deployment strimzi-user-operator The User Operator is deployed when the Replicas: entry shows 1 available . Note You may experience a delay with the deployment if you have a slow connection to the OpenShift cluster and the images have not been downloaded before. 5.2. Deploy Kafka Connect Kafka Connect is a tool for streaming data between Apache Kafka and external systems. In AMQ Streams, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by AMQ Streams. Using the concept of connectors , Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability. Kafka Connect is typically used to integrate Kafka with external databases and storage and messaging systems. The procedures in this section show how to: Deploy a Kafka Connect cluster using a KafkaConnect resource Run multiple Kafka Connect instances Create a Kafka Connect image containing the connectors you need to make your connection Create and manage connectors using a KafkaConnector resource or the Kafka Connect REST API Deploy a KafkaConnector resource to Kafka Connect Restart a Kafka connector by annotating a KafkaConnector resource Restart a Kafka connector task by annotating a KafkaConnector resource Note The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context. 5.2.1. Deploying Kafka Connect to your OpenShift cluster This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator. A Kafka Connect cluster is implemented as a Deployment with a configurable number of nodes (also called workers ) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable. The deployment uses a YAML file to provide the specification to create a KafkaConnect resource. In this procedure, we use the example file provided with AMQ Streams: examples/connect/kafka-connect.yaml For information about configuring the KafkaConnect resource (or the KafkaConnectS2I resource with Source-to-Image (S2I) support), see Kafka Connect cluster configuration in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Running Kafka cluster. Procedure Deploy Kafka Connect to your OpenShift cluster. For a Kafka cluster with 3 or more brokers, use the examples/connect/kafka-connect.yaml file. For a Kafka cluster with less than 3 brokers, use the examples/connect/kafka-connect-single-node-kafka.yaml file. oc apply -f examples/connect/kafka-connect.yaml Verify that Kafka Connect was successfully deployed: oc get deployments 5.2.2. Kafka Connect configuration for multiple instances If you are running multiple instances of Kafka Connect, you have to change the default configuration of the following config properties: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: connect-cluster 1 offset.storage.topic: connect-cluster-offsets 2 config.storage.topic: connect-cluster-configs 3 status.storage.topic: connect-cluster-status 4 # ... # ... 1 Kafka Connect cluster group that the instance belongs to. 2 Kafka topic that stores connector offsets. 
3 Kafka topic that stores connector and task status configurations. 4 Kafka topic that stores connector and task status updates. Note Values for the three topics must be the same for all Kafka Connect instances with the same group.id . Unless you change the default settings, each Kafka Connect instance connecting to the same Kafka cluster is deployed with the same values. What happens, in effect, is all instances are coupled to run in a cluster and use the same topics. If multiple Kafka Connect clusters try to use the same topics, Kafka Connect will not work as expected and generate errors. If you wish to run multiple Kafka Connect instances, change the values of these properties for each instance. 5.2.3. Extending Kafka Connect with connector plug-ins The AMQ Streams container images for Kafka Connect include two built-in file connectors for moving file-based data into and out of your Kafka cluster. Table 5.1. File connectors File Connector Description FileStreamSourceConnector Transfers data to your Kafka cluster from a file (the source). FileStreamSinkConnector Transfers data from your Kafka cluster to a file (the sink). The procedures in this section show how to add your own connector classes to connector images by: Creating a new container image automatically using AMQ Streams Creating a container image from the Kafka Connect base image (manually or using continuous integration) Creating a container image using OpenShift builds and Source-to-Image (S2I) (available only on OpenShift) Important You create the configuration for connectors directly using the Kafka Connect REST API or KafkaConnector custom resources . 5.2.3.1. Creating a new container image automatically using AMQ Streams This procedure shows how to configure Kafka Connect so that AMQ Streams automatically builds a new container image with additional connectors. You define the connector plugins using the .spec.build.plugins property of the KafkaConnect custom resource. AMQ Streams will automatically download and add the connector plugins into a new container image. The container is pushed into the container repository specified in .spec.build.output and automatically used in the Kafka Connect deployment. Prerequisites The Cluster Operator must be deployed. A container registry. You need to provide your own container registry where images can be pushed to, stored, and pulled from. AMQ Streams supports private container registries as well as public registries such as Quay or Docker Hub . Procedure Configure the KafkaConnect custom resource by specifying the container registry in .spec.build.output , and additional connectors in .spec.build.plugins : apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 #... 
build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.tar.gz sha512sum: 962a12151bdf9a5a30627eebac739955a4fd95a08d373b86bdcea2b4d0c27dd6e1edd5cb548045e115e33a9e69b1b2a352bee24df035a0447cb820077af00c03 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.7.0/camel-telegram-kafka-connector-0.7.0-package.tar.gz sha512sum: a9b1ac63e3284bea7836d7d24d84208c49cdf5600070e6bd1535de654f6920b74ad950d51733e8020bf4187870699819f54ef5859c7846ee4081507f48873479 #... 1 The specification for the Kafka Connect cluster . 2 (Required) Configuration of the container registry where new images are pushed. 3 (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact . Create or update the resource: Wait for the new container image to build, and for the Kafka Connect cluster to be deployed. Use the Kafka Connect REST API or the KafkaConnector custom resources to use the connector plugins you added. Additional resources See the Using AMQ Streams on OpenShift guide for more information on: Kafka Connect Build schema reference 5.2.3.2. Creating a Docker image from the Kafka Connect base image This procedure shows how to create a custom image and add it to the /opt/kafka/plugins directory. You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plug-ins. At startup, the AMQ Streams version of Kafka Connect loads any third-party connector plug-ins contained in the /opt/kafka/plugins directory. Prerequisites The Cluster Operator must be deployed. Procedure Create a new Dockerfile using registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 as the base image: Example plug-in file Build the container image. Push your custom image to your container registry. Point to the new container image. You can either: Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource. If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES variable in the Cluster Operator. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 #... image: my-new-container-image 2 config: 3 #... 1 The specification for the Kafka Connect cluster . 2 The docker image for the pods. 3 Configuration of the Kafka Connect workers (not connectors). or In the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file, edit the STRIMZI_KAFKA_CONNECT_IMAGES variable to point to the new container image, and then reinstall the Cluster Operator. Additional resources See the Using AMQ Streams on OpenShift guide for more information on: Container image configuration and the KafkaConnect.spec.image property Cluster Operator configuration and the STRIMZI_KAFKA_CONNECT_IMAGES variable 5.2.3.3. Creating a container image using OpenShift builds and Source-to-Image This procedure shows how to use OpenShift builds and the Source-to-Image (S2I) framework to create a new container image. An OpenShift build takes a builder image with S2I support, together with source code and binaries provided by the user, and uses them to build a new container image. 
Once built, container images are stored in OpenShift's local container image repository and are available for use in deployments. A Kafka Connect builder image with S2I support is provided on the Red Hat Ecosystem Catalog as part of the registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 image. This S2I image takes your binaries (with plug-ins and connectors) and stores them in the /tmp/kafka-plugins/s2i directory. It creates a new Kafka Connect image from this directory, which can then be used with the Kafka Connect deployment. When started using the enhanced image, Kafka Connect loads any third-party plug-ins from the /tmp/kafka-plugins/s2i directory. Important With the introduction of build configuration to the KafkaConnect resource, AMQ Streams can now automatically build a container image with the connector plugins you require for your data connections. As a result, support for Kafka Connect with Source-to-Image (S2I) is deprecated. To prepare for this change, you can migrate Kafka Connect S2I instances to Kafka Connect instances . Procedure On the command line, use the oc apply command to create and deploy a Kafka Connect S2I cluster: oc apply -f examples/connect/kafka-connect-s2i.yaml Create a directory with Kafka Connect plug-ins: Use the oc start-build command to start a new build of the image using the prepared directory: oc start-build my-connect-cluster-connect --from-dir ./ my-plugins / Note The name of the build is the same as the name of the deployed Kafka Connect cluster. When the build has finished, the new image is used automatically by the Kafka Connect deployment. 5.2.4. Creating and managing connectors When you have created a container image for your connector plug-in, you need to create a connector instance in your Kafka Connect cluster. You can then configure, monitor, and manage a running connector instance. A connector is an instance of a particular connector class that knows how to communicate with the relevant external system in terms of messages. Connectors are available for many external systems, or you can create your own. You can create source and sink types of connector. Source connector A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages. Sink connector A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system. AMQ Streams provides two APIs for creating and managing connectors: KafkaConnector resources (referred to as KafkaConnectors) Kafka Connect REST API Using the APIs, you can: Check the status of a connector instance Reconfigure a running connector Increase or decrease the number of connector tasks for a connector instance Restart connectors Restart connector tasks, including failed tasks Pause a connector instance Resume a previously paused connector instance Delete a connector instance 5.2.4.1. KafkaConnector resources KafkaConnectors allow you to create and manage connector instances for Kafka Connect in an OpenShift-native way, so an HTTP client such as cURL is not required. Like other Kafka resources, you declare a connector's desired state in a KafkaConnector YAML file that is deployed to your OpenShift cluster to create the connector instance. KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to. You manage a running connector instance by updating its corresponding KafkaConnector resource, and then applying the updates. 
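As an illustration of updating a running connector through its KafkaConnector resource, you might raise tasksMax and reapply the resource. This is a sketch only: the connector name, cluster label, and configuration mirror the example connector used later in this section, and the new tasksMax value is an assumption:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 4                          # increased from 2 to run more tasks
  config:
    file: "/opt/kafka/LICENSE"
    topic: my-topic
```

After you apply the updated resource, the Cluster Operator reconciles the change into the running connector instance.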
Annotations are used to manually restart connector instances and connector tasks. You remove a connector by deleting its corresponding KafkaConnector. To ensure compatibility with earlier versions of AMQ Streams, KafkaConnectors are disabled by default. To enable them for a Kafka Connect cluster, you must use annotations on the KafkaConnect resource. For instructions, see Configuring Kafka Connect in the Using AMQ Streams on OpenShift guide. When KafkaConnectors are enabled, the Cluster Operator begins to watch for them. It updates the configurations of running connector instances to match the configurations defined in their KafkaConnectors. AMQ Streams includes an example KafkaConnector , named examples/connect/source-connector.yaml . You can use this example to create and manage a FileStreamSourceConnector and a FileStreamSinkConnector as described in Section 5.2.5, "Deploying the example KafkaConnector resources" . 5.2.4.2. Availability of the Kafka Connect REST API The Kafka Connect REST API is available on port 8083 as the <connect-cluster-name>-connect-api service. If KafkaConnectors are enabled, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. The operations supported by the REST API are described in the Apache Kafka documentation . 5.2.5. Deploying the example KafkaConnector resources AMQ Streams includes an example KafkaConnector in examples/connect/source-connector.yaml . This creates a basic FileStreamSourceConnector instance that sends each line of the Kafka license file (an example file source) to a single Kafka topic. This procedure describes how to create: A FileStreamSourceConnector that reads data from the Kafka license file (the source) and writes the data as messages to a Kafka topic. A FileStreamSinkConnector that reads messages from the Kafka topic and writes the messages to a temporary file (the sink). Note In a production environment, you prepare container images containing your desired Kafka Connect connectors, as described in Section 5.2.3, "Extending Kafka Connect with connector plug-ins" . The FileStreamSourceConnector and FileStreamSinkConnector are provided as examples. Running these connectors in containers as described here is unlikely to be suitable for production use cases. Prerequisites A Kafka Connect deployment KafkaConnectors are enabled in the Kafka Connect deployment The Cluster Operator is running Procedure Edit the examples/connect/source-connector.yaml file: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 config: 5 file: "/opt/kafka/LICENSE" 6 topic: my-topic 7 # ... 1 Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource. 2 Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to. 3 Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster. 4 Maximum number of Kafka Connect Tasks that the connector can create. 5 Connector configuration as key-value pairs. 6 This example source connector configuration reads data from the /opt/kafka/LICENSE file. 7 Kafka topic to publish the source data to. 
Create the source KafkaConnector in your OpenShift cluster: oc apply -f examples/connect/source-connector.yaml Create an examples/connect/sink-connector.yaml file: touch examples/connect/sink-connector.yaml Paste the following YAML into the sink-connector.yaml file: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: "/tmp/my-file" 3 topics: my-topic 4 1 Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster. 2 Connector configuration as key-value pairs. 3 Temporary file to publish the source data to. 4 Kafka topic to read the source data from. Create the sink KafkaConnector in your OpenShift cluster: oc apply -f examples/connect/sink-connector.yaml Check that the connector resources were created: oc get kctr --selector strimzi.io/cluster= MY-CONNECT-CLUSTER -o name my-source-connector my-sink-connector Replace MY-CONNECT-CLUSTER with your Kafka Connect cluster. In the container, execute kafka-console-consumer.sh to read the messages that were written to the topic by the source connector: oc exec MY-CLUSTER -kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server MY-CLUSTER -kafka-bootstrap. NAMESPACE .svc:9092 --topic my-topic --from-beginning Source and sink connector configuration options The connector configuration is defined in the spec.config property of the KafkaConnector resource. The FileStreamSourceConnector and FileStreamSinkConnector classes support the same configuration options as the Kafka Connect REST API. Other connectors support different configuration options. Table 5.2. Configuration options for the FileStreamSource connector class Name Type Default value Description file String Null Source file to write messages to. If not specified, the standard input is used. topic List Null The Kafka topic to publish data to. Table 5.3. Configuration options for FileStreamSinkConnector class Name Type Default value Description file String Null Destination file to write messages to. If not specified, the standard output is used. topics List Null One or more Kafka topics to read data from. topics.regex String Null A regular expression matching one or more Kafka topics to read data from. Additional resources Section 5.2.4, "Creating and managing connectors" 5.2.6. Performing a restart of a Kafka connector This procedure describes how to manually trigger a restart of a Kafka connector by using an OpenShift annotation. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector you want to restart: oc get KafkaConnector To restart the connector, annotate the KafkaConnector resource in OpenShift. For example, using oc annotate : oc annotate KafkaConnector KAFKACONNECTOR-NAME strimzi.io/restart=true Wait for the reconciliation to occur (every two minutes by default). The Kafka connector is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. Additional resources Creating and managing connectors in the Deploying and Upgrading guide. 5.2.7. Performing a restart of a Kafka connector task This procedure describes how to manually trigger a restart of a Kafka connector task by using an OpenShift annotation. 
Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector task you want to restart: oc get KafkaConnector Find the ID of the task to be restarted from the KafkaConnector custom resource. Task IDs are non-negative integers, starting from 0. oc describe KafkaConnector KAFKACONNECTOR-NAME To restart the connector task, annotate the KafkaConnector resource in OpenShift. For example, using oc annotate to restart task 0: oc annotate KafkaConnector KAFKACONNECTOR-NAME strimzi.io/restart-task=0 Wait for the reconciliation to occur (every two minutes by default). The Kafka connector task is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. Additional resources Creating and managing connectors in the Deploying and Upgrading guide. 5.3. Deploy Kafka MirrorMaker The Cluster Operator deploys one or more Kafka MirrorMaker replicas to replicate data between Kafka clusters. This process is called mirroring to avoid confusion with the Kafka partitions replication concept. MirrorMaker consumes messages from the source cluster and republishes those messages to the target cluster. 5.3.1. Deploying Kafka MirrorMaker to your OpenShift cluster This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker or KafkaMirrorMaker2 resource depending on the version of MirrorMaker deployed. In this procedure, we use the example files provided with AMQ Streams: examples/mirror-maker/kafka-mirror-maker.yaml examples/mirror-maker/kafka-mirror-maker-2.yaml For information about configuring KafkaMirrorMaker or KafkaMirrorMaker2 resources, see Kafka MirrorMaker cluster configuration in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Deploy Kafka MirrorMaker to your OpenShift cluster: For MirrorMaker: oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml For MirrorMaker 2.0: oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml Verify that MirrorMaker was successfully deployed: oc get deployments 5.4. Deploy Kafka Bridge The Cluster Operator deploys one or more Kafka bridge replicas to send data between Kafka clusters and clients via HTTP API. 5.4.1. Deploying Kafka Bridge to your OpenShift cluster This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaBridge resource. In this procedure, we use the example file provided with AMQ Streams: examples/bridge/kafka-bridge.yaml For information about configuring the KafkaBridge resource, see Kafka Bridge cluster configuration in the Using AMQ Streams on OpenShift guide. Prerequisites The Cluster Operator must be deployed. Procedure Deploy Kafka Bridge to your OpenShift cluster: oc apply -f examples/bridge/kafka-bridge.yaml Verify that Kafka Bridge was successfully deployed: oc get deployments | [
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3",
"create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n watched-namespace create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n watched-namespace create -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n watched-namespace",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments",
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: # serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: \"*\" #",
"create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-topic-operator-delegation --clusterrole=strimzi-topic-operator --serviceaccount my-cluster-operator-namespace :strimzi-cluster-operator",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"get deployments",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 2.7.0 # config: # log.message.format.version: 2.7 inter.broker.protocol.version: 2.7 #",
"apply -f examples/kafka/kafka-ephemeral.yaml",
"apply -f examples/kafka/kafka-persistent.yaml",
"get deployments",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <your-file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <your-file>",
"create -f install/topic-operator",
"describe deployment strimzi-topic-operator",
"create -f install/user-operator",
"describe deployment strimzi-user-operator",
"apply -f examples/connect/kafka-connect.yaml",
"get deployments",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: connect-cluster 1 offset.storage.topic: connect-cluster-offsets 2 config.storage.topic: connect-cluster-configs 3 status.storage.topic: connect-cluster-status 4 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.tar.gz sha512sum: 962a12151bdf9a5a30627eebac739955a4fd95a08d373b86bdcea2b4d0c27dd6e1edd5cb548045e115e33a9e69b1b2a352bee24df035a0447cb820077af00c03 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.7.0/camel-telegram-kafka-connector-0.7.0-package.tar.gz sha512sum: a9b1ac63e3284bea7836d7d24d84208c49cdf5600070e6bd1535de654f6920b74ad950d51733e8020bf4187870699819f54ef5859c7846ee4081507f48873479 #",
"oc apply -f KAFKA-CONNECT-CONFIG-FILE",
"FROM registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 USER root:root COPY ./ my-plugins / /opt/kafka/plugins/ USER 1001",
"tree ./ my-plugins / ./ my-plugins / ├── debezium-connector-mongodb │ ├── bson-3.4.2.jar │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mongodb-0.7.1.jar │ ├── debezium-core-0.7.1.jar │ ├── LICENSE.txt │ ├── mongodb-driver-3.4.2.jar │ ├── mongodb-driver-core-3.4.2.jar │ └── README.md ├── debezium-connector-mysql │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mysql-0.7.1.jar │ ├── debezium-core-0.7.1.jar │ ├── LICENSE.txt │ ├── mysql-binlog-connector-java-0.13.0.jar │ ├── mysql-connector-java-5.1.40.jar │ ├── README.md │ └── wkb-1.0.2.jar └── debezium-connector-postgres ├── CHANGELOG.md ├── CONTRIBUTE.md ├── COPYRIGHT.txt ├── debezium-connector-postgres-0.7.1.jar ├── debezium-core-0.7.1.jar ├── LICENSE.txt ├── postgresql-42.0.0.jar ├── protobuf-java-2.6.1.jar └── README.md",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # image: my-new-container-image 2 config: 3 #",
"apply -f examples/connect/kafka-connect-s2i.yaml",
"tree ./ my-plugins / ./ my-plugins / ├── debezium-connector-mongodb │ ├── bson-3.4.2.jar │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mongodb-0.7.1.jar │ ├── debezium-core-0.7.1.jar │ ├── LICENSE.txt │ ├── mongodb-driver-3.4.2.jar │ ├── mongodb-driver-core-3.4.2.jar │ └── README.md ├── debezium-connector-mysql │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mysql-0.7.1.jar │ ├── debezium-core-0.7.1.jar │ ├── LICENSE.txt │ ├── mysql-binlog-connector-java-0.13.0.jar │ ├── mysql-connector-java-5.1.40.jar │ ├── README.md │ └── wkb-1.0.2.jar └── debezium-connector-postgres ├── CHANGELOG.md ├── CONTRIBUTE.md ├── COPYRIGHT.txt ├── debezium-connector-postgres-0.7.1.jar ├── debezium-core-0.7.1.jar ├── LICENSE.txt ├── postgresql-42.0.0.jar ├── protobuf-java-2.6.1.jar └── README.md",
"start-build my-connect-cluster-connect --from-dir ./ my-plugins /",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 config: 5 file: \"/opt/kafka/LICENSE\" 6 topic: my-topic 7 #",
"apply -f examples/connect/source-connector.yaml",
"touch examples/connect/sink-connector.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: \"/tmp/my-file\" 3 topics: my-topic 4",
"apply -f examples/connect/sink-connector.yaml",
"get kctr --selector strimzi.io/cluster= MY-CONNECT-CLUSTER -o name my-source-connector my-sink-connector",
"exec MY-CLUSTER -kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server MY-CLUSTER -kafka-bootstrap. NAMESPACE .svc:9092 --topic my-topic --from-beginning",
"get KafkaConnector",
"annotate KafkaConnector KAFKACONNECTOR-NAME strimzi.io/restart=true",
"get KafkaConnector",
"describe KafkaConnector KAFKACONNECTOR-NAME",
"annotate KafkaConnector KAFKACONNECTOR-NAME strimzi.io/restart-task=0",
"apply -f examples/mirror-maker/kafka-mirror-maker.yaml",
"apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml",
"get deployments",
"apply -f examples/bridge/kafka-bridge.yaml",
"get deployments"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-tasks_str |
Chapter 3. Installing with the Assisted Installer UI | Chapter 3. Installing with the Assisted Installer UI After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster. 3.1. Pre-installation considerations Before installing OpenShift Container Platform with the Assisted Installer, you must consider the following configuration choices: Which base domain to use Which OpenShift Container Platform product version to install Whether to install a full cluster or single-node OpenShift Whether to use a DHCP server or a static network configuration Whether to use IPv4 or dual-stack networking Whether to install OpenShift Virtualization Whether to install Red Hat OpenShift Data Foundation Whether to install Multicluster Engine Whether to integrate with the platform when installing on vSphere or Nutanix Whether to install a mixed-cluster architecture Important If you intend to install any of the Operators, refer to the relevant hardware and storage requirements in Optional: Installing Operators . 3.2. Setting the cluster details To create a cluster with the Assisted Installer web user interface, use the following procedure. Procedure Log in to the Red Hat Hybrid Cloud Console . In the Red Hat OpenShift tile, click Scale your applications . In the menu, click Clusters . Click Create cluster . Click the Datacenter tab. Under Assisted Installer , click Create cluster . Enter a name for the cluster in the Cluster name field. Enter a base domain for the cluster in the Base domain field. All subdomains for the cluster will use this base domain. Note The base domain must be a valid DNS name. You must not have a wild card domain set up for the base domain. Select the version of OpenShift Container Platform to install. Important For IBM Power and IBM zSystems platforms, only OpenShift Container Platform version 4.13 and later is supported. For a mixed-architecture cluster installation, select OpenShift Container Platform version 4.12 or later, and use the -multi option. For instructions on installing a mixed-architecture cluster, see Additional resources . Optional: Select Install single node OpenShift (SNO) if you want to install OpenShift Container Platform on a single node. Note Currently, SNO is not supported on IBM zSystems and IBM Power platforms. Optional: The Assisted Installer already has the pull secret associated with your account. If you want to use a different pull secret, select Edit pull secret . Optional: If you are installing OpenShift Container Platform on a third-party platform, select the platform from the Integrate with external partner platforms list. Valid values are Nutanix , vSphere or Oracle Cloud Infrastructure . Assisted Installer defaults to having no platform integration. Note For details on each of the external partner integrations, see Additional Resources . Important Assisted Installer supports Oracle Cloud Infrastructure (OCI) integration from OpenShift Container Platform 4.14 and later. For OpenShift Container Platform 4.14, the OCI integration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features - Scope of Support . Optional: Assisted Installer defaults to using x86_64 CPU architecture. If you are installing OpenShift Container Platform on a different architecture select the respective architecture to use. Valid values are arm64 , ppc64le , and s390x . Keep in mind, some features are not available with arm64 , ppc64le , and s390x CPU architectures. Important For a mixed-architecture cluster installation, use the default x86_64 architecture. For instructions on installing a mixed-architecture cluster, see Additional resources . Optional: The Assisted Installer defaults to DHCP networking. If you are using a static IP configuration, bridges or bonds for the cluster nodes instead of DHCP reservations, select Static IP, bridges, and bonds . Note A Static IP configuration is not supported for OpenShift Container Platform installations on Oracle Cloud Infrastructure (OCI). Optional: If you want to enable encryption of the installation disks, under Enable encryption of installation disks you can select Control plane node, worker for single-node OpenShift. For multi-node clusters, you can select Control plane nodes to encrypt the control plane node installation disks and select Workers to encrypt worker node installation disks. Important You cannot change the base domain, the SNO checkbox, the CPU architecture, the host's network configuration, or the disk-encryption after installation begins. Additional resources Optional: Installing on Nutanix Optional: Installing on vSphere 3.3. Optional: Configuring static networks The Assisted Installer supports IPv4 networking with SDN and OVN, and supports IPv6 and dual stack networking with OVN only. The Assisted Installer supports configuring the network with static network interfaces with IP address/MAC address mapping. The Assisted Installer also supports configuring host network interfaces with the NMState library, a declarative network manager API for hosts. You can use NMState to deploy hosts with static IP addressing, bonds, VLANs and other advanced networking features. First, you must set network-wide configurations. Then, you must create a host-specific configuration for each host. Note For installations on IBM Z with z/VM, ensure that the z/VM nodes and vSwitches are properly configured for static networks and NMState. Also, the z/VM nodes must have a fixed MAC address assigned as the pool MAC addresses might cause issues with NMState. Procedure Select the internet protocol version. Valid options are IPv4 and Dual stack . If the cluster hosts are on a shared VLAN, enter the VLAN ID. Enter the network-wide IP addresses. If you selected Dual stack networking, you must enter both IPv4 and IPv6 addresses. Enter the cluster network's IP address range in CIDR notation. Enter the default gateway IP address. Enter the DNS server IP address. Enter the host-specific configuration. If you are only setting a static IP address that uses a single network interface, use the form view to enter the IP address and the MAC address for each host. If you use multiple interfaces, bonding, or other advanced networking features, use the YAML view and enter the desired network state for each host that uses NMState syntax. Then, add the MAC address and interface name for each host interface used in your network configuration. Additional resources NMState version 2.1.4 3.4. 
Configuring Operators The Assisted Installer can install with certain Operators configured. The Operators include: OpenShift Virtualization Multicluster Engine (MCE) for Kubernetes OpenShift Data Foundation Logical Volume Manager (LVM) Storage Important For a detailed description of each of the Operators, together with hardware requirements, storage considerations, interdependencies, and additional installation instructions, see Additional Resources . This step is optional. You can complete the installation without selecting an Operator. Procedure To install OpenShift Virtualization, select Install OpenShift Virtualization . To install Multicluster Engine (MCE), select Install multicluster engine . To install OpenShift Data Foundation, select Install OpenShift Data Foundation . To install Logical Volume Manager, select Install Logical Volume Manager . Click to proceed to the step. Additional resources Installing the OpenShift Virtualization Operator Installing the Multicluster Engine (MCE) Operator Installing the OpenShift Data Foundation Operator 3.5. Adding hosts to the cluster You must add one or more hosts to the cluster. Adding a host to the cluster involves generating a discovery ISO. The discovery ISO runs Red Hat Enterprise Linux CoreOS (RHCOS) in-memory with an agent. Perform the following procedure for each host on the cluster. Procedure Click the Add hosts button and select the provisioning type. Select Minimal image file: Provision with virtual media to download a smaller image that will fetch the data needed to boot. The nodes must have virtual media capability. This is the recommended method for x86_64 and arm64 architectures. Select Full image file: Provision with physical media to download the larger full image. This is the recommended method for the ppc64le architecture and for the s390x architecture when installing with RHEL KVM. Select iPXE: Provision from your network server to boot the hosts using iPXE. Note If you install on RHEL KVM, in some circumstances, the VMs on the KVM host are not rebooted on first boot and need to be restarted manually. If you install OpenShift Container Platform on Oracle Cloud Infrastructure, select Minimal image file: Provision with virtual media only. Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Optional: Add an SSH public key so that you can connect to the cluster nodes as the core user. Having a login to the cluster nodes can provide you with debugging information during the installation. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access . In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu. Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy, or if the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates . Add additional certificates in X.509 format. Configure the discovery image if needed. 
Optional: If you are installing on a platform and want to integrate with the platform, select Integrate with your virtualization platform . You must boot all hosts and ensure they appear in the host inventory. All the hosts must be on the same platform. Click Generate Discovery ISO or Generate Script File . Download the discovery ISO or iPXE script. Boot the host(s) with the discovery image or iPXE script. Additional resources Configuring the discovery image for additional details. Booting hosts with the discovery image for additional details. Red Hat Enterprise Linux 9 - Configuring and managing virtualization for additional details. How to configure a VIOS Media Repository/Virtual Media Library for additional details. Adding hosts on Nutanix with the UI Adding hosts on vSphere 3.6. Configuring hosts After booting the hosts with the discovery ISO, the hosts will appear in the table at the bottom of the page. You can optionally configure the hostname and role for each host. You can also delete a host if necessary. Procedure From the Options (...) menu for a host, select Change hostname . If necessary, enter a new name for the host and click Change . You must ensure that each host has a valid and unique hostname. Alternatively, from the Actions list, select Change hostname to rename multiple selected hosts. In the Change Hostname dialog, type the new name and include {{n}} to make each hostname unique. Then click Change . Note You can see the new names appearing in the Preview pane as you type. The name will be identical for all selected hosts, with the exception of a single-digit increment per host. From the Options (...) menu, you can select Delete host to delete a host. Click Delete to confirm the deletion. Alternatively, from the Actions list, select Delete to delete multiple selected hosts at the same time. Then click Delete hosts . Note In a regular deployment, a cluster can have three or more hosts, and three of these must be control plane hosts. If you delete a host that is also a control plane, or if you are left with only two hosts, you will get a message saying that the system is not ready. To restore a host, you will need to reboot it from the discovery ISO. From the Options (...) menu for the host, optionally select View host events . The events in the list are presented chronologically. For multi-host clusters, in the Role column to the host name, you can click on the menu to change the role of the host. If you do not select a role, the Assisted Installer will assign the role automatically. The minimum hardware requirements for control plane nodes exceed that of worker nodes. If you assign a role to a host, ensure that you assign the control plane role to hosts that meet the minimum hardware requirements. Click the Status link to view hardware, network and operator validations for the host. Click the arrow to the left of a host name to expand the host details. Once all cluster hosts appear with a status of Ready , proceed to the step. 3.7. Configuring storage disks After discovering and configuring the cluster hosts, you can optionally configure the storage disks for each host. Any host configurations possible here are discussed in the Configuring Hosts section. See the additional resources below for the link. Procedure To the left of the checkbox to a host name, click to display the storage disks for that host. If there are multiple storage disks for a host, you can select a different disk to act as the installation disk. 
Click the Role dropdown list for the disk, and then select Installation disk . The role of the previous installation disk changes to None . All bootable disks are marked for reformatting during the installation by default, with the exception of read-only disks such as CDROMs. Deselect the Format checkbox to prevent a disk from being reformatted. The installation disk must be reformatted. Back up any sensitive data before proceeding. Once all disk drives appear with a status of Ready , proceed to the next step. Additional resources Configuring hosts 3.8. Configuring networking Before installing OpenShift Container Platform, you must configure the cluster network. Procedure In the Networking page, select one of the following if it is not already selected for you: Cluster-Managed Networking: Selecting cluster-managed networking means that the Assisted Installer will configure a standard network topology, including keepalived and Virtual Router Redundancy Protocol (VRRP) for managing the API and Ingress VIP addresses. Note Currently, Cluster-Managed Networking is not supported on IBM zSystems and IBM Power in OpenShift Container Platform version 4.13. Oracle Cloud Infrastructure (OCI) is available for OpenShift Container Platform 4.14 with a user-managed networking configuration only. User-Managed Networking : Selecting user-managed networking allows you to deploy OpenShift Container Platform with a non-standard network topology. For example, if you want to deploy with an external load balancer instead of keepalived and VRRP, or if you intend to deploy the cluster nodes across many distinct L2 network segments. For cluster-managed networking, configure the following settings: Define the Machine network . You can use the default network or select a subnet. Define an API virtual IP . An API virtual IP provides an endpoint for all users to interact with and configure the platform. Define an Ingress virtual IP . An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster. For user-managed networking, configure the following settings: Select your Networking stack type : IPv4 : Select this type when your hosts are only using IPv4. Dual-stack : You can select dual-stack when your hosts are using IPv4 together with IPv6. Define the Machine network . You can use the default network or select a subnet. Define an API virtual IP . An API virtual IP provides an endpoint for all users to interact with and configure the platform. Define an Ingress virtual IP . An Ingress virtual IP provides an endpoint for application traffic flowing from outside the cluster. Optional: You can select Allocate IPs via DHCP server to automatically allocate the API IP and Ingress IP using the DHCP server. Optional: Select Use advanced networking to configure the following advanced networking properties: Cluster network CIDR : Define an IP address block from which Pod IP addresses are allocated. Cluster network host prefix : Define a subnet prefix length to assign to each node. Service network CIDR : Define an IP address to use for service IP addresses. Network type : Select either Software-Defined Networking (SDN) for standard networking or Open Virtual Networking (OVN) for IPv6, dual-stack networking, and telco features. In OpenShift Container Platform 4.12 and later releases, OVN is the default Container Network Interface (CNI). Additional resources Network configuration 3.9.
Pre-installation validation The Assisted Installer ensures the cluster meets the prerequisites before installation, which eliminates complex post-installation troubleshooting and saves significant time and effort. Before installing the cluster, ensure the cluster and each host pass pre-installation validation. Additional resources Pre-installation validation 3.10. Installing the cluster After you have completed the configuration and all the nodes are Ready , you can begin installation. The installation process takes a considerable amount of time, and you can monitor the installation from the Assisted Installer web console. Nodes will reboot during the installation, and they will initialize after installation. Procedure Press Begin installation . Click on the link in the Status column of the Host Inventory list to see the installation status of a particular host. 3.11. Completing the installation After the cluster is installed and initialized, the Assisted Installer indicates that the installation is finished. The Assisted Installer provides the console URL, the kubeadmin username and password, and the kubeconfig file. Additionally, the Assisted Installer provides cluster details including the OpenShift Container Platform version, base domain, CPU architecture, API and Ingress IP addresses, and the cluster and service network IP addresses. Prerequisites You have installed the oc CLI tool. Procedure Make a copy of the kubeadmin username and password. Download the kubeconfig file and copy it to the auth directory under your working directory: $ mkdir -p <working_directory>/auth $ cp kubeconfig <working_directory>/auth Note The kubeconfig file is available for download for 24 hours after completing the installation. Add the kubeconfig file to your environment: $ export KUBECONFIG=<working_directory>/auth/kubeconfig Log in with the oc CLI tool: $ oc login -u kubeadmin -p <password> Replace <password> with the password of the kubeadmin user. Click on the web console URL or click Launch OpenShift Console to open the console. Enter the kubeadmin username and password. Follow the instructions in the OpenShift Container Platform console to configure an identity provider and configure alert receivers. Add a bookmark of the OpenShift Container Platform console. Complete any post-installation platform integration steps. Additional resources Nutanix post-installation configuration vSphere post-installation configuration | [
"mkdir -p <working_directory>/auth",
"cp kubeadmin <working_directory>/auth",
"export KUBECONFIG=<your working directory>/auth/kubeconfig",
"oc login -u kubeadmin -p <password>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/installing-with-ui |
Chapter 12. Jakarta Persistence | Chapter 12. Jakarta Persistence 12.1. About Jakarta Persistence The Jakarta Persistence is a Jakarta EE specification for accessing, persisting, and managing data between Java objects or classes and a relational database. The Jakarta Persistence specification recognizes the interest and the success of the transparent object or relational mapping paradigm. It standardizes the basic APIs and the metadata needed for any object or relational persistence mechanism. Note Jakarta Persistence itself is just a specification, not a product; it cannot perform persistence or anything else by itself. Jakarta Persistence is just a set of interfaces, and requires an implementation. 12.2. Create a Simple JPA Application Follow the procedure below to create a simple JPA application in Red Hat CodeReady Studio. Procedure Create a JPA project in Red Hat CodeReady Studio. In Red Hat CodeReady Studio, click File New Project . Find JPA in the list, expand it, and select JPA Project . You are presented with the following dialog. Figure 12.1. New JPA Project Dialog Enter a Project name . Select a Target runtime . If no target runtime is available, follow these instructions to define a new server and runtime: Downloading, Installing, and Setting Up JBoss EAP from within the IDE in the Getting Started with CodeReady Studio Tools guide. Note If you set the Target runtime to 7.4 or a later runtime version in Red Hat CodeReady Studio, your project is compatible with the Jakarta EE 8 specification. Under JPA version , ensure 2.1 is selected. Under Configuration , choose Basic JPA Configuration . Click Finish . If prompted, choose whether you wish to associate this type of project with the JPA perspective window. Create and configure a new persistence settings file. Open an EJB 3.x project in Red Hat CodeReady Studio. Right click the project root directory in the Project Explorer panel. Select New Other... . Select XML File from the XML folder and click . Select the ejbModule/META-INF/ folder as the parent directory. Name the file persistence.xml and click . Select Create XML file from an XML schema file and click . Select http://java.sun.com/xml/ns/persistence/persistence_2.0.xsd from the Select XML Catalog entry list and click . Figure 12.2. Persistence XML Schema Click Finish to create the file. The persistence.xml has been created in the META-INF/ folder and is ready to be configured. Example: Persistence Settings File <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_2.xsd" version="2.2"> <persistence-unit name="example" transaction-type="JTA"> <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider> <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source> <mapping-file>ormap.xml</mapping-file> <jar-file>TestApp.jar</jar-file> <class>org.test.Test</class> <shared-cache-mode>NONE</shared-cache-mode> <validation-mode>CALLBACK</validation-mode> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/> <property name="hibernate.hbm2ddl.auto" value="create-drop"/> </properties> </persistence-unit> </persistence> 12.3. Jakarta Persistence Entities Once you have established the connection from your application to the database, you can start mapping the data in the database to Java objects. 
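As a minimal sketch of such a mapping, an annotated Java class can represent a single database table. The table and column names below are illustrative assumptions; the User class itself mirrors the entity used in the EntityManager examples later in this chapter:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "USERS")          // maps the class to the USERS table
public class User {

    @Id
    @GeneratedValue             // primary key value generated by the persistence provider
    private Long id;

    @Column(name = "USERNAME", unique = true)   // maps the field to the USERNAME column
    private String username;

    public Long getId() {
        return id;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }
}
```

Each instance of the class then corresponds to a row in the table, and each mapped field to a column, as summarized in the mapping list that follows.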
Java objects that are used to map against database tables are called entity objects. Entities have relationships with other entities, which are expressed through object-relational metadata. The object-relational metadata can be specified either directly in the entity class file by using annotations, or in an XML descriptor file called persistence.xml included with the application. The high-level mapping of Java objects to the database is as follows: Java classes map to the database tables. Java instances map to the database rows. Java fields map to the database columns. 12.4. Persistence Context The Jakarta Persistence persistence context contains the entities managed by the persistence provider. The persistence context acts like a first level transactional cache for interacting with the datasource. It manages the entity instances and their lifecycle. Loaded entities are placed into the persistence context before being returned to the application. Entity changes are also placed into the persistence context to be saved in the database when the transaction commits. The lifetime of a container-managed persistence context can either be scoped to a transaction, which is referred to as a transaction-scoped persistence context, or have a lifetime scope that extends beyond that of a single transaction, which is referred to as an extended persistence context. The PersistenceContextType property, which has the enum datatype, is used to define the persistence context lifetime scope for container-managed entity managers. The persistence context lifetime scope is defined when the EntityManager instance is created. 12.4.1. Transaction-Scoped Persistence Context The transaction-scoped persistence context works with the active Jakarta Transactions transaction. When the transaction commits, the persistence context is flushed to the datasource; the entity objects are detached but might still be referenced by the application code. All the entity changes that are expected to be saved to the datasource must be made during a transaction. Entities that are read outside the transaction are detached when the EntityManager invocation completes. 12.4.2. Extended Persistence Context The extended persistence context spans multiple transactions and allows data modifications to be queued without an active Jakarta Transactions transaction. The container-managed extended persistence context can only be injected into a stateful session bean. 12.5. Jakarta Persistence EntityManager Jakarta Persistence entity manager represents a connection to the persistence context. You can read from and write to the database defined by the persistence context using the entity manager. Persistence context is provided through the Java annotation @PersistenceContext in the javax.persistence package. The entity manager is provided through the Java class javax.persistence.EntityManager . In any managed bean, the EntityManager instance can be injected as shown below: Example: Entity Manager Injection @Stateless public class UserBean { @PersistenceContext EntityManager entitymanager; ... } 12.5.1. Application-Managed EntityManager Application-managed entity managers provide direct access to the underlying persistence provider, org.hibernate.jpa.HibernatePersistenceProvider . The scope of the application-managed entity manager is from when the application creates it and lasts until the application closes it. 
You can use the @PersistenceUnit annotation to inject a persistence unit into the javax.persistence.EntityManagerFactory interface, which returns an application-managed entity manager. Application-managed entity managers can be used when your application needs to access a persistence context that is not propagated with the Jakarta Transactions transaction across EntityManager instances in a particular persistence unit. In this case, each EntityManager instance creates a new, isolated persistence context. The EntityManager instance and its associated PersistenceContext is created and destroyed explicitly by your application. Application-managed entity managers can also be used when you cannot inject EntityManager instances directly, because the EntityManager instances are not thread-safe. EntityManagerFactory instances are thread-safe. Example: Application-Managed Entity Manager @PersistenceUnit EntityManagerFactory emf; EntityManager em; @Resource UserTransaction utx; ... em = emf.createEntityManager(); try { utx.begin(); em.persist(SomeEntity); em.merge(AnotherEntity); em.remove(ThirdEntity); utx.commit(); } catch (Exception e) { utx.rollback(); } 12.5.2. Container-Managed EntityManager Container-managed entity managers manage the underlying persistence provider for the application. They can use the transaction-scoped persistence contexts or the extended persistence contexts. The container-managed entity manager creates instances of the underlying persistence provider as needed. Every time a new underlying persistence provider org.hibernate.jpa.HibernatePersistenceProvider instance is created, a new persistence context is also created. 12.6. Working with the EntityManager When you have the persistence.xml file located in the /META-INF directory, the entity manager is loaded and has an active connection to the database. The EntityManager property can be used to bind the entity manager to JNDI and to add, update, remove and query entities. Important If you plan to use a security manager with Hibernate, be aware that Hibernate supports it only when EntityManagerFactory is bootstrapped by the JBoss EAP server. It is not supported when the EntityManagerFactory or SessionFactory is bootstrapped by the application. See Java Security Manager in How to Configure Server Security for more information about security managers. 12.6.1. Binding the EntityManager to JNDI By default, JBoss EAP does not bind the EntityManagerFactory to JNDI. You can explicitly configure this in the persistence.xml file of your application by setting the jboss.entity.manager.factory.jndi.name property. The value of this property should be the JNDI name to which you want to bind the EntityManagerFactory . You can also bind a container-managed transaction-scoped entity manager to JNDI by using the jboss.entity.manager.jndi.name property. 
Example: Binding the EntityManager and the EntityManagerFactory to JNDI <property name="jboss.entity.manager.jndi.name" value="java:/MyEntityManager"/> <property name="jboss.entity.manager.factory.jndi.name" value="java:/MyEntityManagerFactory"/> Example: Storing an Entity using the EntityManager public User createUser(User user) { entityManager.persist(user); return user; } Example: Updating an Entity using the EntityManager public void updateUser(User user) { entityManager.merge(user); } Example: Removing an Entity using the EntityManager public void deleteUser(String user) { User user = findUser(username); if (user != null) entityManager.remove(user); } Example: Querying an Entity using the EntityManager public User findUser(String username) { CriteriaBuilder builder = entityManager.getCriteriaBuilder(); CriteriaQuery<User> criteria = builder.createQuery(User.class); Root<User> root = criteria.from(User.class); TypedQuery<User> query = entityManager .createQuery(criteria.select(root).where( builder.equal(root.<String> get("username"), username))); try { return query.getSingleResult(); } catch (NoResultException e) { return null; } } 12.7. Deploying the Persistence Unit A persistence unit is a logical grouping that includes: Configuration information for an entity manager factory and its entity managers. Classes managed by the entity managers. Mapping metadata specifying the mapping of the classes to the database. The persistence.xml file contains persistence unit configuration, including the datasource name. The JAR file or the directory whose /META-INF/ directory contains the persistence.xml file is termed as the root of the persistence unit. In Jakarta EE environments, the root of the persistence unit must be one of the following: An EJB-JAR file The /WEB-INF/classes/ directory of a WAR file A JAR file in the /WEB-INF/lib/ directory of a WAR file A JAR file in the EAR library directory An application client JAR file Example: Persistence Settings File <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_2.xsd" version="2.2"> <persistence-unit name="example" transaction-type="JTA"> <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider> <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source> <mapping-file>ormap.xml</mapping-file> <jar-file>TestApp.jar</jar-file> <class>org.test.Test</class> <shared-cache-mode>NONE</shared-cache-mode> <validation-mode>CALLBACK</validation-mode> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/> <property name="hibernate.hbm2ddl.auto" value="create-drop"/> </properties> </persistence-unit> </persistence> 12.8. Second-level Caches 12.8.1. About Second-level Caches A second-level cache is a local data store that holds information persisted outside the application session. The cache is managed by the persistence provider, improving runtime by keeping the data separate from the application. JBoss EAP supports caching for the following purposes: Web Session Clustering Stateful Session Bean Clustering SSO Clustering Hibernate Second-level Cache Jakarta Persistence Second-level Cache Warning Each cache container defines a repl and a dist cache. These caches should not be used directly by user applications. 12.8.1.1. Default Second-level Cache Provider Infinispan is the default second-level cache provider for JBoss EAP. 
Infinispan is a distributed in-memory key/value data store with optional schema, available under the Apache License 2.0. 12.8.1.1.1. Configuring a Second-level Cache in the Persistence Unit Note To ensure compatibility with future JBoss EAP releases, cache configuration should be customized using the Infinispan subsystem rather than persistence.xml property overrides. You can use the shared-cache-mode element of the persistence unit to configure the second-level cache. See Create a Simple Jakarta Persistence Application to create the persistence.xml file in Red Hat CodeReady Studio. Add the following to the persistence.xml file: <persistence-unit name="..."> (...) <!-- other configuration --> <shared-cache-mode> SHARED_CACHE_MODE </shared-cache-mode> <properties> <property name="hibernate.cache.use_second_level_cache" value="true" /> <property name="hibernate.cache.use_query_cache" value="true" /> </properties> </persistence-unit> The SHARED_CACHE_MODE element can take the following values: ALL : All entities should be considered cacheable. NONE : No entities should be considered cacheable. ENABLE_SELECTIVE : Only entities marked as cacheable should be considered cacheable. DISABLE_SELECTIVE : All entities except the ones explicitly marked as not cacheable should be considered cacheable. UNSPECIFIED : Behavior is not defined. Provider-specific defaults are applicable. Example: Changing the properties of entity and local-query caches using persistence.xml <persistence ... version="2.2"> <persistence-unit ...> ... <properties> <!-- Values below are not recommendations. Appropriate values should be determined based on system use/capacity. --> <!-- entity default overrides --> <property name="hibernate.cache.infinispan.entity.memory.size" value="5000"/> <property name="hibernate.cache.infinispan.entity.expiration.max_idle" value="300000"/> <!-- 5 minutes --> <property name="hibernate.cache.infinispan.entity.expiration.lifespan" value="1800000"/> <!-- 30 minutes --> <property name="hibernate.cache.infinispan.entity.expiration.wake_up_interval" value="300000"/> <!-- 5 minutes --> <!-- local-query default overrides --> <property name="hibernate.cache.infinispan.query.memory.size" value="5000"/> <property name="hibernate.cache.infinispan.query.expiration.max_idle" value="300000"/> <!-- 5 minutes --> <property name="hibernate.cache.infinispan.query.expiration.lifespan" value="1800000"/> <!-- 30 minutes --> <property name="hibernate.cache.infinispan.query.expiration.wake_up_interval" value="300000"/> <!-- 5 minutes --> </properties> </persistence-unit> </persistence> Table 12.1. Properties of entity and local-query caches Property Description memory.size Denotes the object-memory size. expiration.max_idle Denotes the maximum idle time (in milliseconds) a cache entry is maintained in the cache. expiration.lifespan Denotes the maximum lifespan (in milliseconds) after which a cache entry is expired. Defaults to 60 seconds. Infinite lifespan may be specified using -1. expiration.wake_up_interval Denotes the interval (in milliseconds) between subsequent runs to purge expired entries from the cache. Expiration can be disabled using -1. | [
"<persistence xmlns=\"http://java.sun.com/xml/ns/persistence\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_2.xsd\" version=\"2.2\"> <persistence-unit name=\"example\" transaction-type=\"JTA\"> <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider> <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source> <mapping-file>ormap.xml</mapping-file> <jar-file>TestApp.jar</jar-file> <class>org.test.Test</class> <shared-cache-mode>NONE</shared-cache-mode> <validation-mode>CALLBACK</validation-mode> <properties> <property name=\"hibernate.dialect\" value=\"org.hibernate.dialect.H2Dialect\"/> <property name=\"hibernate.hbm2ddl.auto\" value=\"create-drop\"/> </properties> </persistence-unit> </persistence>",
"@Stateless public class UserBean { @PersistenceContext EntityManager entitymanager; }",
"@PersistenceUnit EntityManagerFactory emf; EntityManager em; @Resource UserTransaction utx; em = emf.createEntityManager(); try { utx.begin(); em.persist(SomeEntity); em.merge(AnotherEntity); em.remove(ThirdEntity); utx.commit(); } catch (Exception e) { utx.rollback(); }",
"<property name=\"jboss.entity.manager.jndi.name\" value=\"java:/MyEntityManager\"/> <property name=\"jboss.entity.manager.factory.jndi.name\" value=\"java:/MyEntityManagerFactory\"/>",
"public User createUser(User user) { entityManager.persist(user); return user; }",
"public void updateUser(User user) { entityManager.merge(user); }",
"public void deleteUser(String user) { User user = findUser(username); if (user != null) entityManager.remove(user); }",
"public User findUser(String username) { CriteriaBuilder builder = entityManager.getCriteriaBuilder(); CriteriaQuery<User> criteria = builder.createQuery(User.class); Root<User> root = criteria.from(User.class); TypedQuery<User> query = entityManager .createQuery(criteria.select(root).where( builder.equal(root.<String> get(\"username\"), username))); try { return query.getSingleResult(); } catch (NoResultException e) { return null; } }",
"<persistence xmlns=\"http://java.sun.com/xml/ns/persistence\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_2.xsd\" version=\"2.2\"> <persistence-unit name=\"example\" transaction-type=\"JTA\"> <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider> <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source> <mapping-file>ormap.xml</mapping-file> <jar-file>TestApp.jar</jar-file> <class>org.test.Test</class> <shared-cache-mode>NONE</shared-cache-mode> <validation-mode>CALLBACK</validation-mode> <properties> <property name=\"hibernate.dialect\" value=\"org.hibernate.dialect.H2Dialect\"/> <property name=\"hibernate.hbm2ddl.auto\" value=\"create-drop\"/> </properties> </persistence-unit> </persistence>",
"<persistence-unit name=\"...\"> (...) <!-- other configuration --> <shared-cache-mode> SHARED_CACHE_MODE </shared-cache-mode> <properties> <property name=\"hibernate.cache.use_second_level_cache\" value=\"true\" /> <property name=\"hibernate.cache.use_query_cache\" value=\"true\" /> </properties> </persistence-unit>",
"<persistence ... version=\"2.2\"> <persistence-unit ...> <properties> <!-- Values below are not recommendations. Appropriate values should be determined based on system use/capacity. --> <!-- entity default overrides --> <property name=\"hibernate.cache.infinispan.entity.memory.size\" value=\"5000\"/> <property name=\"hibernate.cache.infinispan.entity.expiration.max_idle\" value=\"300000\"/> <!-- 5 minutes --> <property name=\"hibernate.cache.infinispan.entity.expiration.lifespan\" value=\"1800000\"/> <!-- 30 minutes --> <property name=\"hibernate.cache.infinispan.entity.expiration.wake_up_interval\" value=\"300000\"/> <!-- 5 minutes --> <!-- local-query default overrides --> <property name=\"hibernate.cache.infinispan.query.memory.size\" value=\"5000\"/> <property name=\"hibernate.cache.infinispan.query.expiration.max_idle\" value=\"300000\"/> <!-- 5 minutes --> <property name=\"hibernate.cache.infinispan.query.expiration.lifespan\" value=\"1800000\"/> <!-- 30 minutes --> <property name=\"hibernate.cache.infinispan.query.expiration.wake_up_interval\" value=\"300000\"/> <!-- 5 minutes --> </properties> </persistence-unit> </persistence>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/java_persistence_api |
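When the shared-cache-mode element described above is set to ENABLE_SELECTIVE, only entities that explicitly opt in are placed in the second-level cache. The following is a minimal, illustrative sketch; the Country entity and its fields are hypothetical, and the annotation used is the standard javax.persistence.Cacheable.
Example: Marking an Entity as Cacheable
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
@Cacheable  // opts this entity in to the second-level cache under ENABLE_SELECTIVE
public class Country {
    @Id
    private Long id;
    private String name;

    // getters and setters omitted for brevity
}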
Chapter 6. Managing remote systems in the web console | Chapter 6. Managing remote systems in the web console You can connect to the remote systems and manage them in the RHEL 8 web console. You learn: The optimal topology of connected systems. How to add and remove remote systems. When, why, and how to use SSH keys for remote system authentication. How to configure a web console client to allow a user authenticated with a smart card to SSH to a remote host and access services on it. Prerequisites The SSH service is running on remote systems. 6.1. Remote system manager in the web console For security reasons, use the following network setup of remote systems managed by the RHEL 8 web console: Configure one system with the web console as a bastion host. The bastion host is a system with an open HTTPS port. All other systems communicate through SSH. With the web interface running on the bastion host, you can reach all other systems through the SSH protocol using port 22 in the default configuration. 6.2. Adding remote hosts to the web console In the RHEL web console, you can manage remote systems after you add them with the corresponding credentials. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the RHEL 8 web console, click your <username> @ <hostname> in the top left corner of the Overview page. From the drop-down menu, click Add new host . In the Add new host dialog box, specify the host you want to add. Optional: Add the user name for the account to which you want to connect. You can use any user account of the remote system. However, if you use the credentials of a user account without administration privileges, you cannot perform administration tasks. If you use the same credentials as on your local system, the web console authenticates to remote systems automatically every time you log in. Note that using the same credentials on multiple systems weakens security. Optional: Click the Color field to change the color of the system. Click Add . Important The web console does not save passwords used to log in to remote systems, which means that you must log in again after each system restart. The next time you log in, click Log in on the main screen of the disconnected remote system to open the login dialog. Verification The new host is listed in the <username> @ <hostname> drop-down menu. 6.3. Enabling SSH login for a new host When you add a new host to the web console, you can also log in to the host with an SSH key. If you already have an SSH key on your system, the web console uses the existing one; otherwise, the web console can create a key. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the RHEL 8 web console, click your <username> @ <hostname> in the top left corner of the Overview page. From the drop-down menu, click Add new host . In the Add new host dialog box, specify the host you want to add. Add the user name for the account to which you want to connect. You can use any user account of the remote system. However, if you use a user account without administration privileges, you cannot perform administration tasks. Optional: Click the Color field to change the color of the system. Click Add . A new dialog window appears asking for a password.
Enter the user account password. Check Authorize SSH key if you already have an SSH key. Check Create a new SSH key and authorize it if you do not have an SSH key. The web console creates the key. Add a password for the SSH key. Confirm the password. Click Log in . Verification Log out. Log back in. Click Log in in the Not connected to host screen. Select SSH key as your authentication option. Enter your key password. Click Log in . Additional resources Using secure communications between two systems with OpenSSH 6.4. Configuring a web console to allow a user authenticated with a smart card to SSH to a remote host without being asked to authenticate again After you have logged in to a user account on the RHEL web console, as an Identity Management (IdM) system administrator you might need to connect to remote machines by using the SSH protocol. You can use the constrained delegation feature to use SSH without being asked to authenticate again. Follow this procedure to configure the web console to use constrained delegation. In the example below, the web console session runs on the myhost.idm.example.com host and it is being configured to access the remote.idm.example.com host by using SSH on behalf of the authenticated user. Prerequisites You have obtained an IdM admin ticket-granting ticket (TGT). You have root access to remote.idm.example.com . The web console service is present in IdM. The remote.idm.example.com host is present in IdM. The web console has created an S4U2Proxy Kerberos ticket in the user session. To verify that this is the case, log in to the web console as an IdM user, open the Terminal page, and enter: Procedure Create a list of the target hosts that can be accessed by the delegation rule: Create a service delegation target: Add the target host to the delegation target: Allow cockpit sessions to access the target host list by creating a service delegation rule and adding the HTTP service Kerberos principal to it: Create a service delegation rule: Add the web console client to the delegation rule: Add the delegation target to the delegation rule: Enable Kerberos authentication on the remote.idm.example.com host: SSH to remote.idm.example.com as root . Open the /etc/ssh/sshd_config file for editing. Enable GSSAPIAuthentication by uncommenting the GSSAPIAuthentication no line and replacing it with GSSAPIAuthentication yes . Restart the SSH service on remote.idm.example.com so that the above changes take effect immediately: Additional resources Logging in to the web console with smart cards Constrained delegation in Identity Management 6.5. Using Ansible to configure a web console to allow a user authenticated with a smart card to SSH to a remote host without being asked to authenticate again After you have logged in to a user account on the RHEL web console, as an Identity Management (IdM) system administrator you might need to connect to remote machines by using the SSH protocol. You can use the constrained delegation feature to use SSH without being asked to authenticate again. Follow this procedure to use the servicedelegationrule and servicedelegationtarget ansible-freeipa modules to configure a web console to use constrained delegation. In the example below, the web console session runs on the myhost.idm.example.com host and it is being configured to access the remote.idm.example.com host by using SSH on behalf of the authenticated user. Prerequisites The IdM admin password. root access to remote.idm.example.com . The web console service is present in IdM. 
The remote.idm.example.com host is present in IdM. The web console has created an S4U2Proxy Kerberos ticket in the user session. To verify that this is the case, log in to the web console as an IdM user, open the Terminal page, and enter: You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Create a web-console-smart-card-ssh.yml playbook with the following content: Create a task that ensures the presence of a delegation target: Add a task that adds the target host to the delegation target: Add a task that ensures the presence of a delegation rule: Add a task that ensures that the Kerberos principal of the web console client service is a member of the constrained delegation rule: Add a task that ensures that the constrained delegation rule is associated with the web-console-delegation-target delegation target: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Enable Kerberos authentication on remote.idm.example.com : SSH to remote.idm.example.com as root . Open the /etc/ssh/sshd_config file for editing. Enable GSSAPIAuthentication by uncommenting the GSSAPIAuthentication no line and replacing it with GSSAPIAuthentication yes . Additional resources Logging in to the web console with smart cards Constrained delegation in Identity Management README-servicedelegationrule.md and README-servicedelegationtarget.md in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/servicedelegationtarget and /usr/share/doc/ansible-freeipa/playbooks/servicedelegationrule directories | [
"klist Ticket cache: FILE:/run/user/1894000001/cockpit-session-3692.ccache Default principal: [email protected] Valid starting Expires Service principal 07/30/21 09:19:06 07/31/21 09:19:06 HTTP/[email protected] 07/30/21 09:19:06 07/31/21 09:19:06 krbtgt/[email protected] for client HTTP/[email protected]",
"ipa servicedelegationtarget-add cockpit-target",
"ipa servicedelegationtarget-add-member cockpit-target --principals=host/[email protected]",
"ipa servicedelegationrule-add cockpit-delegation",
"ipa servicedelegationrule-add-member cockpit-delegation --principals=HTTP/[email protected]",
"ipa servicedelegationrule-add-target cockpit-delegation --servicedelegationtargets=cockpit-target",
"systemctl try-restart sshd.service",
"klist Ticket cache: FILE:/run/user/1894000001/cockpit-session-3692.ccache Default principal: [email protected] Valid starting Expires Service principal 07/30/21 09:19:06 07/31/21 09:19:06 HTTP/[email protected] 07/30/21 09:19:06 07/31/21 09:19:06 krbtgt/[email protected] for client HTTP/[email protected]",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to create a constrained delegation target hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure servicedelegationtarget web-console-delegation-target is present ipaservicedelegationtarget: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-target",
"- name: Ensure servicedelegationtarget web-console-delegation-target member principal host/[email protected] is present ipaservicedelegationtarget: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-target principal: host/[email protected] action: member",
"- name: Ensure servicedelegationrule delegation-rule is present ipaservicedelegationrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-rule",
"- name: Ensure the Kerberos principal of the web console client service is added to the servicedelegationrule web-console-delegation-rule ipaservicedelegationrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-rule principal: HTTP/myhost.idm.example.com action: member",
"- name: Ensure a constrained delegation rule is associated with a specific delegation target ipaservicedelegationrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web-console-delegation-rule target: web-console-delegation-target action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory web-console-smart-card-ssh.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_systems_using_the_rhel_8_web_console/managing-remote-systems-in-the-web-console_system-management-using-the-rhel-8-web-console |
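The sshd_config change described in the procedures above can also be applied non-interactively. The following sketch is illustrative only; it assumes the commented-out GSSAPIAuthentication no line that the procedure describes and should be reviewed before use on a production host. The restart command is the same one shown in the procedure.
# On remote.idm.example.com, as root
sed -i 's/^#\?GSSAPIAuthentication no/GSSAPIAuthentication yes/' /etc/ssh/sshd_config
grep '^GSSAPIAuthentication' /etc/ssh/sshd_config
systemctl try-restart sshd.service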
Builds | Builds OpenShift Container Platform 4.12 Builds Red Hat OpenShift Documentation Team | [
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"",
"source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4",
"source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1",
"source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar",
"oc secrets link builder dockerhub",
"source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3",
"source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'",
"kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'",
"apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"",
"oc set build-secret --source bc/sample-build basicsecret",
"oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>",
"[http] sslVerify=false",
"cat .gitconfig",
"[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt",
"oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt --from-file=client.key=/var/run/secrets/openshift.io/source/client.key",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth",
"ssh-keygen -t ed25519 -C \"[email protected]\"",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth",
"cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt",
"oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth",
"oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"oc create -f <filename>",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <your_yaml_file>.yaml",
"oc logs secret-example-pod",
"oc delete pod secret-example-pod",
"apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username",
"oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>",
"apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>",
"oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth",
"apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"",
"FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]",
"#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar",
"#!/bin/sh exec java -jar app.jar",
"FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]",
"auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"",
"oc set build-secret --push bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"",
"oc set build-secret --pull bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret",
"spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"",
"spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"",
"spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"",
"strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: \"debian:latest\"",
"strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile",
"dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"dockerStrategy: buildArgs: - name: \"foo\" value: \"bar\"",
"strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers",
"spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1",
"sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"",
"#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd",
"#!/bin/bash run the application /opt/application/run.sh",
"#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd",
"#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF",
"spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value",
"strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"",
"strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"",
"customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"oc set env <enter_variables>",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1",
"jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"",
"oc project <project_name>",
"oc new-app jenkins-ephemeral 1",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline",
"def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }",
"oc create -f nodejs-sample-pipeline.yaml",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml",
"oc start-build nodejs-sample-pipeline",
"FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]",
"FROM registry.access.redhat.com/ubi8/ubi RUN touch /tmp/build",
"#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}",
"oc new-build --binary --strategy=docker --name custom-builder-image",
"oc start-build custom-builder-image --from-dir . -F",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest",
"oc create -f buildconfig.yaml",
"kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}",
"oc create -f imagestream.yaml",
"oc start-build sample-custom-build -F",
"oc start-build <buildconfig_name>",
"oc start-build --from-build=<build_name>",
"oc start-build <buildconfig_name> --follow",
"oc start-build <buildconfig_name> --env=<key>=<value>",
"oc start-build hello-world --from-repo=../hello-world --commit=v2",
"oc cancel-build <build_name>",
"oc cancel-build <build1_name> <build2_name> <build3_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc delete bc <BuildConfigName>",
"oc delete --cascade=false bc <BuildConfigName>",
"oc describe build <build_name>",
"oc describe build <build_name>",
"oc logs -f bc/<buildconfig_name>",
"oc logs --version=<number> bc/<buildconfig_name>",
"sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1",
"type: \"GitHub\" github: secretReference: name: \"mysecret\"",
"- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx",
"type: \"GitHub\" github: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"oc describe bc/<name-of-your-BuildConfig>",
"<https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab",
"oc describe bc <name>",
"curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab",
"type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket",
"oc describe bc <name>",
"curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket",
"type: \"Generic\" generic: secretReference: name: \"mysecret\" allowEnv: true 1",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"",
"curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"oc describe bc <name>",
"kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"",
"type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"",
"type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1",
"Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.",
"type: \"ConfigChange\"",
"oc set triggers bc <name> --from-github",
"oc set triggers bc <name> --from-image='<image>'",
"oc set triggers bc <name> --from-bitbucket --remove",
"oc set triggers --help",
"postCommit: script: \"bundle exec rake test --verbose\"",
"postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]",
"postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]",
"oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose",
"oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2",
"resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"",
"spec: completionDeadlineSeconds: 1800",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2",
"oc tag --source=docker registry.redhat.io/ubi8/ubi:latest ubi:latest -n openshift",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi8/ubi:latest name: latest referencePolicy: type: Source",
"oc tag --source=docker registry.redhat.io/ubi8/ubi:latest ubi:latest",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi8/ubi:latest name: latest referencePolicy: type: Source",
"RUN rm /etc/rhsm-host",
"strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement",
"FROM registry.redhat.io/ubi8/ubi:latest RUN dnf search kernel-devel --showduplicates && dnf install -y kernel-devel",
"[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem",
"oc create configmap yum-repos-d --from-file /path/to/satellite.repo",
"strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement",
"FROM registry.redhat.io/ubi8/ubi:latest RUN dnf search kernel-devel --showduplicates && dnf install -y kernel-devel",
"oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF",
"oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: my-csi-bc namespace: my-csi-app-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi8/ubi:latest RUN ls -la /etc/pki/entitlement RUN rm /etc/rhsm-host RUN yum repolist --disablerepo=* RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms RUN yum -y update RUN yum install -y openshift-clients.x86_64 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: my-csi-shared-secret source: csi: driver: csi.sharedresource.openshift.io readOnly: true volumeAttributes: sharedSecret: my-share-bc type: CSI",
"oc start-build my-csi-bc -F",
"build.build.openshift.io/my-csi-bc-1 started Caching blobs under \"/var/cache/blobs\". Pulling image registry.redhat.io/ubi8/ubi:latest Trying to pull registry.redhat.io/ubi8/ubi:latest Getting image source signatures Copying blob sha256:5dcbdc60ea6b60326f98e2b49d6ebcb7771df4b70c6297ddf2d7dede6692df6e Copying blob sha256:8671113e1c57d3106acaef2383f9bbfe1c45a26eacb03ec82786a494e15956c3 Copying config sha256:b81e86a2cb9a001916dc4697d7ed4777a60f757f0b8dcc2c4d8df42f2f7edb3a Writing manifest to image destination Storing signatures Adding transient rw bind mount for /run/secrets/rhsm STEP 1/9: FROM registry.redhat.io/ubi8/ubi:latest STEP 2/9: RUN ls -la /etc/pki/entitlement total 360 drwxrwxrwt. 2 root root 80 Feb 3 20:28 . drwxr-xr-x. 10 root root 154 Jan 27 15:53 .. -rw-r--r--. 1 root root 3243 Feb 3 20:28 entitlement-key.pem -rw-r--r--. 1 root root 362540 Feb 3 20:28 entitlement.pem time=\"2022-02-03T20:28:32Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 1ef7c6d8c1a STEP 3/9: RUN rm /etc/rhsm-host time=\"2022-02-03T20:28:33Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> b1c61f88b39 STEP 4/9: RUN yum repolist --disablerepo=* Updating Subscription Management repositories. --> b067f1d63eb STEP 5/9: RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms Repository 'rhocp-4.9-for-rhel-8-x86_64-rpms' is enabled for this system. time=\"2022-02-03T20:28:40Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 03927607ebd STEP 6/9: RUN yum -y update Updating Subscription Management repositories. Upgraded: systemd-239-51.el8_5.3.x86_64 systemd-libs-239-51.el8_5.3.x86_64 systemd-pam-239-51.el8_5.3.x86_64 Installed: diffutils-3.6-6.el8.x86_64 libxkbcommon-0.9.1-1.el8.x86_64 xkeyboard-config-2.28-1.el8.noarch Complete! time=\"2022-02-03T20:29:05Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> db57e92ff63 STEP 7/9: RUN yum install -y openshift-clients.x86_64 Updating Subscription Management repositories. Installed: bash-completion-1:2.7-5.el8.noarch libpkgconf-1.4.2-1.el8.x86_64 openshift-clients-4.9.0-202201211735.p0.g3f16530.assembly.stream.el8.x86_64 pkgconf-1.4.2-1.el8.x86_64 pkgconf-m4-1.4.2-1.el8.noarch pkgconf-pkg-config-1.4.2-1.el8.x86_64 Complete! time=\"2022-02-03T20:29:19Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 609507b059e STEP 8/9: ENV \"OPENSHIFT_BUILD_NAME\"=\"my-csi-bc-1\" \"OPENSHIFT_BUILD_NAMESPACE\"=\"my-csi-app-namespace\" --> cab2da3efc4 STEP 9/9: LABEL \"io.openshift.build.name\"=\"my-csi-bc-1\" \"io.openshift.build.namespace\"=\"my-csi-app-namespace\" COMMIT temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca --> 821b582320b Successfully tagged temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca 821b582320b41f1d7bab4001395133f86fa9cc99cc0b2b64c5a53f2b6750db91 Build complete, no image push requested",
"oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite",
"oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated",
"oc get clusterrole admin -o yaml | grep \"builds/docker\"",
"oc get clusterrole edit -o yaml | grep \"builds/docker\"",
"oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser",
"oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject",
"oc edit build.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists",
"requested access to the resource is denied",
"oc describe quota",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"oc create configmap registry-cas -n openshift-config --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-cas\"}}}' --type=merge"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/builds/index |
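The examples above include a BuildConfig named bc-ict-example whose status records each image change trigger, including the name, namespace, and lastTriggeredImageID of every trigger. To relate a completed build back to those trigger entries, the fields can be read straight from the status. The query below is an illustrative sketch that reuses the object name and namespace from that example:
oc get bc bc-ict-example -n bc-ict-example-namespace \
  -o jsonpath='{range .status.imageChangeTriggers[*]}{.from.name}{"\t"}{.lastTriggeredImageID}{"\n"}{end}'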
Appendix C. Using AMQ Broker with the examples | Appendix C. Using AMQ Broker with the examples The AMQ Core Protocol JMS examples require a running message broker with a queue named exampleQueue . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named exampleQueue . USD <broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2020-10-08 11:28:44 UTC | [
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_core_protocol_jms_client/using_the_broker_with_the_examples |
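The start, queue-create, and stop steps in this appendix can be scripted; the sketch below is a minimal, hedged automation of them. The instance directory and log file paths are assumptions, and piping yes N into the queue-create command simply answers N to every interactive prompt, as the appendix recommends.

#!/usr/bin/env bash
# Sketch: start the broker, wait for "Server is now live", then create exampleQueue.
BROKER_INSTANCE_DIR="${BROKER_INSTANCE_DIR:-$HOME/example-broker}"   # assumption
LOG=/tmp/broker.log                                                  # assumption
"$BROKER_INSTANCE_DIR/bin/artemis" run > "$LOG" 2>&1 &
until grep -q "Server is now live" "$LOG"; do sleep 1; done
yes N | "$BROKER_INSTANCE_DIR/bin/artemis" queue create \
  --name exampleQueue --address exampleQueue --auto-create-address --anycast
# When finished with the examples:
# "$BROKER_INSTANCE_DIR/bin/artemis" stop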
Chapter 7. File Systems | Chapter 7. File Systems The autofs package now contains the README.autofs-schema file and an updated schema The samples/autofs.schema distribution file was out of date and incorrect. As a consequence, it is possible that somebody is using an incorrect LDAP schema. However, a change of the schema in use cannot be enforced. With this update: The README.autofs-schema file has been added to describe the problem and recommend which schema to use, if possible. The schema included in the autofs package has been updated to samples/autofs.schema.new . (BZ# 703846 ) A stale dentry object is no longer left in the dentry cache after a rename operation On an NFS file system, a stale dentry object was left in the dentry cache after a rename operation that replaced an existing object. As a consequence, if either the old or the new name contained 32 characters or more, the entry with the old name appeared accessible. The underlying source code has been modified to unhash the stale dentry . As a result, a rename operation no longer causes a stale dentry object to occur. (BZ#1080701) autofs mounts no longer enter an infinite loop after reaching a shutdown state If an autofs mount reached a shutdown state, and a mount request arrived and was processed before the mount-handling thread read the shutdown notification, the mount-handling thread exited without cleaning up the autofs mount. As a consequence, the main program never reached its exit condition and entered an infinite loop, as the autofs-managed mount was left mounted. To fix this bug, the exit condition check now takes place after each request is processed, and cleanup operations are now performed if an autofs mount has reached its shutdown state. As a result, the autofs daemon now exits as expected at shutdown. (BZ#1277033) automount no longer needs to be restarted to access maps stored on the NIS server Previously, the autofs utility did not wait for the NIS client service when starting. As a consequence, if the network map source was not available at program start, the master map could not be read, and the automount service had to be restarted to access maps stored on the NIS server. With this update, autofs waits until the master map is available to obtain a startup map. As a result, automount can access the map from the NIS domain, and autofs no longer needs to be restarted on every boot. If the NIS maps are still not available after the configured wait time, the autofs configuration master_wait option might need to be increased. In the majority of cases, the wait time used by the package is sufficient. (BZ#1350786) Setting the retry timeout can now prevent autofs from starting without mounts from SSSD When starting the autofs utility, the sss map source was previously sometimes not ready to provide map information, but sss did not return an appropriate error to distinguish between a map does not exist and a not available condition. As a consequence, automounting did not work correctly, and autofs started without mounts from SSSD. To fix this bug, autofs retries asking SSSD for the master map when the map does not exist error occurs for a configurable amount of time. Now, you can set the retry timeout to a suitable value so that the master map is read and autofs starts as expected. (BZ#1384404) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/bug_fixes_file_systems |
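When checking whether the map-source fixes described above behave as expected on a given host, a quick way to see what automount actually loaded is sketched below. This is a hedged suggestion rather than part of the errata: automount -m dumps the maps the daemon would use, and an empty dump right after boot points at a map source (NIS or SSSD) that was not ready, in which case the master_wait or retry settings mentioned above may need tuning.

# Sketch: confirm autofs obtained its master map from the configured source.
service autofs status
automount -m        # dump the loaded maps; empty output suggests the map source was unavailable at startup
# After raising the wait/retry settings, restart the service and re-check:
service autofs restart
automount -m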
Chapter 7. Ceph user management | Chapter 7. Ceph user management As a storage administrator, you can manage the Ceph user base by providing authentication, and access control to objects in the Red Hat Ceph Storage cluster. Important Cephadm manages the client keyrings for the Red Hat Ceph Storage cluster as long as the clients are within the scope of Cephadm. Users should not modify the keyrings that are managed by Cephadm, unless there is troubleshooting. 7.1. Ceph user management background When Ceph runs with authentication and authorization enabled, you must specify a user name. If you do not specify a user name, Ceph will use the client.admin administrative user as the default user name. Alternatively, you may use the CEPH_ARGS environment variable to avoid re-entry of the user name and secret. Irrespective of the type of Ceph client, for example, block device, object store, file system, native API, or the Ceph command line, Ceph stores all data as objects within pools. Ceph users must have access to pools in order to read and write data. Additionally, administrative Ceph users must have permissions to execute Ceph's administrative commands. The following concepts can help you understand Ceph user management. Storage Cluster Users A user of the Red Hat Ceph Storage cluster is either an individual or as an application. Creating users allows you to control who can access the storage cluster, its pools, and the data within those pools. Ceph has the notion of a type of user. For the purposes of user management, the type will always be client . Ceph identifies users in period (.) delimited form consisting of the user type and the user ID. For example, TYPE.ID , client.admin , or client.user1 . The reason for user typing is that Ceph Monitors, and OSDs also use the Cephx protocol, but they are not clients. Distinguishing the user type helps to distinguish between client users and other users- streamlining access control, user monitoring and traceability. Sometimes Ceph's user type may seem confusing, because the Ceph command line allows you to specify a user with or without the type, depending upon the command line usage. If you specify --user or --id , you can omit the type. So client.user1 can be entered simply as user1 . If you specify --name or -n , you must specify the type and name, such as client.user1 . Red Hat recommends using the type and name as a best practice wherever possible. Note A Red Hat Ceph Storage cluster user is not the same as a Ceph Object Gateway user. The object gateway uses a Red Hat Ceph Storage cluster user to communicate between the gateway daemon and the storage cluster, but the gateway has its own user management functionality for its end users. Authorization capabilities Ceph uses the term "capabilities" (caps) to describe authorizing an authenticated user to exercise the functionality of the Ceph Monitors and OSDs. Capabilities can also restrict access to data within a pool or a namespace within a pool. A Ceph administrative user sets a user's capabilities when creating or updating a user. Capability syntax follows the form: Syntax Monitor Caps: Monitor capabilities include r , w , x , allow profile CAP , and profile rbd . Example OSD Caps: OSD capabilities include r , w , x , class-read , class-write , profile osd , profile rbd , and profile rbd-read-only . Additionally, OSD capabilities also allow for pool and namespace settings. 
: Syntax Note The Ceph Object Gateway daemon ( radosgw ) is a client of the Ceph storage cluster, so it isn't represented as a Ceph storage cluster daemon type. The following entries describe each capability. allow Precedes access settings for a daemon. r Gives the user read access. Required with monitors to retrieve the CRUSH map. w Gives the user write access to objects. x Gives the user the capability to call class methods (that is, both read and write) and to conduct auth operations on monitors. class-read Gives the user the capability to call class read methods. Subset of x . class-write Gives the user the capability to call class write methods. Subset of x . * Gives the user read, write and execute permissions for a particular daemon or pool, and the ability to execute admin commands. profile osd Gives a user permissions to connect as an OSD to other OSDs or monitors. Conferred on OSDs to enable OSDs to handle replication heartbeat traffic and status reporting. profile bootstrap-osd Gives a user permissions to bootstrap an OSD, so that they have permissions to add keys when bootstrapping an OSD. profile rbd Gives a user read-write access to the Ceph Block Devices. profile rbd-read-only Gives a user read-only access to the Ceph Block Devices. Pool A pool defines a storage strategy for Ceph clients, and acts as a logical partition for that strategy. In Ceph deployments, it is common to create a pool to support different types of use cases. For example, cloud volumes or images, object storage, hot storage, cold storage, and so on. When deploying Ceph as a back end for OpenStack, a typical deployment would have pools for volumes, images, backups and virtual machines, and users such as client.glance , client.cinder , and so on. Namespace Objects within a pool can be associated to a namespace- a logical group of objects within the pool. A user's access to a pool can be associated with a namespace such that reads and writes by the user take place only within the namespace. Objects written to a namespace within the pool can only be accessed by users who have access to the namespace. Note Currently, namespaces are only useful for applications written on top of librados . Ceph clients such as block device and object storage do not currently support this feature. The rationale for namespaces is that pools can be a computationally expensive method of segregating data by use case, because each pool creates a set of placement groups that get mapped to OSDs. If multiple pools use the same CRUSH hierarchy and ruleset, OSD performance may degrade as load increases. For example, a pool should have approximately 100 placement groups per OSD. So an exemplary cluster with 1000 OSDs would have 100,000 placement groups for one pool. Each pool mapped to the same CRUSH hierarchy and ruleset would create another 100,000 placement groups in the exemplary cluster. By contrast, writing an object to a namespace simply associates the namespace to the object name with out the computational overhead of a separate pool. Rather than creating a separate pool for a user or set of users, you may use a namespace. Note Only available using librados at this time. Additional Resources See the Red Hat Ceph Storage Configuration Guide for details on configuring the use of authentication. 7.2. Managing Ceph users As a storage administrator, you can manage Ceph users by creating, modifying, deleting, and importing users. 
A Ceph client user can be either individuals or applications, which use Ceph clients to interact with the Red Hat Ceph Storage cluster daemons. 7.2.1. Listing Ceph users You can list the users in the storage cluster using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To list the users in the storage cluster, execute the following: Example Note The TYPE.ID notation for users applies such that osd.0 is a user of type osd and its ID is 0 , client.admin is a user of type client and its ID is admin , that is, the default client.admin user. Note also that each entry has a key: VALUE entry, and one or more caps: entries. You may use the -o FILE_NAME option with ceph auth list to save the output to a file. 7.2.2. Display Ceph user information You can display a Ceph's user information using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To retrieve a specific user, key and capabilities, execute the following: Syntax Example You can also use the -o FILE_NAME option. Syntax Example The auth export command is identical to auth get , but also prints out the internal auid , which isn't relevant to end users. 7.2.3. Add a new Ceph user Adding a user creates a username, that is, TYPE.ID , a secret key and any capabilities included in the command you use to create the user. A user's key enables the user to authenticate with the Ceph storage cluster. The user's capabilities authorize the user to read, write, or execute on Ceph monitors ( mon ), Ceph OSDs ( osd ) or Ceph Metadata Servers ( mds ). There are a few ways to add a user: ceph auth add : This command is the canonical way to add a user. It will create the user, generate a key and add any specified capabilities. ceph auth get-or-create : This command is often the most convenient way to create a user, because it returns a keyfile format with the user name (in brackets) and the key. If the user already exists, this command simply returns the user name and key in the keyfile format. You may use the -o FILE_NAME option to save the output to a file. ceph auth get-or-create-key : This command is a convenient way to create a user and return the user's key only. This is useful for clients that need the key only, for example, libvirt . If the user already exists, this command simply returns the key. You may use the -o FILE_NAME option to save the output to a file. When creating client users, you may create a user with no capabilities. A user with no capabilities is useless beyond mere authentication, because the client cannot retrieve the cluster map from the monitor. However, you can create a user with no capabilities if you wish to defer adding capabilities later using the ceph auth caps command. A typical user has at least read capabilities on the Ceph monitor and read and write capability on Ceph OSDs. Additionally, a user's OSD permissions are often restricted to accessing a particular pool. : Important If you provide a user with capabilities to OSDs, but you DO NOT restrict access to particular pools, the user will have access to ALL pools in the cluster! 7.2.4. Modifying a Ceph User The ceph auth caps command allows you to specify a user and change the user's capabilities. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To add capabilities, use the form: Syntax Example To remove a capability, you may reset the capability. 
If you want the user to have no access to a particular daemon that was previously set, specify an empty string: Example Additional Resources See Authorization capabilities for additional details on capabilities. 7.2.5. Deleting a Ceph user You can delete a user from the Ceph storage cluster using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To delete a user, use ceph auth del : Syntax Example 7.2.6. Print a Ceph user key You can display a Ceph user's key information using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Print a user's authentication key to standard output: Syntax Example | [
"DAEMON_TYPE 'allow CAPABILITY ' [ DAEMON_TYPE 'allow CAPABILITY ']",
"mon 'allow rwx` mon 'allow profile osd'",
"osd 'allow CAPABILITY ' [pool= POOL_NAME ] [namespace= NAMESPACE_NAME ]",
"ceph auth list installed auth entries: osd.10 key: AQBW7U5gqOsEExAAg/CxSwZ/gSh8iOsDV3iQOA== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * osd.11 key: AQBX7U5gtj/JIhAAPsLBNG+SfC2eMVEFkl3vfA== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * osd.9 key: AQBV7U5g1XDULhAAKo2tw6ZhH1jki5aVui2v7g== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * client.admin key: AQADYEtgFfD3ExAAwH+C1qO7MSLE4TWRfD2g6g== caps: [mds] allow * caps: [mgr] allow * caps: [mon] allow * caps: [osd] allow * client.bootstrap-mds key: AQAHYEtgpbkANBAANqoFlvzEXFwD8oB0w3TF4Q== caps: [mon] allow profile bootstrap-mds client.bootstrap-mgr key: AQAHYEtg3dcANBAAVQf6brq3sxTSrCrPe0pKVQ== caps: [mon] allow profile bootstrap-mgr client.bootstrap-osd key: AQAHYEtgD/QANBAATS9DuP3DbxEl86MTyKEmdw== caps: [mon] allow profile bootstrap-osd client.bootstrap-rbd key: AQAHYEtgjxEBNBAANho25V9tWNNvIKnHknW59A== caps: [mon] allow profile bootstrap-rbd client.bootstrap-rbd-mirror key: AQAHYEtgdE8BNBAAr6rLYxZci0b2hoIgH9GXYw== caps: [mon] allow profile bootstrap-rbd-mirror client.bootstrap-rgw key: AQAHYEtgwGkBNBAAuRzI4WSrnowBhZxr2XtTFg== caps: [mon] allow profile bootstrap-rgw client.crash.host04 key: AQCQYEtgz8lGGhAAy5bJS8VH9fMdxuAZ3CqX5Q== caps: [mgr] profile crash caps: [mon] profile crash client.crash.host02 key: AQDuYUtgqgfdOhAAsyX+Mo35M+HFpURGad7nJA== caps: [mgr] profile crash caps: [mon] profile crash client.crash.host03 key: AQB98E5g5jHZAxAAklWSvmDsh2JaL5G7FvMrrA== caps: [mgr] profile crash caps: [mon] profile crash client.nfs.foo.host03 key: AQCgTk9gm+HvMxAAHbjG+XpdwL6prM/uMcdPdQ== caps: [mon] allow r caps: [osd] allow rw pool=nfs-ganesha namespace=foo client.nfs.foo.host03-rgw key: AQCgTk9g8sJQNhAAPykcoYUuPc7IjubaFx09HQ== caps: [mon] allow r caps: [osd] allow rwx tag rgw *=* client.rgw.test_realm.test_zone.host01.hgbvnq key: AQD5RE9gAQKdCRAAJzxDwD/dJObbInp9J95sXw== caps: [mgr] allow rw caps: [mon] allow * caps: [osd] allow rwx tag rgw *=* client.rgw.test_realm.test_zone.host02.yqqilm key: AQD0RE9gkxA4ExAAFXp3pLJWdIhsyTe2ZR6Ilw== caps: [mgr] allow rw caps: [mon] allow * caps: [osd] allow rwx tag rgw *=* mgr.host01.hdhzwn key: AQAEYEtg3lhIBxAAmHodoIpdvnxK0llWF80ltQ== caps: [mds] allow * caps: [mon] profile mgr caps: [osd] allow * mgr.host02.eobuuv key: AQAn6U5gzUuiABAA2Fed+jPM1xwb4XDYtrQxaQ== caps: [mds] allow * caps: [mon] profile mgr caps: [osd] allow * mgr.host03.wquwpj key: AQAd6U5gIzWsLBAAbOKUKZlUcAVe9kBLfajMKw== caps: [mds] allow * caps: [mon] profile mgr caps: [osd] allow *",
"ceph auth export TYPE . ID",
"ceph auth export mgr.host02.eobuuv",
"ceph auth export TYPE . ID -o FILE_NAME",
"ceph auth export osd.9 -o filename export auth(key=AQBV7U5g1XDULhAAKo2tw6ZhH1jki5aVui2v7g==)",
"ceph auth add client.john mon 'allow r' osd 'allow rw pool=mypool' ceph auth get-or-create client.paul mon 'allow r' osd 'allow rw pool=mypool' ceph auth get-or-create client.george mon 'allow r' osd 'allow rw pool=mypool' -o george.keyring ceph auth get-or-create-key client.ringo mon 'allow r' osd 'allow rw pool=mypool' -o ringo.key",
"ceph auth caps USERTYPE . USERID DAEMON 'allow [r|w|x|*|...] [pool= POOL_NAME ] [namespace= NAMESPACE_NAME ]'",
"ceph auth caps client.john mon 'allow r' osd 'allow rw pool=mypool' ceph auth caps client.paul mon 'allow rw' osd 'allow rwx pool=mypool' ceph auth caps client.brian-manager mon 'allow *' osd 'allow *'",
"ceph auth caps client.ringo mon ' ' osd ' '",
"ceph auth del TYPE . ID",
"ceph auth del osd.6",
"ceph auth print-key TYPE . ID",
"ceph auth print-key osd.6 AQBQ7U5gAry3JRAA3NoPrqBBThpFMcRL6Sr+5w==[ceph: root@host01 /]#"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/administration_guide/ceph-user-management |
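As a worked example of the auth commands listed above, the hedged sketch below creates a client restricted to a single pool, saves its keyring, later narrows its capabilities, and finally removes it. The pool name mypool and the client ID app1 are illustrative assumptions only.

# Create a user that can read the monitors and read/write only the pool "mypool".
ceph auth get-or-create client.app1 mon 'allow r' osd 'allow rw pool=mypool' \
  -o /etc/ceph/ceph.client.app1.keyring
# Inspect the generated key and capabilities.
ceph auth get client.app1
# Later, tighten the user to read-only access on the same pool.
ceph auth caps client.app1 mon 'allow r' osd 'allow r pool=mypool'
# Remove the user and its key when no longer needed.
ceph auth del client.app1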
Chapter 15. Viewing cluster dashboards | Chapter 15. Viewing cluster dashboards The Logging/Elasticsearch Nodes and Openshift Logging dashboards in the OpenShift Container Platform web console show in-depth details about your Elasticsearch instance and the individual Elasticsearch nodes that you can use to prevent and diagnose problems. The OpenShift Logging dashboard contains charts that show details about your Elasticsearch instance at a cluster level, including cluster resources, garbage collection, shards in the cluster, and Fluentd statistics. The Logging/Elasticsearch Nodes dashboard contains charts that show details about your Elasticsearch instance, many at node level, including details on indexing, shards, resources, and so forth. Note For more detailed data, click the Grafana UI link in a dashboard to launch the Grafana dashboard. Grafana is shipped with OpenShift cluster monitoring . 15.1. Accessing the Elasticsearch and OpenShift Logging dashboards You can view the Logging/Elasticsearch Nodes and OpenShift Logging dashboards in the OpenShift Container Platform web console. Procedure To launch the dashboards: In the OpenShift Container Platform web console, click Observe Dashboards . On the Dashboards page, select Logging/Elasticsearch Nodes or OpenShift Logging from the Dashboard menu. For the Logging/Elasticsearch Nodes dashboard, you can select the Elasticsearch node you want to view and set the data resolution. The appropriate dashboard is displayed, showing multiple charts of data. Optional: Select a different time range to display or refresh rate for the data from the Time Range and Refresh Interval menus. Note For more detailed data, click the Grafana UI link to launch the Grafana dashboard. For information on the dashboard charts, see About the OpenShift Logging dashboard and About the Logging/Elastisearch Nodes dashboard . 15.2. About the OpenShift Logging dashboard The OpenShift Logging dashboard contains charts that show details about your Elasticsearch instance at a cluster-level that you can use to diagnose and anticipate problems. Table 15.1. OpenShift Logging charts Metric Description Elastic Cluster Status The current Elasticsearch status: ONLINE - Indicates that the Elasticsearch instance is online. OFFLINE - Indicates that the Elasticsearch instance is offline. Elastic Nodes The total number of Elasticsearch nodes in the Elasticsearch instance. Elastic Shards The total number of Elasticsearch shards in the Elasticsearch instance. Elastic Documents The total number of Elasticsearch documents in the Elasticsearch instance. Total Index Size on Disk The total disk space that is being used for the Elasticsearch indices. Elastic Pending Tasks The total number of Elasticsearch changes that have not been completed, such as index creation, index mapping, shard allocation, or shard failure. Elastic JVM GC time The amount of time that the JVM spent executing Elasticsearch garbage collection operations in the cluster. Elastic JVM GC Rate The total number of times that JVM executed garbage activities per second. Elastic Query/Fetch Latency Sum Query latency: The average time each Elasticsearch search query takes to execute. Fetch latency: The average time each Elasticsearch search query spends fetching data. Fetch latency typically takes less time than query latency. If fetch latency is consistently increasing, it might indicate slow disks, data enrichment, or large requests with too many results. 
Elastic Query Rate The total queries executed against the Elasticsearch instance per second for each Elasticsearch node. CPU The amount of CPU used by Elasticsearch, Fluentd, and Kibana, shown for each component. Elastic JVM Heap Used The amount of JVM memory used. In a healthy cluster, the graph shows regular drops as memory is freed by JVM garbage collection. Elasticsearch Disk Usage The total disk space used by the Elasticsearch instance for each Elasticsearch node. File Descriptors In Use The total number of file descriptors used by Elasticsearch, Fluentd, and Kibana. FluentD emit count The total number of Fluentd messages per second for the Fluentd default output, and the retry count for the default output. FluentD Buffer Availability The percent of the Fluentd buffer that is available for chunks. A full buffer might indicate that Fluentd is not able to process the number of logs received. Elastic rx bytes The total number of bytes that Elasticsearch has received from FluentD, the Elasticsearch nodes, and other sources. Elastic Index Failure Rate The total number of times per second that an Elasticsearch index fails. A high rate might indicate an issue with indexing. FluentD Output Error Rate The total number of times per second that FluentD is not able to output logs. 15.3. Charts on the Logging/Elasticsearch nodes dashboard The Logging/Elasticsearch Nodes dashboard contains charts that show details about your Elasticsearch instance, many at node-level, for further diagnostics. Elasticsearch status The Logging/Elasticsearch Nodes dashboard contains the following charts about the status of your Elasticsearch instance. Table 15.2. Elasticsearch status fields Metric Description Cluster status The cluster health status during the selected time period, using the Elasticsearch green, yellow, and red statuses: 0 - Indicates that the Elasticsearch instance is in green status, which means that all shards are allocated. 1 - Indicates that the Elasticsearch instance is in yellow status, which means that replica shards for at least one shard are not allocated. 2 - Indicates that the Elasticsearch instance is in red status, which means that at least one primary shard and its replicas are not allocated. Cluster nodes The total number of Elasticsearch nodes in the cluster. Cluster data nodes The number of Elasticsearch data nodes in the cluster. Cluster pending tasks The number of cluster state changes that are not finished and are waiting in a cluster queue, for example, index creation, index deletion, or shard allocation. A growing trend indicates that the cluster is not able to keep up with changes. Elasticsearch cluster index shard status Each Elasticsearch index is a logical group of one or more shards, which are basic units of persisted data. There are two types of index shards: primary shards, and replica shards. When a document is indexed into an index, it is stored in one of its primary shards and copied into every replica of that shard. The number of primary shards is specified when the index is created, and the number cannot change during index lifetime. You can change the number of replica shards at any time. The index shard can be in several states depending on its lifecycle phase or events occurring in the cluster. When the shard is able to perform search and indexing requests, the shard is active. If the shard cannot perform these requests, the shard is non-active. A shard might be non-active if the shard is initializing, reallocating, unassigned, and so forth. 
Index shards consist of a number of smaller internal blocks, called index segments, which are physical representations of the data. An index segment is a relatively small, immutable Lucene index that is created when Lucene commits newly-indexed data. Lucene, a search library used by Elasticsearch, merges index segments into larger segments in the background to keep the total number of segments low. If the process of merging segments is slower than the speed at which new segments are created, it could indicate a problem. When Lucene performs data operations, such as a search operation, Lucene performs the operation against the index segments in the relevant index. For that purpose, each segment contains specific data structures that are loaded in the memory and mapped. Index mapping can have a significant impact on the memory used by segment data structures. The Logging/Elasticsearch Nodes dashboard contains the following charts about the Elasticsearch index shards. Table 15.3. Elasticsearch cluster shard status charts Metric Description Cluster active shards The number of active primary shards and the total number of shards, including replicas, in the cluster. If the number of shards grows higher, the cluster performance can start degrading. Cluster initializing shards The number of non-active shards in the cluster. A non-active shard is one that is initializing, being reallocated to a different node, or is unassigned. A cluster typically has non-active shards for short periods. A growing number of non-active shards over longer periods could indicate a problem. Cluster relocating shards The number of shards that Elasticsearch is relocating to a new node. Elasticsearch relocates nodes for multiple reasons, such as high memory use on a node or after a new node is added to the cluster. Cluster unassigned shards The number of unassigned shards. Elasticsearch shards might be unassigned for reasons such as a new index being added or the failure of a node. Elasticsearch node metrics Each Elasticsearch node has a finite amount of resources that can be used to process tasks. When all the resources are being used and Elasticsearch attempts to perform a new task, Elasticsearch put the tasks into a queue until some resources become available. The Logging/Elasticsearch Nodes dashboard contains the following charts about resource usage for a selected node and the number of tasks waiting in the Elasticsearch queue. Table 15.4. Elasticsearch node metric charts Metric Description ThreadPool tasks The number of waiting tasks in individual queues, shown by task type. A long-term accumulation of tasks in any queue could indicate node resource shortages or some other problem. CPU usage The amount of CPU being used by the selected Elasticsearch node as a percentage of the total CPU allocated to the host container. Memory usage The amount of memory being used by the selected Elasticsearch node. Disk usage The total disk space being used for index data and metadata on the selected Elasticsearch node. Documents indexing rate The rate that documents are indexed on the selected Elasticsearch node. Indexing latency The time taken to index the documents on the selected Elasticsearch node. Indexing latency can be affected by many factors, such as JVM Heap memory and overall load. A growing latency indicates a resource capacity shortage in the instance. Search rate The number of search requests run on the selected Elasticsearch node. 
Search latency The time taken to complete search requests on the selected Elasticsearch node. Search latency can be affected by many factors. A growing latency indicates a resource capacity shortage in the instance. Documents count (with replicas) The number of Elasticsearch documents stored on the selected Elasticsearch node, including documents stored in both the primary shards and replica shards that are allocated on the node. Documents deleting rate The number of Elasticsearch documents being deleted from any of the index shards that are allocated to the selected Elasticsearch node. Documents merging rate The number of Elasticsearch documents being merged in any of index shards that are allocated to the selected Elasticsearch node. Elasticsearch node fielddata Fielddata is an Elasticsearch data structure that holds lists of terms in an index and is kept in the JVM Heap. Because fielddata building is an expensive operation, Elasticsearch caches the fielddata structures. Elasticsearch can evict a fielddata cache when the underlying index segment is deleted or merged, or if there is not enough JVM HEAP memory for all the fielddata caches. The Logging/Elasticsearch Nodes dashboard contains the following charts about Elasticsearch fielddata. Table 15.5. Elasticsearch node fielddata charts Metric Description Fielddata memory size The amount of JVM Heap used for the fielddata cache on the selected Elasticsearch node. Fielddata evictions The number of fielddata structures that were deleted from the selected Elasticsearch node. Elasticsearch node query cache If the data stored in the index does not change, search query results are cached in a node-level query cache for reuse by Elasticsearch. The Logging/Elasticsearch Nodes dashboard contains the following charts about the Elasticsearch node query cache. Table 15.6. Elasticsearch node query charts Metric Description Query cache size The total amount of memory used for the query cache for all the shards allocated to the selected Elasticsearch node. Query cache evictions The number of query cache evictions on the selected Elasticsearch node. Query cache hits The number of query cache hits on the selected Elasticsearch node. Query cache misses The number of query cache misses on the selected Elasticsearch node. Elasticsearch index throttling When indexing documents, Elasticsearch stores the documents in index segments, which are physical representations of the data. At the same time, Elasticsearch periodically merges smaller segments into a larger segment as a way to optimize resource use. If the indexing is faster then the ability to merge segments, the merge process does not complete quickly enough, which can lead to issues with searches and performance. To prevent this situation, Elasticsearch throttles indexing, typically by reducing the number of threads allocated to indexing down to a single thread. The Logging/Elasticsearch Nodes dashboard contains the following charts about Elasticsearch index throttling. Table 15.7. Index throttling charts Metric Description Indexing throttling The amount of time that Elasticsearch has been throttling the indexing operations on the selected Elasticsearch node. Merging throttling The amount of time that Elasticsearch has been throttling the segment merge operations on the selected Elasticsearch node. Node JVM Heap statistics The Logging/Elasticsearch Nodes dashboard contains the following charts about JVM Heap operations. Table 15.8. 
JVM Heap statistic charts Metric Description Heap used The amount of the total allocated JVM Heap space that is used on the selected Elasticsearch node. GC count The number of garbage collection operations that have been run on the selected Elasticsearch node, by old and young garbage collection. GC time The amount of time that the JVM spent running garbage collection operations on the selected Elasticsearch node, by old and young garbage collection. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/cluster-logging-dashboards |
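When a dashboard panel points at a problem, the same figures can be pulled ad hoc from an Elasticsearch pod. The sketch below is hedged: it assumes the default openshift-logging namespace, the component=elasticsearch pod label, and the es_util helper shipped in the cluster-logging Elasticsearch image; verify those details for your release before relying on it.

# Sketch: query cluster health and JVM statistics directly from an Elasticsearch pod.
ES_POD=$(oc get pods -n openshift-logging -l component=elasticsearch -o jsonpath='{.items[0].metadata.name}')
oc exec -n openshift-logging -c elasticsearch "$ES_POD" -- es_util --query=_cluster/health?pretty
oc exec -n openshift-logging -c elasticsearch "$ES_POD" -- es_util --query=_nodes/stats/jvm?pretty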
Hosted control planes | Hosted control planes OpenShift Container Platform 4.15 Using hosted control planes with OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"apiVersion: v1 data: supported-versions: '{\"versions\":[\"4.15\"]}' kind: ConfigMap metadata: labels: hypershift.openshift.io/supported-versions: \"true\" name: supported-versions namespace: hypershift",
"oc edit <hosted_cluster_name> -n <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: oauth: identityProviders: - openID: 3 claims: email: 4 - <email_address> name: 5 - <display_name> preferredUsername: 6 - <preferred_username> clientID: <client_id> 7 clientSecret: name: <client_id_secret_name> 8 issuer: https://example.com/identity 9 mappingMethod: lookup 10 name: IAM type: OpenID",
"spec: configuration: oauth: identityProviders: - openID: 1 claims: email: 2 - <email_address> name: 3 - <display_name> preferredUsername: 4 - <preferred_username> clientID: <client_id> 5 clientSecret: name: <client_id_secret_name> 6 issuer: https://example.com/identity 7 mappingMethod: lookup 8 name: IAM type: OpenID",
"oc get cloudcredentials <hosted_cluster_name> -n <hosted_cluster_namespace> -o=jsonpath={.spec.credentialsMode}",
"Manual",
"oc get authentication cluster --kubeconfig <hosted_cluster_name>.kubeconfig -o jsonpath --template '{.spec.serviceAccountIssuer }'",
"https://aos-hypershift-ci-oidc-29999.s3.us-east-2.amazonaws.com/hypershift-ci-29999",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"",
"// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"",
"import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }",
"// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }",
"// apply credentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }",
"sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }",
"apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data: mode: 420 overwrite: true path: USD{PATH} 1",
"oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> 1 namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: <kubeletconfig_name> 2 spec: kubeletConfig: registerWithTaints: - key: \"example.sh/unregistered\" value: \"true\" effect: \"NoExecute\"",
"oc edit nodepool <nodepool_name> --namespace clusters 1",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio=\"55\" name: tuned-1-profile recommend: - priority: 20 profile: tuned-1-profile",
"oc --kubeconfig=\"USDMGMT_KUBECONFIG\" create -f tuned-1.yaml",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: tuningConfig: - name: tuned-1 status:",
"oc --kubeconfig=\"USDHC_KUBECONFIG\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator",
"NAME AGE default 7m36s rendered 7m36s tuned-1 65s",
"oc --kubeconfig=\"USDHC_KUBECONFIG\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator",
"NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s",
"oc --kubeconfig=\"USDHC_KUBECONFIG\" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio",
"vm.dirty_ratio = 55",
"apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subsription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: \"\" source: s/qe-app-registry/redhat-operators sourceNamespace: openshift-marketplace",
"oc get csv -n openshift-sriov-network-operator",
"NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.15.0-202211021237 SR-IOV Network Operator 4.15.0-202211021237 sriov-network-operator.4.15.0-202210290517 Succeeded",
"oc get pods -n openshift-sriov-network-operator",
"oc edit <hosted_cluster_name> -n <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: featureGate: featureSet: TechPreviewNoUpgrade 3",
"oc get featuregate cluster -o yaml",
"spec: autoscaling: {} channel: stable-4.y 1 clusterID: d6d42268-7dff-4d37-92cf-691bd2d42f41 configuration: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: dev11.red-chesterfield.com privateZoneID: Z0180092I0DQRKL55LN0 publicZoneID: Z00206462VG6ZP0H2QLWK",
"oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml",
"version: availableUpdates: - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:b7517d13514c6308ae16c5fd8108133754eb922cd37403ed27c846c129e67a9a url: https://access.redhat.com/errata/RHBA-2024:6401 version: 4.16.11 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:d08e7c8374142c239a07d7b27d1170eae2b0d9f00ccf074c3f13228a1761c162 url: https://access.redhat.com/errata/RHSA-2024:6004 version: 4.16.10 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:6a80ac72a60635a313ae511f0959cc267a21a89c7654f1c15ee16657aafa41a0 url: https://access.redhat.com/errata/RHBA-2024:5757 version: 4.16.9 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:ea624ae7d91d3f15094e9e15037244679678bdc89e5a29834b2ddb7e1d9b57e6 url: https://access.redhat.com/errata/RHSA-2024:5422 version: 4.16.8 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:e4102eb226130117a0775a83769fe8edb029f0a17b6cbca98a682e3f1225d6b7 url: https://access.redhat.com/errata/RHSA-2024:4965 version: 4.16.6 - channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:f828eda3eaac179e9463ec7b1ed6baeba2cd5bd3f1dd56655796c86260db819b url: https://access.redhat.com/errata/RHBA-2024:4855 version: 4.16.5 conditionalUpdates: - conditions: - lastTransitionTime: \"2024-09-23T22:33:38Z\" message: |- Could not evaluate exposure to update risk SRIOVFailedToConfigureVF (creating PromQL round-tripper: unable to load specified CA cert /etc/tls/service-ca/service-ca.crt: open /etc/tls/service-ca/service-ca.crt: no such file or directory) SRIOVFailedToConfigureVF description: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. SRIOVFailedToConfigureVF URL: https://issues.redhat.com/browse/NHE-1171 reason: EvaluationFailed status: Unknown type: Recommended release: channels: - candidate-4.16 - candidate-4.17 - eus-4.16 - fast-4.16 - stable-4.16 image: quay.io/openshift-release-dev/ocp-release@sha256:fb321a3f50596b43704dbbed2e51fdefd7a7fd488ee99655d03784d0cd02283f url: https://access.redhat.com/errata/RHSA-2024:5107 version: 4.16.7 risks: - matchingRules: - promql: promql: | group(csv_succeeded{_id=\"d6d42268-7dff-4d37-92cf-691bd2d42f41\", name=~\"sriov-network-operator[.].*\"}) or 0 * group(csv_count{_id=\"d6d42268-7dff-4d37-92cf-691bd2d42f41\"}) type: PromQL message: OCP Versions 4.14.34, 4.15.25, 4.16.7 and ALL subsequent versions include kernel datastructure changes which are not compatible with older versions of the SR-IOV operator. Please update SR-IOV operator to versions dated 20240826 or newer before updating OCP. name: SRIOVFailedToConfigureVF url: https://issues.redhat.com/browse/NHE-1171",
"apiVersion: v1 data: server-version: 2f6cfe21a0861dea3130f3bed0d3ae5553b8c28b supported-versions: '{\"versions\":[\"4.17\",\"4.16\",\"4.15\",\"4.14\"]}' kind: ConfigMap metadata: creationTimestamp: \"2024-06-20T07:12:31Z\" labels: hypershift.openshift.io/supported-versions: \"true\" name: supported-versions namespace: hypershift resourceVersion: \"927029\" uid: f6336f91-33d3-472d-b747-94abae725f70",
"Client Version: openshift/hypershift: fe67b47fb60e483fe60e4755a02b3be393256343. Latest supported OCP: 4.17.0 Server Version: 05864f61f24a8517731664f8091cedcfc5f9b60d Server Supports OCP Versions: 4.17, 4.16, 4.15, 4.14",
"oc patch nodepool <node_pool_name> -n <hosted_cluster_namespace> --type=merge -p '{\"spec\":{\"nodeDrainTimeout\":\"60s\",\"release\":{\"image\":\"<openshift_release_image>\"}}}' 1 2",
"oc get -n <hosted_cluster_namespace> nodepool <node_pool_name> -o yaml",
"status: conditions: - lastTransitionTime: \"2024-05-20T15:00:40Z\" message: 'Using release image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64' 1 reason: AsExpected status: \"True\" type: ValidReleaseImage",
"oc annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \"hypershift.openshift.io/force-upgrade-to=<openshift_release_image>\" --overwrite 1 2",
"oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> --type=merge -p '{\"spec\":{\"release\":{\"image\":\"<openshift_release_image>\"}}}'",
"oc get -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> -o yaml",
"status: conditions: - lastTransitionTime: \"2024-05-20T15:01:01Z\" message: Payload loaded version=\"4.y.z\" image=\"quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64\" 1 status: \"True\" type: ClusterVersionReleaseAccepted # version: availableUpdates: null desired: image: quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 2 version: 4.y.z",
"oc set env -n hypershift deployment/operator METRICS_SET=All",
"kubeAPIServer: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_controller_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_step_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"scheduler_(e2e_scheduling_latency_microseconds|scheduling_algorithm_predicate_evaluation|scheduling_algorithm_priority_evaluation|scheduling_algorithm_preemption_evaluation|scheduling_algorithm_latency_microseconds|binding_latency_microseconds|scheduling_latency_seconds)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_(request_count|request_latencies|request_latencies_summary|dropped_requests|storage_data_key_generation_latencies_microseconds|storage_transformation_failures_total|storage_transformation_latencies_microseconds|proxy_tunnel_sync_latency_secs)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"docker_(operations|operations_latency_microseconds|operations_errors|operations_timeout)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"reflector_(items_per_list|items_per_watch|list_duration_seconds|lists_total|short_watches_total|watch_duration_seconds|watches_total)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"etcd_(helper_cache_hit_count|helper_cache_miss_count|helper_cache_entry_count|request_cache_get_latencies_summary|request_cache_add_latencies_summary|request_latencies_summary)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"transformation_(transformation_latencies_microseconds|failures_total)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"network_plugin_operations_latency_microseconds|sync_proxy_rules_latency_microseconds|rest_client_request_latency_seconds\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)\" sourceLabels: [\"__name__\", \"le\"] kubeControllerManager: - action: \"drop\" regex: \"etcd_(debugging|disk|request|server).*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"rest_client_request_latency_seconds_(bucket|count|sum)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"root_ca_cert_publisher_sync_duration_seconds_(bucket|count|sum)\" sourceLabels: [\"__name__\"] openshiftAPIServer: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_controller_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_step_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)\" sourceLabels: [\"__name__\", \"le\"] openshiftControllerManager: - action: \"drop\" regex: \"etcd_(debugging|disk|request|server).*\" sourceLabels: [\"__name__\"] openshiftRouteControllerManager: - action: \"drop\" regex: \"etcd_(debugging|disk|request|server).*\" sourceLabels: [\"__name__\"] olm: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] catalogOperator: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] cvo: - action: drop regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"]",
"kind: ConfigMap apiVersion: v1 metadata: name: hypershift-operator-install-flags namespace: local-cluster data: installFlagsToAdd: \"--monitoring-dashboards\" installFlagsToRemove: \"\"",
"- name: MONITORING_DASHBOARDS value: \"1\"",
"oc rsh -n openshift-etcd -c etcd <etcd_pod_name>",
"sh-4.4# etcdctl endpoint status -w table",
"+------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +------------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://192.168.1xxx.20:2379 | 8fxxxxxxxxxx | 3.5.12 | 123 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.21:2379 | a5xxxxxxxxxx | 3.5.12 | 122 MB | false | false | 10 | 180156 | 180156 | | | https://192.168.1xxx.22:2379 | 7cxxxxxxxxxx | 3.5.12 | 124 MB | true | false | 10 | 180156 | 180156 | | +-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc get pods -l app=etcd -n openshift-etcd",
"NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 64m etcd-1 2/2 Running 0 45m etcd-2 1/2 CrashLoopBackOff 1 (5s ago) 64m",
"oc delete pods etcd-2 -n openshift-etcd",
"oc get pods -l app=etcd -n openshift-etcd",
"NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 67m etcd-1 2/2 Running 0 48m etcd-2 2/2 Running 0 2m2s",
"CLUSTER_NAME=my-cluster",
"HOSTED_CLUSTER_NAMESPACE=clusters",
"CONTROL_PLANE_NAMESPACE=\"USD{HOSTED_CLUSTER_NAMESPACE}-USD{CLUSTER_NAME}\"",
"oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/kube-apiserver --replicas=0",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-apiserver --replicas=0",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-oauth-apiserver --replicas=0",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd",
"ETCD_POD=etcd-0",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=https://localhost:2379 snapshot save /var/lib/snapshot.db",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/snapshot.db",
"oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/snapshot.db /tmp/etcd.snapshot.db",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd",
"oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/data/member/snap/db /tmp/etcd.snapshot.db",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0",
"oc delete -n USD{CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2",
"ETCD_IMAGE=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd -o jsonpath='{ .spec.template.spec.containers[0].image }')",
"cat << EOF | oc apply -n USD{CONTROL_PLANE_NAMESPACE} -f - apiVersion: apps/v1 kind: Deployment metadata: name: etcd-data spec: replicas: 1 selector: matchLabels: app: etcd-data template: metadata: labels: app: etcd-data spec: containers: - name: access image: USDETCD_IMAGE volumeMounts: - name: data mountPath: /var/lib command: - /usr/bin/bash args: - -c - |- while true; do sleep 1000 done volumes: - name: data persistentVolumeClaim: claimName: data-etcd-0 EOF",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data",
"DATA_POD=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} pods --no-headers -l app=etcd-data -o name | cut -d/ -f2)",
"oc cp /tmp/etcd.snapshot.db USD{CONTROL_PLANE_NAMESPACE}/USD{DATA_POD}:/var/lib/restored.snap.db",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm -rf /var/lib/data",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- mkdir -p /var/lib/data",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- etcdutl snapshot restore /var/lib/restored.snap.db --data-dir=/var/lib/data --skip-hash-check --name etcd-0 --initial-cluster-token=etcd-cluster --initial-cluster etcd-0=https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 --initial-advertise-peer-urls https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm /var/lib/restored.snap.db",
"oc delete -n USD{CONTROL_PLANE_NAMESPACE} deployment/etcd-data",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd -w",
"oc scale deployment -n USD{CONTROL_PLANE_NAMESPACE} --replicas=3 kube-apiserver openshift-apiserver openshift-oauth-apiserver",
"oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"\"}}' --type=merge",
"oc patch -n clusters hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc scale deployment -n <hosted_cluster_namespace> --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver",
"oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db",
"oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db",
"BUCKET_NAME=somebucket FILEPATH=\"/USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db\" CONTENT_TYPE=\"application/x-compressed-tar\" DATE_VALUE=`date -R` SIGNATURE_STRING=\"PUT\\n\\nUSD{CONTENT_TYPE}\\nUSD{DATE_VALUE}\\nUSD{FILEPATH}\" ACCESS_KEY=accesskey SECRET_KEY=secret SIGNATURE_HASH=`echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac USD{SECRET_KEY} -binary | base64` exec -it etcd-0 -n USD{HOSTED_CLUSTER_NAMESPACE} -- curl -X PUT -T \"/var/lib/data/snapshot.db\" -H \"Host: USD{BUCKET_NAME}.s3.amazonaws.com\" -H \"Date: USD{DATE_VALUE}\" -H \"Content-Type: USD{CONTENT_TYPE}\" -H \"Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}\" https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{CLUSTER_NAME}-snapshot.db",
"oc get hostedcluster <hosted_cluster_name> -o=jsonpath='{.spec.secretEncryption.aescbc}' {\"activeKey\":{\"name\":\"<hosted_cluster_name>-etcd-encryption-key\"}}",
"oc get secret <hosted_cluster_name>-etcd-encryption-key -o=jsonpath='{.data.key}'",
"ETCD_SNAPSHOT=USD{ETCD_SNAPSHOT:-\"s3://USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db\"} ETCD_SNAPSHOT_URL=USD(aws s3 presign USD{ETCD_SNAPSHOT})",
"spec: etcd: managed: storage: persistentVolume: size: 4Gi type: PersistentVolume restoreSnapshotURL: - \"USD{ETCD_SNAPSHOT_URL}\" managementType: Managed",
"--external-dns-provider=aws --external-dns-credentials=<path_to_aws_credentials_file> --external-dns-domain-filter=<basedomain>",
"oc create configmap mgmt-parent-cluster -n default --from-literal=from=USD{MGMT_CLUSTER_NAME}",
"PAUSED_UNTIL=\"true\" oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator",
"PAUSED_UNTIL=\"true\" oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc patch -n USD{HC_CLUSTER_NS} nodepools/USD{NODEPOOLS} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator",
"ETCD Backup ETCD_PODS=\"etcd-0\" if [ \"USD{CONTROL_PLANE_AVAILABILITY_POLICY}\" = \"HighlyAvailable\" ]; then ETCD_PODS=\"etcd-0 etcd-1 etcd-2\" fi for POD in USD{ETCD_PODS}; do # Create an etcd snapshot oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db FILEPATH=\"/USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db\" CONTENT_TYPE=\"application/x-compressed-tar\" DATE_VALUE=`date -R` SIGNATURE_STRING=\"PUT\\n\\nUSD{CONTENT_TYPE}\\nUSD{DATE_VALUE}\\nUSD{FILEPATH}\" set +x ACCESS_KEY=USD(grep aws_access_key_id USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed \"s/ //g\") SECRET_KEY=USD(grep aws_secret_access_key USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed \"s/ //g\") SIGNATURE_HASH=USD(echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac \"USD{SECRET_KEY}\" -binary | base64) set -x # FIXME: this is pushing to the OIDC bucket oc exec -it etcd-0 -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- curl -X PUT -T \"/var/lib/data/snapshot.db\" -H \"Host: USD{BUCKET_NAME}.s3.amazonaws.com\" -H \"Date: USD{DATE_VALUE}\" -H \"Content-Type: USD{CONTENT_TYPE}\" -H \"Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}\" https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db done",
"mkdir -p USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS} USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} chmod 700 USD{BACKUP_DIR}/namespaces/ HostedCluster echo \"Backing Up HostedCluster Objects:\" oc get hc USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml echo \"--> HostedCluster\" sed -i '' -e '/^status:USD/,USDd' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml NodePool oc get np USD{NODEPOOLS} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml echo \"--> NodePool\" sed -i '' -e '/^status:USD/,USD d' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml Secrets in the HC Namespace echo \"--> HostedCluster Secrets:\" for s in USD(oc get secret -n USD{HC_CLUSTER_NS} | grep \"^USD{HC_CLUSTER_NAME}\" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-USD{s}.yaml done Secrets in the HC Control Plane Namespace echo \"--> HostedCluster ControlPlane Secrets:\" for s in USD(oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} | egrep -v \"docker|service-account-token|oauth-openshift|NAME|token-USD{HC_CLUSTER_NAME}\" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-USD{s}.yaml done Hosted Control Plane echo \"--> HostedControlPlane:\" oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-USD{HC_CLUSTER_NAME}.yaml Cluster echo \"--> Cluster:\" CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\\*} | grep USD{HC_CLUSTER_NAME}) oc get cluster USD{CL_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-USD{HC_CLUSTER_NAME}.yaml AWS Cluster echo \"--> AWS Cluster:\" oc get awscluster USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-USD{HC_CLUSTER_NAME}.yaml AWS MachineTemplate echo \"--> AWS Machine Template:\" oc get awsmachinetemplate USD{NODEPOOLS} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-USD{HC_CLUSTER_NAME}.yaml AWS Machines echo \"--> AWS Machine:\" CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\\*} | grep USD{HC_CLUSTER_NAME}) for s in USD(oc get awsmachines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --no-headers | grep USD{CL_NAME} | cut -f1 -d\\ ); do oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} awsmachines USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-USD{s}.yaml done MachineDeployments echo \"--> HostedCluster MachineDeployments:\" for s in USD(oc get machinedeployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do mdp_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-USD{mdp_name}.yaml done MachineSets echo \"--> HostedCluster MachineSets:\" for s in USD(oc get machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o 
name); do ms_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-USD{ms_name}.yaml done Machines echo \"--> HostedCluster Machine:\" for s in USD(oc get machine -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do m_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-USD{m_name}.yaml done",
"oc delete routes -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all",
"function clean_routes() { if [[ -z \"USD{1}\" ]];then echo \"Give me the NS where to clean the routes\" exit 1 fi # Constants if [[ -z \"USD{2}\" ]];then echo \"Give me the Route53 zone ID\" exit 1 fi ZONE_ID=USD{2} ROUTES=10 timeout=40 count=0 # This allows us to remove the ownership in the AWS for the API route oc delete route -n USD{1} --all while [ USD{ROUTES} -gt 2 ] do echo \"Waiting for ExternalDNS Operator to clean the DNS Records in AWS Route53 where the zone id is: USD{ZONE_ID}...\" echo \"Try: (USD{count}/USD{timeout})\" sleep 10 if [[ USDcount -eq timeout ]];then echo \"Timeout waiting for cleaning the Route53 DNS records\" exit 1 fi count=USD((count+1)) ROUTES=USD(aws route53 list-resource-record-sets --hosted-zone-id USD{ZONE_ID} --max-items 10000 --output json | grep -c USD{EXTERNAL_DNS_DOMAIN}) done } SAMPLE: clean_routes \"<HC ControlPlane Namespace>\" \"<AWS_ZONE_ID>\" clean_routes \"USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}\" \"USD{AWS_ZONE_ID}\"",
"Just in case export KUBECONFIG=USD{MGMT2_KUBECONFIG} BACKUP_DIR=USD{HC_CLUSTER_DIR}/backup Namespace deletion in the destination Management cluster oc delete ns USD{HC_CLUSTER_NS} || true oc delete ns USD{HC_CLUSTER_NS}-{HC_CLUSTER_NAME} || true",
"Namespace creation oc new-project USD{HC_CLUSTER_NS} oc new-project USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}",
"oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-*",
"Secrets oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-* Cluster oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-*",
"AWS oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-* Machines oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-*",
"ETCD_PODS=\"etcd-0\" if [ \"USD{CONTROL_PLANE_AVAILABILITY_POLICY}\" = \"HighlyAvailable\" ]; then ETCD_PODS=\"etcd-0 etcd-1 etcd-2\" fi HC_RESTORE_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-restore.yaml HC_BACKUP_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml HC_NEW_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-new.yaml cat USD{HC_BACKUP_FILE} > USD{HC_NEW_FILE} cat > USD{HC_RESTORE_FILE} <<EOF restoreSnapshotURL: EOF for POD in USD{ETCD_PODS}; do # Create a pre-signed URL for the etcd snapshot ETCD_SNAPSHOT=\"s3://USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db\" ETCD_SNAPSHOT_URL=USD(AWS_DEFAULT_REGION=USD{MGMT2_REGION} aws s3 presign USD{ETCD_SNAPSHOT}) # FIXME no CLI support for restoreSnapshotURL yet cat >> USD{HC_RESTORE_FILE} <<EOF - \"USD{ETCD_SNAPSHOT_URL}\" EOF done cat USD{HC_RESTORE_FILE} if ! grep USD{HC_CLUSTER_NAME}-snapshot.db USD{HC_NEW_FILE}; then sed -i '' -e \"/type: PersistentVolume/r USD{HC_RESTORE_FILE}\" USD{HC_NEW_FILE} sed -i '' -e '/pausedUntil:/d' USD{HC_NEW_FILE} fi HC=USD(oc get hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} -o name || true) if [[ USD{HC} == \"\" ]];then echo \"Deploying HC Cluster: USD{HC_CLUSTER_NAME} in USD{HC_CLUSTER_NS} namespace\" oc apply -f USD{HC_NEW_FILE} else echo \"HC Cluster USD{HC_CLUSTER_NAME} already exists, avoiding step\" fi",
"oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-*",
"timeout=40 count=0 NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c \"worker\") || NODE_STATUS=0 while [ USD{NODE_POOL_REPLICAS} != USD{NODE_STATUS} ] do echo \"Waiting for Nodes to be Ready in the destination MGMT Cluster: USD{MGMT2_CLUSTER_NAME}\" echo \"Try: (USD{count}/USD{timeout})\" sleep 30 if [[ USDcount -eq timeout ]];then echo \"Timeout waiting for Nodes in the destination MGMT Cluster\" exit 1 fi count=USD((count+1)) NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c \"worker\") || NODE_STATUS=0 done",
"Just in case export KUBECONFIG=USD{MGMT_KUBECONFIG} Scale down deployments oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all oc scale statefulset.apps -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all sleep 15",
"NODEPOOLS=USD(oc get nodepools -n USD{HC_CLUSTER_NS} -o=jsonpath='{.items[?(@.spec.clusterName==\"'USD{HC_CLUSTER_NAME}'\")].metadata.name}') if [[ ! -z \"USD{NODEPOOLS}\" ]];then oc patch -n \"USD{HC_CLUSTER_NS}\" nodepool USD{NODEPOOLS} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete np -n USD{HC_CLUSTER_NS} USD{NODEPOOLS} fi",
"Machines for m in USD(oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done oc delete machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all || true",
"Cluster C_NAME=USD(oc get cluster -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{C_NAME} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete cluster.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all",
"AWS Machines for m in USD(oc get awsmachine.infrastructure.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done",
"Delete HCP and ControlPlane HC NS oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} hostedcontrolplane.hypershift.openshift.io USD{HC_CLUSTER_NAME} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete hostedcontrolplane.hypershift.openshift.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all oc delete ns USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} || true",
"Delete HC and HC Namespace oc -n USD{HC_CLUSTER_NS} patch hostedclusters USD{HC_CLUSTER_NAME} -p '{\"metadata\":{\"finalizers\":null}}' --type merge || true oc delete hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} || true oc delete ns USD{HC_CLUSTER_NS} || true",
"Validations export KUBECONFIG=USD{MGMT2_KUBECONFIG} oc get hc -n USD{HC_CLUSTER_NS} oc get np -n USD{HC_CLUSTER_NS} oc get pod -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} Inside the HostedCluster export KUBECONFIG=USD{HC_KUBECONFIG} oc get clusterversion oc get nodes",
"oc delete pod -n openshift-ovn-kubernetes --all",
"oc adm must-gather --image=registry.redhat.io/multicluster-engine/must-gather-rhel9:v<mce_version> /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=NAME ; tar -cvzf NAME.tgz NAME",
"oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"<timestamp>\"}}' --type=merge 1",
"oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":null}}' --type=merge",
"export KUBECONFIG=<install_directory>/auth/kubeconfig",
"oc get nodepool --namespace <HOSTED_CLUSTER_NAMESPACE>",
"oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: arch: amd64 clusterName: clustername 1 management: autoRepair: false replace: rollingUpdate: maxSurge: 1 maxUnavailable: 0 strategy: RollingUpdate upgradeType: Replace nodeDrainTimeout: 0s 2",
"oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=0",
"oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=1",
"oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -ojsonpath='{.spec.nodeDrainTimeout}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/hosted_control_planes/index |
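The snapshot that the commands above save and copy out with oc cp can be sanity-checked on the workstation before it is uploaded or fed into the restore step. A minimal sketch, assuming the etcd client utilities (etcdutl, or etcdctl on older releases) are installed locally and that the snapshot was copied to /tmp/etcd.snapshot.db as shown:

# Report integrity metadata (hash, revision, total keys, size) for the copied snapshot
etcdutl snapshot status /tmp/etcd.snapshot.db -w table

# Equivalent check with etcdctl, which still ships the deprecated subcommand in 3.5
ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd.snapshot.db -w table

A non-zero exit status indicates an incomplete or corrupted copy that should not be used with the etcdutl snapshot restore command shown earlier.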
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_openshift_container_platform/proc_providing-feedback-on-red-hat-documentation_getting-started-with-dotnet-on-openshift
Chapter 1. Introduction to scaling storage | Chapter 1. Introduction to scaling storage Red Hat OpenShift Data Foundation is a highly scalable storage system. OpenShift Data Foundation allows you to scale by adding disks in multiples of three, or by adding any number of disks, depending on the deployment type. For internal (dynamic provisioning) deployment mode, you can increase the capacity by adding 3 disks at a time. For internal-attached (Local Storage Operator based) mode, you can deploy with fewer than 3 failure domains. With flexible scale deployment enabled, you can scale up by adding any number of disks. For deployment with 3 failure domains, you can scale up by adding disks in multiples of 3. To scale your storage in external mode, see the Red Hat Ceph Storage documentation. Note You can use a maximum of nine storage devices per node. A high number of storage devices leads to a longer recovery time when a node is lost. This recommendation ensures that nodes stay below the cloud provider dynamic storage device attachment limits, and limits the recovery time after node failure with local storage devices. While scaling, ensure that there are enough CPU and memory resources to meet the scaling requirement. Supported storage classes by default gp2-csi on AWS thin on VMware managed_premium on Microsoft Azure 1.1. Supported Deployments for Red Hat OpenShift Data Foundation User-provisioned infrastructure: Amazon Web Services (AWS) VMware Bare metal IBM Power IBM Z or IBM(R) LinuxONE Installer-provisioned infrastructure: Amazon Web Services (AWS) Microsoft Azure VMware Bare metal | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/scaling_storage/scaling-overview_rhodf
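Capacity scaling for internal-mode deployments is usually performed through the web console, but the same change can be inspected or applied on the StorageCluster resource directly. A minimal sketch, assuming the default resource name ocs-storagecluster and the openshift-storage namespace (both are assumptions, not taken from this chapter):

# Show the current device-set count; each count unit adds devices in multiples of the replica value (3 by default)
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.storageDeviceSets[0].count}{"\n"}'

# Increase the count by one set to add capacity; the value 2 is only illustrative, use the current count plus one
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json -p '[{"op": "replace", "path": "/spec/storageDeviceSets/0/count", "value": 2}]'

Before applying a change like this, confirm that the nodes have enough CPU and memory headroom, as noted above.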
Appendix D. Create an Amazon S3 bucket | Appendix D. Create an Amazon S3 bucket Open a terminal and ensure that the AWS CLI is installed and configured with your AWS credentials. Run the following command to create a new S3 bucket: Warning The bucket name must be globally unique. Run the following command to check that the bucket has been successfully created: | [
"aws s3 mb s3://<bucket-name> --region <region-name>",
"aws s3 ls | grep <bucket-name>"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_ansible_automation_platform_2_on_red_hat_openshift/bucket_creation |
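Beyond listing the bucket, its accessibility and region can be confirmed with standard AWS CLI subcommands before the bucket is referenced elsewhere. A minimal sketch; <bucket-name> is the same placeholder used in the appendix:

# Exits with status 0 only if the bucket exists and the configured credentials can reach it
aws s3api head-bucket --bucket <bucket-name>

# Print the region the bucket was created in (an empty/null LocationConstraint means us-east-1)
aws s3api get-bucket-location --bucket <bucket-name>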
Part V. Deprecated Functionality | Part V. Deprecated Functionality This part provides an overview of functionality that has been deprecated in all minor releases up to Red Hat Enterprise Linux 7.4. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 7. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of the release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar or identical to, or more advanced than, the deprecated one, and provides further recommendations. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/part-red_hat_enterprise_linux-7.4_release_notes-deprecated_functionality
Chapter 3. ClusterResourceQuota [quota.openshift.io/v1] | Chapter 3. ClusterResourceQuota [quota.openshift.io/v1] Description ClusterResourceQuota mirrors ResourceQuota at a cluster scope. This object is easily convertible to synthetic ResourceQuota object to allow quota evaluation re-use. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec defines the desired quota status object Status defines the actual enforced quota and its current usage 3.1.1. .spec Description Spec defines the desired quota Type object Required quota selector Property Type Description quota object Quota defines the desired quota selector object Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. 3.1.2. .spec.quota Description Quota defines the desired quota Type object Property Type Description hard integer-or-string hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector object scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 3.1.3. .spec.quota.scopeSelector Description scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. Type object Property Type Description matchExpressions array A list of scope selector requirements by scope of the resources. matchExpressions[] object A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 3.1.4. .spec.quota.scopeSelector.matchExpressions Description A list of scope selector requirements by scope of the resources. Type array 3.1.5. .spec.quota.scopeSelector.matchExpressions[] Description A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 
Type object Required operator scopeName Property Type Description operator string Represents a scope's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. scopeName string The name of the scope that the selector applies to. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.6. .spec.selector Description Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. Type object Property Type Description annotations undefined (string) AnnotationSelector is used to select projects by annotation. labels `` LabelSelector is used to select projects by label. 3.1.7. .status Description Status defines the actual enforced quota and its current usage Type object Required total Property Type Description namespaces `` Namespaces slices the usage by project. This division allows for quick resolution of deletion reconciliation inside of a single project without requiring a recalculation across all projects. This can be used to pull the deltas for a given project. total object Total defines the actual enforced quota and its current usage across all projects 3.1.8. .status.total Description Total defines the actual enforced quota and its current usage across all projects Type object Property Type Description hard integer-or-string Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used integer-or-string Used is the current observed total usage of the resource in the namespace. 3.2. API endpoints The following API endpoints are available: /apis/quota.openshift.io/v1/clusterresourcequotas DELETE : delete collection of ClusterResourceQuota GET : list objects of kind ClusterResourceQuota POST : create a ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas GET : watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} DELETE : delete a ClusterResourceQuota GET : read the specified ClusterResourceQuota PATCH : partially update the specified ClusterResourceQuota PUT : replace the specified ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} GET : watch changes to an object of kind ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status GET : read status of the specified ClusterResourceQuota PATCH : partially update status of the specified ClusterResourceQuota PUT : replace status of the specified ClusterResourceQuota 3.2.1. /apis/quota.openshift.io/v1/clusterresourcequotas Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterResourceQuota Table 3.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterResourceQuota Table 3.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". 
This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. 
- resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.5. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuotaList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterResourceQuota Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.8. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 202 - Accepted ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.2. /apis/quota.openshift.io/v1/watch/clusterresourcequotas Table 3.9. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. Table 3.10. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} Table 3.11. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota Table 3.12. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterResourceQuota Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.14. Body parameters Parameter Type Description body DeleteOptions schema Table 3.15. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterResourceQuota Table 3.16. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.17. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterResourceQuota Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterResourceQuota Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.23. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.4. /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ClusterResourceQuota. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status Table 3.27. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota Table 3.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ClusterResourceQuota Table 3.29. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.30. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterResourceQuota Table 3.31. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.32. Body parameters Parameter Type Description body Patch schema Table 3.33. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterResourceQuota Table 3.34. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.35. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.36. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/schedule_and_quota_apis/clusterresourcequota-quota-openshift-io-v1 |
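The endpoints described in the tables above can also be exercised directly once you are authenticated to the cluster. The sketch below is illustrative only: the quota name example-quota and the API server address are placeholders, and it assumes your token is allowed to read and patch cluster resource quotas.

# Read the specified ClusterResourceQuota through the raw API path
oc get --raw /apis/quota.openshift.io/v1/clusterresourcequotas/example-quota

# Partially update it with a JSON merge patch, the same operation as the PATCH method above
curl -k -X PATCH \
  -H "Authorization: Bearer $(oc whoami -t)" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"metadata":{"annotations":{"owner":"platform-team"}}}' \
  https://api.example.com:6443/apis/quota.openshift.io/v1/clusterresourcequotas/example-quota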
C.2. Encrypting Block Devices Using dm-crypt/LUKS | C.2. Encrypting Block Devices Using dm-crypt/LUKS Linux Unified Key Setup (LUKS) is a specification for block device encryption. It establishes an on-disk format for the data, as well as a passphrase/key management policy. LUKS uses the kernel device mapper subsystem via the dm-crypt module. This arrangement provides a low-level mapping that handles encryption and decryption of the device's data. User-level operations, such as creating and accessing encrypted devices, are accomplished through the use of the cryptsetup utility. C.2.1. Overview of LUKS What LUKS does: LUKS encrypts entire block devices LUKS is thereby well-suited for protecting the contents of mobile devices such as: Removable storage media Laptop disk drives The underlying contents of the encrypted block device are arbitrary. This makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage. LUKS uses the existing device mapper kernel subsystem. This is the same subsystem used by LVM, so it is well tested. LUKS provides passphrase strengthening. This protects against dictionary attacks. LUKS devices contain multiple key slots. This allows users to add backup keys/passphrases. What LUKS does not do: LUKS is not well-suited for applications requiring many (more than eight) users to have distinct access keys to the same device. LUKS is not well-suited for applications requiring file-level encryption. More detailed information about LUKS is available from the project website at http://code.google.com/p/cryptsetup/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/apcs02
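As a rough illustration of the user-level workflow handled by cryptsetup, the commands below format, open, and mount a LUKS volume and add a backup passphrase. The device /dev/sdX, the mapping name secret_data, and the mount point are placeholders, and luksFormat destroys any data already on the device.

# Format the block device as a LUKS volume (prompts for a passphrase)
cryptsetup luksFormat /dev/sdX

# Open the volume; the decrypted mapping appears at /dev/mapper/secret_data
cryptsetup luksOpen /dev/sdX secret_data

# Create a filesystem on the mapping and mount it like any other block device
mkfs.ext4 /dev/mapper/secret_data
mount /dev/mapper/secret_data /mnt/secret

# Add a backup passphrase to a free key slot
cryptsetup luksAddKey /dev/sdX

# Unmount and close the mapping when finished
umount /mnt/secret
cryptsetup luksClose secret_data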
Chapter 4. Understanding persistent storage | Chapter 4. Understanding persistent storage Managing storage is a distinct problem from managing compute resources. MicroShift uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure. 4.1. Control permissions with security context constraints You can use security context constraints (SCCs) to control permissions for the pods in your cluster. These permissions determine the actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. For more information see Managing security context constraints . Important Only RWO volume mounts are supported. SCC could be blocked if pods are not operating with the SCC contexts. 4.2. Persistent storage overview PVCs are specific to a namespace, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single namespace; they can be shared across the entire Red Hat build of MicroShift cluster and claimed from any namespace. After a PV is bound to a PVC, that PV can not then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace. PVs are defined by a PersistentVolume API object, which represents a piece of existing storage in the cluster that was either statically provisioned by the cluster administrator or dynamically provisioned using a StorageClass object. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes but have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that LVM, the host filesystem such as hostpath, or raw block devices. Important High availability of storage in the infrastructure is left to the underlying storage provider. Like PersistentVolumes , PersistentVolumeClaims (PVCs) are API objects, which represents a request for storage by a developer. It is similar to a pod in that pods consume node resources and PVCs consume PV resources. For example, pods can request specific levels of resources, such as CPU and memory, while PVCs can request specific storage capacity and access modes. Access modes supported by OpenShift Container Platform are also definable in Red Hat build of MicroShift. However, because Red Hat build of MicroShift does not support multi-node deployments, only ReadWriteOnce (RWO) is pertinent. 4.3. Additional resources Access modes for persistent storage 4.4. Lifecycle of a volume and claim PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs have the following lifecycle. 4.4.1. Provision storage In response to requests from a developer defined in a PVC, a cluster administrator configures one or more dynamic provisioners that provision storage and a matching PV. 4.4.2. Bind claims When you create a PVC, you request a specific amount of storage, specify the required access mode, and create a storage class to describe and classify the storage. The control loop in the master watches for new PVCs and binds the new PVC to an appropriate PV. 
If an appropriate PV does not exist, a provisioner for the storage class creates one. The size of all PVs might exceed your PVC size. This is especially true with manually provisioned PVs. To minimize the excess, Red Hat build of MicroShift binds to the smallest PV that matches all other criteria. Claims remain unbound indefinitely if a matching volume does not exist or can not be created with any available provisioner servicing a storage class. Claims are bound as matching volumes become available. For example, a cluster with many manually provisioned 50Gi volumes would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster. 4.4.3. Use pods and claimed PVs Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, you must specify which mode applies when you use the claim as a volume in a pod. Once you have a claim and that claim is bound, the bound PV belongs to you for as long as you need it. You can schedule pods and access claimed PVs by including persistentVolumeClaim in the pod's volumes block. Note If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 4.4.4. Release a persistent volume When you are finished with a volume, you can delete the PVC object from the API, which allows reclamation of the resource. The volume is considered released when the claim is deleted, but it is not yet available for another claim. The claimant's data remains on the volume and must be handled according to policy. 4.4.5. Reclaim policy for persistent volumes The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Retain reclaim policy allows manual reclamation of the resource for those volume plugins that support it. Recycle reclaim policy recycles the volume back into the pool of unbound persistent volumes once it is released from its claim. Important The Recycle reclaim policy is deprecated in Red Hat build of MicroShift 4. Dynamic provisioning is recommended for equivalent and better functionality. Delete reclaim policy deletes both the PersistentVolume object from Red Hat build of MicroShift and the associated storage asset in external infrastructure, such as Amazon Elastic Block Store (Amazon EBS) or VMware vSphere. Note Dynamically provisioned volumes are always deleted. 4.4.6. Reclaiming a persistent volume manually When a persistent volume claim (PVC) is deleted, the underlying logical volume is handled according to the reclaimPolicy . Procedure To manually reclaim the PV as a cluster administrator: Delete the PV. USD oc delete pv <pv-name> The associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists after the PV is deleted. Clean up the data on the associated storage asset. Delete the associated storage asset. Alternately, to reuse the same storage asset, create a new PV with the storage asset definition. The reclaimed PV is now available for use by another PVC. 4.4.7. 
Changing the reclaim policy of a persistent volume To change the reclaim policy of a persistent volume: List the persistent volumes in your cluster: USD oc get pv Example output NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s Choose one of your persistent volumes and change its reclaim policy: USD oc patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' Verify that your chosen persistent volume has the right policy: USD oc get pv Example output NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s In the preceding output, the volume bound to claim default/claim3 now has a Retain reclaim policy. The volume will not be automatically deleted when a user deletes claim default/claim3 . 4.5. Persistent volumes Each PV contains a spec and status , which is the specification and status of the volume, for example: PersistentVolume object definition example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 ... status: ... 1 Name of the persistent volume. 2 The amount of storage available to the volume. 3 The access mode, defining the read-write and mount permissions. 4 The reclaim policy, indicating how the resource should be handled once it is released. You can view the name of a PVC that is bound to a PV by running the following command: USD oc get pv <pv-name> -o jsonpath='{.spec.claimRef.name}' 4.5.1. Capacity Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the capacity attribute of the PV. Currently, storage capacity is the only resource that can be set or requested. Future attributes may include IOPS, throughput, and so on. 4.5.2. Supported access modes LVMS is the only CSI plugin Red Hat build of MicroShift supports. The hostPath and LVs built in to OpenShift Container Platform also support RWO. 4.5.3. Phase Volumes can be found in one of the following phases: Table 4.1. Volume phases Phase Description Available A free resource not yet bound to a claim. Bound The volume is bound to a claim. Released The claim was deleted, but the resource is not yet reclaimed by the cluster. Failed The volume has failed its automatic reclamation. 4.5.3.1. Last phase transition time The LastPhaseTransitionTime field has a timestamp that updates every time a persistent volume (PV) transitions to a different phase ( pv.Status.Phase ). To find the time of the last phase transition for a PV, run the following command: USD oc get pv <pv-name> -o json | jq '.status.lastPhaseTransitionTime' 1 1 Specify the name of the PV that you want to see the last phase transition. 4.5.3.2. Mount options You can specify mount options while mounting a PV by using the attribute mountOptions . 
For example: Mount options example apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" name: topolvm-provisioner mountOptions: - uid=1500 - gid=1500 parameters: csi.storage.k8s.io/fstype: xfs provisioner: topolvm.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true Note mountOptions are not validated. Incorrect values will cause the mount to fail and an event to be logged to the PVC. 4.6. Persistent volumes with RWO access mode permissions Persistent volume claims (PVCs) can be created with different access modes. A PVC with the ReadWriteOnce (RWO) access mode set allows multiple pods on the same node to read or write into the same PV at once. There are instances when the pods of the same node are not able to read or write into the same PV. This happens when the pods in the node do not have the same SELinux context. Persistent volumes can be mounted, then later claimed by PVCs, with the RWO access mode. 4.7. Checking the pods for mismatch Check if the pods have a mismatch by using the following procedure. Important Replace <pod_name_a> with the name of the first pod in the following procedure. Replace <pod_name_b> with the name of the second pod in the following procedure. Replace <pvc_mountpoint> with the mount point within the pods. Procedure List the mount point within the first pod by running the following command: USD oc get pods -n <pod_name_a> -ojsonpath='{.spec.containers[ ].volumeMounts[ ].mountPath}' 1 1 Replace <pod_name_a> with the name of the first pod. Example output /files /var/run/secrets/kubernetes.io/serviceaccount List the mount point within the second pod by running the following command: USD oc get pods -n <pod_name_b> -ojsonpath='{.spec.containers[ ].volumeMounts[ ].mountPath}' 1 1 Replace <pod_name_b> with the name of the second pod. Example output /files /var/run/secrets/kubernetes.io/serviceaccount Check the context and permissions inside the first pod by running the following command: USD oc rsh <pod_name_a> ls -lZah <pvc_mountpoint> 1 1 Replace <pod_name_a> with the name of the first pod and replace <pvc_mountpoint> with the mount point within the first pod. Example output total 12K dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c398,c806 40 Feb 17 13:36 . dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c398,c806 40 Feb 17 13:36 .. [...] Check the context and permissions inside the second pod by running the following command: USD oc rsh <pod_name_b> ls -lZah <pvc_mountpoint> 1 1 Replace <pod_name_b> with the name of the second pod and replace <pvc_mountpoint> with the mount point within the second pod. Example output total 12K dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c15,c25 40 Feb 17 13:34 . dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c15,c25 40 Feb 17 13:34 .. [...] Compare both the outputs to check if there is a mismatch of SELinux context. 4.8. Updating the pods which have mismatch Update the SELinux context of the pods if a mismatch is found by using the following procedure. Procedure When there is a mismatch of the SELinux content, create a new security context constraint (SCC) and assign it to both pods. To create a SCC, see Creating security context constraints . Update the SELinux context as shown in the following example: Example output [...] securityContext:privileged seLinuxOptions:MustRunAs level: "s0:cXX,cYY" [...] 4.9. 
Verifying pods after resolving a mismatch Verify the security context constraint (SCC) and the SELinux label of both the pods by using the following verification steps. Verification Verify that the same SCC is assigned to the first pod by running the following command: USD oc describe pod <pod_name_a> |grep -i scc 1 1 Replace <pod_name_a> with the name of the first pod. Example output openshift.io/scc: restricted Verify that the same SCC is assigned to first second pod by running the following command: USD oc describe pod <pod_name_b> |grep -i scc 1 1 Replace <pod_name_b> with the name of the second pod. Example output openshift.io/scc: restricted Verify that the same SELinux label is applied to first pod by running the following command: USD oc exec <pod_name_a> -- ls -laZ <pvc_mountpoint> 1 1 Replace <pod_name_a> with the name of the first pod and replace <pvc_mountpoint> with the mount point within the first pod. Example output total 4 drwxrwsrwx. 2 root 1000670000 system_u:object_r:container_file_t:s0:c10,c26 19 Aug 29 18:17 . dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c10,c26 61 Aug 29 18:16 .. -rw-rw-rw-. 1 1000670000 1000670000 system_u:object_r:container_file_t:s0:c10,c26 29 Aug 29 18:17 test1 [...] Verify that the same SELinux label is applied to second pod by running the following command: USD oc exec <pod_name_b> -- ls -laZ <pvc_mountpoint> 1 1 Replace <pod_name_b> with the name of the second pod and replace <pvc_mountpoint> with the mount point within the second pod. Example output total 4 drwxrwsrwx. 2 root 1000670000 system_u:object_r:container_file_t:s0:c10,c26 19 Aug 29 18:17 . dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c10,c26 61 Aug 29 18:16 .. -rw-rw-rw-. 1 1000670000 1000670000 system_u:object_r:container_file_t:s0:c10,c26 29 Aug 29 18:17 test1 [...] Additional resources Common mount options 4.10. Persistent volume claims Each PersistentVolumeClaim object contains a spec and status , which is the specification and status of the persistent volume claim (PVC), for example: PersistentVolumeClaim object definition example kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status: ... 1 Name of the PVC. 2 The access mode, defining the read-write and mount permissions. 3 The amount of storage available to the PVC. 4 Name of the StorageClass required by the claim. 4.10.1. Storage classes Claims can optionally request a specific storage class by specifying the storage class's name in the storageClassName attribute. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC. The cluster administrator can configure dynamic provisioners to service one or more storage classes. The cluster administrator can create a PV on demand that matches the specifications in the PVC. The cluster administrator can also set a default storage class for all PVCs. When a default storage class is configured, the PVC must explicitly ask for StorageClass or storageClassName annotations set to "" to be bound to a PV without a storage class. Note If more than one storage class is marked as default, a PVC can only be created if the storageClassName is explicitly specified. Therefore, only one storage class should be set as the default. 4.10.2. Access modes Claims use the same conventions as volumes when requesting storage with specific access modes. 4.10.3. 
Resources Claims, such as pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to volumes and claims. 4.10.4. Claims as volumes Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod's namespace and uses it to get the PersistentVolume backing the claim. The volume is mounted to the host and into the pod, for example: Mount volume to the host and into the pod example kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: "/var/www/html" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3 1 Path to mount the volume inside the pod. 2 Name of the volume to mount. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 Name of the PVC, that exists in the same namespace, to use. 4.11. Using fsGroup to reduce pod timeouts If a storage volume contains many files (~1,000,000 or greater), you may experience pod timeouts. This can occur because, by default, Red Hat build of MicroShift recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a pod's securityContext when that volume is mounted. For large volumes, checking and changing ownership and permissions can be time consuming, slowing pod startup. You can use the fsGroupChangePolicy field inside a securityContext to control the way that Red Hat build of MicroShift checks and manages ownership and permissions for a volume. fsGroupChangePolicy defines behavior for changing ownership and permission of the volume before being exposed inside a pod. This field only applies to volume types that support fsGroup -controlled ownership and permissions. This field has two possible values: OnRootMismatch : Only change permissions and ownership if permission and ownership of root directory does not match with expected permissions of the volume. This can help shorten the time it takes to change ownership and permission of a volume to reduce pod timeouts. Always : Always change permission and ownership of the volume when a volume is mounted. fsGroupChangePolicy example securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: "OnRootMismatch" 1 ... 1 OnRootMismatch specifies skipping recursive permission change, thus helping to avoid pod timeout problems. Note The fsGroupChangePolicyfield has no effect on ephemeral volume types, such as secret, configMap, and emptydir. | [
"oc delete pv <pv-name>",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s",
"oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:",
"oc get pv <pv-name> -o jsonpath='{.spec.claimRef.name}'",
"oc get pv <pv-name> -o json | jq '.status.lastPhaseTransitionTime' 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\" name: topolvm-provisioner mountOptions: - uid=1500 - gid=1500 parameters: csi.storage.k8s.io/fstype: xfs provisioner: topolvm.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true",
"oc get pods -n <pod_name_a> -ojsonpath='{.spec.containers[ ].volumeMounts[ ].mountPath}' 1",
"/files /var/run/secrets/kubernetes.io/serviceaccount",
"oc get pods -n <pod_name_b> -ojsonpath='{.spec.containers[ ].volumeMounts[ ].mountPath}' 1",
"/files /var/run/secrets/kubernetes.io/serviceaccount",
"oc rsh <pod_name_a> ls -lZah <pvc_mountpoint> 1",
"total 12K dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c398,c806 40 Feb 17 13:36 . dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c398,c806 40 Feb 17 13:36 .. [...]",
"oc rsh <pod_name_b> ls -lZah <pvc_mountpoint> 1",
"total 12K dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c15,c25 40 Feb 17 13:34 . dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c15,c25 40 Feb 17 13:34 .. [...]",
"[...] securityContext:privileged seLinuxOptions:MustRunAs level: \"s0:cXX,cYY\" [...]",
"oc describe pod <pod_name_a> |grep -i scc 1",
"openshift.io/scc: restricted",
"oc describe pod <pod_name_b> |grep -i scc 1",
"openshift.io/scc: restricted",
"oc exec <pod_name_a> -- ls -laZ <pvc_mountpoint> 1",
"total 4 drwxrwsrwx. 2 root 1000670000 system_u:object_r:container_file_t:s0:c10,c26 19 Aug 29 18:17 . dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c10,c26 61 Aug 29 18:16 .. -rw-rw-rw-. 1 1000670000 1000670000 system_u:object_r:container_file_t:s0:c10,c26 29 Aug 29 18:17 test1 [...]",
"oc exec <pod_name_b> -- ls -laZ <pvc_mountpoint> 1",
"total 4 drwxrwsrwx. 2 root 1000670000 system_u:object_r:container_file_t:s0:c10,c26 19 Aug 29 18:17 . dr-xr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c10,c26 61 Aug 29 18:16 .. -rw-rw-rw-. 1 1000670000 1000670000 system_u:object_r:container_file_t:s0:c10,c26 29 Aug 29 18:17 test1 [...]",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3",
"securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/storage/understanding-persistent-storage-microshift |
Chapter 14. Failover Deployments | Chapter 14. Failover Deployments Abstract Red Hat Fuse provides failover capability using either a simple lock file system or a JDBC locking mechanism. In both cases, a container-level lock system allows bundles to be preloaded into a secondary kernel instance in order to provide faster failover performance. 14.1. Using a Simple Lock File System Overview When you first start Red Hat Fuse a lock file is created at the root of the installation directory. You can set up a primary/secondary system whereby if the primary instance fails, the lock is passed to a secondary instance that resides on the same host machine. Configuring a lock file system To configure a lock file failover deployment, edit the etc/system.properties file on both the primary and the secondary installation to include the properties in Example 14.1, "Lock File Failover Configuration" . Example 14.1. Lock File Failover Configuration karaf.lock -specifies whether the lock file is written. karaf.lock.class -specifies the Java class implementing the lock. For a simple file lock it should always be org.apache.karaf.main.SimpleFileLock . karaf.lock.dir -specifies the directory into which the lock file is written. This must be the same for both the primary and the secondary installation. karaf.lock.delay -specifies, in milliseconds, the delay between attempts to reaquire the lock. 14.2. Using a JDBC Lock System Overview The JDBC locking mechanism is intended for failover deployments where Red Hat Fuse instances exist on separate machines. In this scenario, the primary instance holds a lock on a locking table hosted on a database. If the primary instance loses the lock, a waiting secondary process gains access to the locking table and fully starts its container. Adding the JDBC driver to the classpath In a JDBC locking system, the JDBC driver needs to be on the classpath for each instance in the primary/secondary setup. Add the JDBC driver to the classpath as follows: Copy the JDBC driver JAR file to the ESBInstallDir /lib/ext directory for each Red Hat Fuse instance. Modify the bin/karaf start script so that it includes the JDBC driver JAR in its CLASSPATH variable. For example, given the JDBC JAR file, JDBCJarFile .jar , you could modify the start script as follows (on a *NIX operating system): Note If you are adding a MySQL driver JAR or a PostgreSQL driver JAR, you must rename the driver JAR by prefixing it with the karaf- prefix. Otherwise, Apache Karaf will hang and the log will tell you that Apache Karaf was unable to find the driver. Configuring a JDBC lock system To configure a JDBC lock system, update the etc/system.properties file for each instance in the primary/secondary deployment as shown Example 14.2. JDBC Lock File Configuration In the example, a database named sample will be created if it does not already exist. The first Red Hat Fuse instance to acquire the locking table is the primary instance. If the connection to the database is lost, the primary instance tries to gracefully shutdown, allowing a secondary instance to become the primary instance when the database service is restored. The former primary instance will require manual restart. Configuring JDBC locking on Oracle If you are using Oracle as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file must point to org.apache.karaf.main.lock.OracleJDBCLock . Otherwise, configure the system.properties file as normal for your setup, as shown: Example 14.3. 
JDBC Lock File Configuration for Oracle Note The karaf.lock.jdbc.url requires an active Oracle system ID (SID). This means you must manually create a database instance before using this particular lock. Configuring JDBC locking on Derby If you are using Derby as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file should point to org.apache.karaf.main.lock.DerbyJDBCLock . For example, you could configure the system.properties file as shown: Example 14.4. JDBC Lock File Configuration for Derby Configuring JDBC locking on MySQL If you are using MySQL as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file must point to org.apache.karaf.main.lock.MySQLJDBCLock . For example, you could configure the system.properties file as shown: Example 14.5. JDBC Lock File Configuration for MySQL Configuring JDBC locking on PostgreSQL If you are using PostgreSQL as your database in a JDBC locking scenario, the karaf.lock.class property in the etc/system.properties file must point to org.apache.karaf.main.lock.PostgreSQLJDBCLock . For example, you could configure the system.properties file as shown: Example 14.6. JDBC Lock File Configuration for PostgreSQL JDBC lock classes The following JDBC lock classes are currently provided by Apache Karaf: 14.3. Container-level Locking Overview Container-level locking allows bundles to be preloaded into the secondary kernel instance in order to provide faster failover performance. Container-level locking is supported in both the simple file and JDBC locking mechanisms. Configuring container-level locking To implement container-level locking, add the following to the etc/system.properties file on each system in the primary/secondary setup: Example 14.7. Container-level Locking Configuration The karaf.lock.level property tells the Red Hat Fuse instance how far up the boot process to bring the OSGi container. Bundles assigned the same start level or lower will then also be started in that Fuse instance. Bundle start levels are specified in etc/startup.properties , in the format BundleName .jar=level . The core system bundles have levels below 50, where as user bundles have levels greater than 50. Table 14.1. Bundle Start Levels Start Level Behavior 1 A 'cold' standby instance. Core bundles are not loaded into container. Secondary instances will wait until lock acquired to start server. <50 A 'hot' standby instance. Core bundles are loaded into the container. Secondary instances will wait until lock acquired to start user level bundles. The console will be accessible for each secondary instance at this level. >50 This setting is not recommended as user bundles will be started. Avoiding port conflicts When using a 'hot' spare on the same host you need to set the JMX remote port to a unique value to avoid bind conflicts. You can edit the fuse start script (or the karaf script on a child instance) to include the following: | [
"karaf.lock=true karaf.lock.class=org.apache.karaf.main.SimpleFileLock karaf.lock.dir= PathToLockFileDirectory karaf.lock.delay=10000",
"# Add the jars in the lib dir for file in \"USDKARAF_HOME\"/lib/karaf*.jar do if [ -z \"USDCLASSPATH\" ]; then CLASSPATH=\"USDfile\" else CLASSPATH=\"USDCLASSPATH:USDfile\" fi done CLASSPATH=\"USDCLASSPATH:USDKARAF_HOME/lib/JDBCJarFile.jar \"",
"karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.DefaultJDBCLock karaf.lock.level=50 karaf.lock.delay=10000 karaf.lock.jdbc.url=jdbc:derby://dbserver:1527/sample karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=30",
"karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.OracleJDBCLock karaf.lock.jdbc.url=jdbc:oracle:thin:@hostname:1521:XE karaf.lock.jdbc.driver=oracle.jdbc.OracleDriver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=30",
"karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.DerbyJDBCLock karaf.lock.jdbc.url=jdbc:derby://127.0.0.1:1527/dbname karaf.lock.jdbc.driver=org.apache.derby.jdbc.ClientDriver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=30",
"karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.MySQLJDBCLock karaf.lock.jdbc.url=jdbc:mysql://127.0.0.1:3306/dbname karaf.lock.jdbc.driver=com.mysql.jdbc.Driver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=30",
"karaf.lock=true karaf.lock.class=org.apache.karaf.main.lock.PostgreSQLJDBCLock karaf.lock.jdbc.url=jdbc:postgresql://127.0.0.1:5432/dbname karaf.lock.jdbc.driver=org.postgresql.Driver karaf.lock.jdbc.user=user karaf.lock.jdbc.password=password karaf.lock.jdbc.table=KARAF_LOCK karaf.lock.jdbc.clustername=karaf karaf.lock.jdbc.timeout=0",
"org.apache.karaf.main.lock.DefaultJDBCLock org.apache.karaf.main.lock.DerbyJDBCLock org.apache.karaf.main.lock.MySQLJDBCLock org.apache.karaf.main.lock.OracleJDBCLock org.apache.karaf.main.lock.PostgreSQLJDBCLock",
"karaf.lock=true karaf.lock.level=50 karaf.lock.delay=10000",
"DEFAULT_JAVA_OPTS=\"-server USDDEFAULT_JAVA_OPTS -Dcom.sun.management.jmxremote.port=1100 -Dcom.sun.management.jmxremote.authenticate=false\""
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/ESBRuntimeFailover |
8.19. coolkey | 8.19. coolkey 8.19.1. RHBA-2013:1699 - coolkey bug fix and enhancement update Updated coolkey packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. Coolkey is a smart card support library for the CoolKey, Common Access Card (CAC), and Personal Identity Verification (PIV) smart cards. Bug Fixes BZ# 806038 In versions, coolkey always created a bogus e-gate smart card reader to avoid problems with Network Security Services (NSS) and the PC/SC Lite framework when no smart card reader was available. However, e-gate smart cards are no longer available for smart card authentication, and the NSS and pcsc-lite packages have been updated to handle a situation with no e-gate reader attached. Therefore, this bogus reader in coolkey became unnecessary and could cause problems to some applications under certain circumstances. This update modifies the respective code so that coolkey no longer creates a bogus e-gate smart card. BZ# 906537 With a version of coolkey, some signature operations, such as PKINIT, could fail on PIV endpoint cards that support both CAC and PIV interfaces. The underlying coolkey code has been modified so these PIV endpoint cards now works with coolkey as expected. BZ# 991515 The coolkey library registered only with the NSS DBM database, however, NSS now uses also the SQLite database format, which is preferred. This update modifies coolkey to register properly with both NSS databases. Enhancement BZ# 951272 Support for tokens containing Elliptic Curve Cryptography (ECC) certificates has been added to the coolkey packages so the coolkey library now works with ECC provisioned cards. Users of coolkey are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/coolkey |
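To check the registration behavior described in BZ# 991515, you can list the PKCS #11 modules known to each NSS database format. The database directory, module name, and library path below are typical defaults and may differ on a given system.

# Modules registered in the SQLite-format NSS database
modutil -dbdir sql:/etc/pki/nssdb -list

# Modules registered in the legacy DBM-format database
modutil -dbdir /etc/pki/nssdb -list

# Example of adding the CoolKey module manually if it is missing
modutil -dbdir sql:/etc/pki/nssdb -add "CoolKey PKCS #11 Module" -libfile /usr/lib64/pkcs11/libcoolkeypk11.so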
Chapter 5. Creating and configuring a Jira connection | Chapter 5. Creating and configuring a Jira connection You can track application migrations by creating a Jira issue for each migration from within the MTA user interface. To be able to create Jira issues, you first need to do the following: Create an MTA credential to authenticate to the API of the Jira instance that you create in the step. Create a Jira instance in MTA and establish a connection to that instance. 5.1. Configuring Jira credentials To define a Jira instance in MTA and establish a connection to that instance, you must first create an MTA credential to authenticate to the Jira instance's API. Two types of credentials are available: Basic auth - for Jira Cloud and a private Jira server or data center Bearer Token - for a private Jira server or data center To create an MTA credential, follow the procedure below. Procedure In Administration view, click Credentials . The Credentials page opens. Click Create new . Enter the following information: Name Description (optional) In the Type list, select Basic Auth (Jira) or Bearer Token (Jira) : If you selected Basic Auth (Jira) , proceed as follows: In the Email field, enter your email. In the Token field, depending on the specific Jira configuration, enter either your token generated on the Jira site or your Jira login password. Note To obtain a Jira token, you need to log in to the Jira site. Click Save . The new credential appears on the Credentials page. If you selected Bearer Token (Jira) , proceed as follows: In the Token field, enter your token generated on the Jira site. Click Save . The new credential appears on the Credentials page. You can edit a credential by clicking Edit . To delete a credential, click Delete . Note You cannot delete a credential that has already been assigned to a Jira connection instance. 5.2. Creating and configuring a Jira connection To create a Jira instance in MTA and establish a connection to that instance, follow the procedure below. Procedure In Administration view, under Issue Management , click Jira . The Jira configuration page opens. Click Create new . The New instance window opens. Enter the following information: Name of the instance URL of the web interface of your Jira account Instance type - select either Jira Cloud or Jira Server/Data center from the list Credentials - select from the list Note If the selected instance type is Jira Cloud , only Basic Auth credentials are displayed in the list. If the selected instance type is Jira Server/Data center , both Basic Auth and Token Bearer credentials are displayed. Choose the type that is appropriate for the particular configuration of your Jira server or data center. By default, a connection cannot be established with a server with an invalid certificate. To override this restriction, toggle the Enable insecure communication switch. Click Create . The new connection instance appears on the Jira configuration page. Once the connection has been established and authorized, the status in the Connection column becomes Connected . If the Connection status becomes Not connected , click the status to see the reason for the error. The Jira configuration table has filtering by Name and URL and sorting by Instance name and URL . Note A Jira connection that was used for creating issues for a migration wave cannot be removed as long as the issues exist in Jira, even after the migration wave is deleted. 
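If the Connection status stays Not connected, it can be useful to check the credentials directly against the Jira REST API before troubleshooting MTA itself. The hostnames below are placeholders; the /rest/api/2/myself endpoint simply returns the authenticated user when the credentials are valid.

# Basic Auth (Jira Cloud): email plus API token
curl -s -u "[email protected]:<api-token>" https://your-domain.atlassian.net/rest/api/2/myself

# Bearer Token (Jira Server/Data Center): personal access token
curl -s -H "Authorization: Bearer <token>" https://jira.example.com/rest/api/2/myself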
| null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/user_interface_guide/creating-configuring-jira-connection |
8.3. Solid-State Disks | 8.3. Solid-State Disks Solid-state disks (SSD) use NAND flash chips rather than rotating magnetic platters to store persistent data. They provide a constant access time for data across their full Logical Block Address range, and do not incur measurable seek costs like their rotating counterparts. They are more expensive per gigabyte of storage space and have a lesser storage density, but they also have lower latency and greater throughput than HDDs. Performance generally degrades as the used blocks on an SSD approach the capacity of the disk. The degree of degradation varies by vendor, but all devices experience degradation in this circumstance. Enabling discard behavior can help to alleviate this degradation. For details, see Section 8.1.3.3, "Maintenance" . The default I/O scheduler and virtual memory options are suitable for use with SSDs. For more information on SSD, see the Solid-State Disk Deployment Guidelines chapter in the Red Hat Enterprise Linux 7 Storage Administration Guide . SSD Tuning Considerations Consider the following factors when configuring settings that can affect SSD performance: I/O Scheduler Any I/O scheduler is expected to perform well with most SSDs. However, as with any other storage type, Red Hat recommends benchmarking to determine the optimal configuration for a given workload. When using SSDs, Red Hat advises changing the I/O scheduler only for benchmarking particular workloads. For instructions on how to switch between I/O schedulers, see the /usr/share/doc/kernel- version /Documentation/block/switching-sched.txt file. As of Red Hat Enterprise Linuxnbsp 7.0, the default I/O scheduler is Deadline, except for use with SATA drives, which use CFQ as the default I/O scheduler. For faster storage, Deadline can outperform CFQ leading to better I/O performance without the need for specific tuning. Sometimes, the default is not suitable for certain disks, such as SAS rotational disks. In such cases, change the I/O scheduler to CFQ. Virtual Memory Like the I/O scheduler, virtual memory (VM) subsystem requires no special tuning. Given the fast nature of I/O on SSD, try turning down the vm_dirty_background_ratio and vm_dirty_ratio settings, as increased write-out activity does not usually have a negative impact on the latency of other operations on the disk. However, this tuning can generate more overall I/O , and is therefore not generally recommended without workload-specific testing. Swap An SSD can also be used as a swap device, and is likely to produce good page-out and page-in performance. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-considerations-solid_state_disks |
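The scheduler and virtual memory settings discussed above can be inspected and changed at runtime before committing to a persistent configuration. The device name sdX is a placeholder, and the sysctl values are illustrative starting points for benchmarking rather than recommendations.

# Show the available I/O schedulers for a device; the active one is shown in brackets
cat /sys/block/sdX/queue/scheduler

# Switch the device to the deadline scheduler for the current boot
echo deadline > /sys/block/sdX/queue/scheduler

# Temporarily lower the dirty-page ratios while benchmarking an SSD-backed workload
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=20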
Chapter 10. Viewing and managing JMX domains and MBeans | Chapter 10. Viewing and managing JMX domains and MBeans Java Management Extensions (JMX) is a Java technology that allows you to manage resources (services, devices, and applications) dynamically at runtime. The resources are represented by objects called MBeans (for Managed Bean). You can manage and monitor resources as soon as they are created, implemented, or installed. With the JMX plugin on the Fuse Console, you can view and manage JMX domains and MBeans. You can view MBean attributes, run commands, and create charts that show statistics for the MBeans. The JMX tab provides a tree view of the active JMX domains and MBeans organized in folders. You can view details and execute commands on the MBeans. Procedure To view and edit MBean attributes: In the tree view, select an MBean. Click the Attributes tab. Click an attribute to see its details. To perform operations: In the tree view, select an MBean. Click the Operations tab, expand one of the listed operations. Click Execute to run the operation. To view charts: In the tree view, select an item. Click the Chart tab. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_karaf_standalone/fuse-console-view-jmx-all_karaf |
Observability overview | Observability overview OpenShift Container Platform 4.16 Contains information about CI/CD for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/observability_overview/index |